query_id: stringlengths, 32–32
query: stringlengths, 7–2.91k
positive_passages: listlengths, 1–7
negative_passages: listlengths, 10–100
subset: stringclasses, 7 values
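Each row pairs one query with a small set of positive passages and a larger pool of negatives, which is the shape typically consumed when training or evaluating a passage retriever. As a reading aid, the sketch below shows one way such a record could be flattened into labelled (query, passage) pairs; it is an illustrative assumption rather than code that ships with the dataset, and the example values are abbreviated copies of the first row.

```python
# Minimal sketch (not part of the dataset) of flattening one record with the
# schema above into labelled training pairs for a retriever.
from typing import List, Tuple


def to_training_pairs(record: dict) -> List[Tuple[str, str, int]]:
    """Turn one row into (query, passage_text, label) pairs:
    label 1 for positive passages, 0 for negatives."""
    query = record["query"]
    pairs = [(query, p["text"], 1) for p in record["positive_passages"]]
    pairs += [(query, p["text"], 0) for p in record["negative_passages"]]
    return pairs


if __name__ == "__main__":
    # Toy record shaped like the rows below (texts abbreviated).
    example = {
        "query_id": "5a4098d72885cbcbcffd0f1fb7eb6091",
        "query": "The beliefs behind the teacher that influences their ICT practices",
        "positive_passages": [
            {"docid": "ecddd4f80f417dcec49021065394c89a",
             "text": "Research in the area of educational technology ...", "title": ""}
        ],
        "negative_passages": [
            {"docid": "2683c65d587e8febe45296f1c124e04d",
             "text": "We present a new autoencoder-type architecture ...", "title": ""}
        ],
        "subset": "scidocsrr",
    }
    for query, text, label in to_training_pairs(example):
        print(label, "|", query, "->", text[:60])
```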
5a4098d72885cbcbcffd0f1fb7eb6091
The beliefs behind the teacher that influences their ICT practices
[ { "docid": "ecddd4f80f417dcec49021065394c89a", "text": "Research in the area of educational technology has often been critiqued for a lack of theoretical grounding. In this article we propose a conceptual framework for educational technology by building on Shulman’s formulation of ‘‘pedagogical content knowledge’’ and extend it to the phenomenon of teachers integrating technology into their pedagogy. This framework is the result of 5 years of work on a program of research focused on teacher professional development and faculty development in higher education. It attempts to capture some of the essential qualities of teacher knowledge required for technology integration in teaching, while addressing the complex, multifaceted, and situated nature of this knowledge. We argue, briefly, that thoughtful pedagogical uses of technology require the development of a complex, situated form of knowledge that we call Technological Pedagogical Content Knowledge (TPCK). In doing so, we posit the complex roles of, and interplay among, three main components of learning environments: content, pedagogy, and technology. We argue that this model has much to offer to discussions of technology integration at multiple levels: theoretical, pedagogical, and methodological. In this article, we describe the theory behind our framework, provide examples of our teaching approach based upon the framework, and illustrate the methodological contributions that have resulted from this work.", "title": "" }, { "docid": "c17e6363762e0e9683b51c0704d43fa7", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" } ]
[ { "docid": "2683c65d587e8febe45296f1c124e04d", "text": "We present a new autoencoder-type architecture, that is trainable in an unsupervised mode, sustains both generation and inference, and has the quality of conditional and unconditional samples boosted by adversarial learning. Unlike previous hybrids of autoencoders and adversarial networks, the adversarial game in our approach is set up directly between the encoder and the generator, and no external mappings are trained in the process of learning. The game objective compares the divergences of each of the real and the generated data distributions with the canonical distribution in the latent space. We show that direct generator-vs-encoder game leads to a tight coupling of the two components, resulting in samples and reconstructions of a comparable quality to some recently-proposed more complex architectures.", "title": "" }, { "docid": "4f096ba7fc6164cdbf5d37676d943fa8", "text": "This work presents an intelligent clothes search system based on domain knowledge, targeted at creating a virtual assistant to search clothes matched to fashion and userpsila expectation using all what have already been in real closet. All what garment essentials and fashion knowledge are from visual images. Users can simply submit the desired image keywords, such as elegant, sporty, casual, and so on, and occasion type, such as formal meeting, outdoor dating, and so on, to the system. And then the fashion style recognition module is activated to search the desired clothes within the personal garment database. Category learning with supervised neural networking is applied to cluster garments into different impression groups. The input stimuli of the neural network are three sensations, warmness, loudness, and softness, which are transformed from the physical garment essentials like major color tone, print type, and fabric material. The system aims to provide such an intelligent user-centric services system functions as a personal fashion advisor.", "title": "" }, { "docid": "1a9e2481abf23501274e67575b1c9be6", "text": "The multiple criteria decision making (MCDM) methods VIKOR and TOPSIS are based on an aggregating function representing “closeness to the idealâ€​, which originated in the compromise programming method. In VIKOR linear normalization and in TOPSIS vector normalization is used to eliminate the units of criterion functions. The VIKOR method of compromise ranking determines a compromise solution, providing a maximum “group utilityâ€​ for the “majorityâ€​ and a minimum of an individual regret for the “opponentâ€​. The TOPSIS method determines a solution with the shortest distance to the ideal solution and the greatest distance from the negative-ideal solution, but it does not consider the relative importance of these distances. A comparative analysis of these two methods is illustrated with a numerical example, showing their similarity and some differences. a, 1 b Purchase Export Previous article Next article Check if you have access through your login credentials or your institution.", "title": "" }, { "docid": "71aae4cbccf6d3451d35528ceca8b8a9", "text": "We propose Hierarchical Space-Time Segments as a new representation for action recognition and localization. This representation has a two-level hierarchy. The first level comprises the root space-time segments that may contain a human body. The second level comprises multi-grained space-time segments that contain parts of the root. 
We present an unsupervised method to generate this representation from video, which extracts both static and non-static relevant space-time segments, and also preserves their hierarchical and temporal relationships. Using simple linear SVM on the resultant bag of hierarchical space-time segments representation, we attain better than, or comparable to, state-of-the-art action recognition performance on two challenging benchmark datasets and at the same time produce good action localization results.", "title": "" }, { "docid": "372c5918e55e79c0a03c14105eb50fad", "text": "Boosting is one of the most significant advances in machine learning for classification and regression. In its original and computationally flexible version, boosting seeks to minimize empirically a loss function in a greedy fashion. The resulted estimator takes an additive function form and is built iteratively by applying a base estimator (or learner) to updated samples depending on the previous iterations. An unusual regularization technique, early stopping, is employed based on CV or a test set. This paper studies numerical convergence, consistency, and statistical rates of convergence of boosting with early stopping, when it is carried out over the linear span of a family of basis functions. For general loss functions, we prove the convergence of boosting’s greedy optimization to the infimum of the loss function over the linear span. Using the numerical convergence result, we find early stopping strategies under which boosting is shown to be consistent based on iid samples, and we obtain bounds on the rates of convergence for boosting estimators. Simulation studies are also presented to illustrate the relevance of our theoretical results for providing insights to practical aspects of boosting. As a side product, these results also reveal the importance of restricting the greedy search step sizes, as known in practice through the works of Friedman and others. Moreover, our results lead to a rigorous proof that for a linearly separable problem, AdaBoost with ε → 0 stepsize becomes an L-margin maximizer when left to run to convergence.", "title": "" }, { "docid": "efc11b77182119202190f97d705b3bb7", "text": "In many E-commerce recommender systems, a special class of recommendation involves recommending items to users in a life cycle. For example, customers who have babies will shop on Diapers.com within a relatively long period, and purchase different products for babies within different growth stages. Traditional recommendation algorithms produce recommendation lists similar to items that the target user has accessed before (content filtering), or compute recommendation by analyzing the items purchased by the users who are similar to the target user (collaborative filtering). Such recommendation paradigms cannot effectively resolve the situation with a life cycle, i.e., the need of customers within different stages might vary significantly. In this paper, we model users’ behavior with life cycles by employing handcrafted item taxonomies, of which the background knowledge can be tailored for the computation of personalized recommendation. In particular, our method first formalizes a user’s long-term behavior using the item taxonomy, and then identifies the exact stage of the user. By incorporating collaborative filtering into recommendation, we can easily provide a personalized item list to the user through other similar users within the same stage.
An empirical evaluation conducted on a purchasing data collection obtained from Diapers.com demonstrates the efficacy of our proposed method. 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "b3e9c251b2da6c704da6285602773afe", "text": "It has been well established that most operating system crashes are due to bugs in device drivers. Because drivers are normally linked into the kernel address space, a buggy driver can wipe out kernel tables and bring the system crashing to a halt. We have greatly mitigated this problem by reducing the kernel to an absolute minimum and running each driver as a separate, unprivileged process in user space. In addition, we implemented a POSIX-conformant operating system as multiple user-mode processes. In this design, all that is left in kernel mode is a tiny kernel of under 3800 lines of executable code for catching interrupts, starting and stopping processes, and doing IPC. By moving nearly the entire operating system to multiple, protected user-mode processes we reduce the consequences of faults, since a driver failure no longer is fatal and does not require rebooting the computer. In fact, our system incorporates a reincarnation server that is designed to deal with such errors and often allows for full recovery, transparent to the application and without loss of data. To achieve maximum reliability, our design was guided by simplicity, modularity, least authorization, and fault tolerance. This paper discusses our lightweight approach and reports on its performance and reliability. It also compares our design to other proposals for protecting drivers using kernel wrapping and virtual machines.", "title": "" }, { "docid": "7e5d83af3c6496e41c19b36b2392f076", "text": "JavaScript is an interpreted programming language most often used for enhancing webpage interactivity and functionality. It has powerful capabilities to interact with webpage documents and browser windows, however, it has also opened the door for many browser-based security attacks. Insecure engineering practices of using JavaScript may not directly lead to security breaches, but they can create new attack vectors and greatly increase the risks of browser-based attacks. In this article, we present the first measurement study on insecure practices of using JavaScript on the Web. Our focus is on the insecure practices of JavaScript inclusion and dynamic generation, and we examine their severity and nature on 6,805 unique websites. Our measurement results reveal that insecure JavaScript practices are common at various websites: (1) at least 66.4% of the measured websites manifest the insecure practices of including JavaScript files from external domains into the top-level documents of their webpages; (2) over 44.4% of the measured websites use the dangerous eval() function to dynamically generate and execute JavaScript code on their webpages; and (3) in JavaScript dynamic generation, using the document.write() method and the innerHTML property is much more popular than using the relatively secure technique of creating script elements via DOM methods.
Our analysis indicates that safe alternatives to these insecure practices exist in common cases and ought to be adopted by website developers and administrators for reducing potential security risks.", "title": "" }, { "docid": "54e5cd296371e7e058a00b1835251242", "text": "In this paper, a quasi-millimeter-wave wideband bandpass filter (BPF) is designed by using a microstrip dual-mode ring resonator and two folded half-wavelength resonators. Based on the transmission line equivalent circuit of the filter, variations of the frequency response of the filter versus the circuit parameters are investigated first by using the derived formulas and circuit simulators. Then a BPF with a 3dB fractional bandwidth (FBW) of 20% at 25.5 GHz is designed, which realizes the desired wide passband, sharp skirt property, and very wide stopband. Finally, the designed BPF is fabricated, and its measured frequency response is found agree well with the simulated result.", "title": "" }, { "docid": "93d06eafb15063a7d17ec9a7429075f0", "text": "Non-orthogonal multiple access (NOMA) is emerging as a promising, yet challenging, multiple access technology to improve spectrum utilization for the fifth generation (5G) wireless networks. In this paper, the application of NOMA to multicast cognitive radio networks (termed as MCR-NOMA) is investigated. A dynamic cooperative MCR-NOMA scheme is proposed, where the multicast secondary users serve as relays to improve the performance of both primary and secondary networks. Based on the available channel state information (CSI), three different secondary user scheduling strategies for the cooperative MCR-NOMA scheme are presented. To evaluate the system performance, we derive the closed-form expressions of the outage probability and diversity order for both networks. Furthermore, we introduce a new metric, referred to as mutual outage probability to characterize the cooperation benefit compared to non-cooperative MCR-NOMA scheme. Simulation results demonstrate significant performance gains are obtained for both networks, thanks to the use of our proposed cooperative MCR-NOMA scheme. It is also demonstrated that higher spatial diversity order can be achieved by opportunistically utilizing the CSI available for the secondary user scheduling.", "title": "" }, { "docid": "92386ee2988b6d7b6f2f0b3cdcbf44ba", "text": "In the first part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update rule of Littlestone and Warmuth [20] can be adapted to this model yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games and prediction of points in R^n. In the second part of the paper we apply the multiplicative weight-update technique to derive a new boosting algorithm. This boosting algorithm does not require any prior knowledge about the performance of the weak learning algorithm.
We also study generalizations of the new boosting algorithm to the problem of learning functions whose range, rather than being binary, is an arbitrary finite set or a bounded segment of the real line.", "title": "" }, { "docid": "858a5ed092f02d057437885ad1387c9f", "text": "The current state-of-the-art single-document summarization method generates a summary by solving a Tree Knapsack Problem (TKP), which is the problem of finding the optimal rooted subtree of the dependency-based discourse tree (DEP-DT) of a document. We can obtain a gold DEP-DT by transforming a gold Rhetorical Structure Theory-based discourse tree (RST-DT). However, there is still a large difference between the ROUGE scores of a system with a gold DEP-DT and a system with a DEP-DT obtained from an automatically parsed RST-DT. To improve the ROUGE score, we propose a novel discourse parser that directly generates the DEP-DT. The evaluation results showed that the TKP with our parser outperformed that with the state-of-the-art RST-DT parser, and achieved almost equivalent ROUGE scores to the TKP with the gold DEP-DT.", "title": "" }, { "docid": "49329aef5ac732cc87b3cc78520c7ff5", "text": "This paper surveys the previous and ongoing research on surface electromyogram (sEMG) signal processing implementation through various hardware platforms. The development of system that incorporates sEMG analysis capability is essential in rehabilitation devices, prosthesis arm/limb and pervasive healthcare in general. Most advanced EMG signal processing algorithms rely heavily on computational resource of a PC that negates the elements of portability, size and power dissipation of a pervasive healthcare system. Signal processing techniques applicable to sEMG are discussed with aim for proper execution in platform other than full-fledge PC. Performance and design parameters issues in some hardware implementation are also being pointed up. The paper also outlines the trends and alternatives solutions in developing portable and efficient EMG signal processing hardware.", "title": "" }, { "docid": "1785d1d7da87d1b6e5c41ea89e447bf9", "text": "Web usage mining is the application of data mining techniques to discover usage patterns from Web data, in order to understand and better serve the needs of Web-based applications. Web usage mining consists of three phases, namely preprocessing, pattern discovery, and pattern analysis. This paper describes each of these phases in detail. Given its application potential, Web usage mining has seen a rapid increase in interest, from both the research and practice communities. This paper provides a detailed taxonomy of the work in this area, including research efforts as well as commercial offerings. An up-to-date survey of the existing work is also provided. Finally, a brief overview of the WebSIFT system as an example of a prototypical Web usage mining system is given.", "title": "" }, { "docid": "18e1f1171844fa27905246b9246cc975", "text": "Autonomous robots must be able to learn and maintain models of their environments. Research on mobile robot navigation has produced two major paradigms for mapping indoor environments: grid-based and topological. While grid-based methods produce accurate metric maps, their complexity often prohibits efficient planning and problem solving in large-scale indoor environments.
Topological maps, on the other hand, can be used much more efficiently, yet accurate and consistent topological maps are often difficult to learn and maintain in large-scale environments, particularly if momentary sensor data is highly ambiguous. This paper describes an approach that integrates both paradigms: grid-based and topological. Grid-based maps are learned using artificial neural networks and naive Bayesian integration. Topological maps are generated on top of the grid-based maps, by partitioning the latter into coherent regions. By combining both paradigms, the approach presented here gains advantages from both worlds: accuracy/consistency and efficiency. The paper gives results for autonomous exploration, mapping and operation of a mobile robot in populated multi-room environments. @ 1998 Elsevier Science B.V.", "title": "" }, { "docid": "60182038191a764fd7070e8958185718", "text": "Shales of very low metamorphic grade from the 2.78 to 2.45 billion-year-old (Ga) Mount Bruce Supergroup, Pilbara Craton, Western Australia, were analyzed for solvent extractable hydrocarbons. Samples were collected from ten drill cores and two mines in a sampling area centered in the Hamersley Basin near Wittenoom and ranging 200 km to the southeast, 100 km to the southwest and 70 km to the northwest. Almost all analyzed kerogenous sedimentary rocks yielded solvent extractable organic matter. Concentrations of total saturated hydrocarbons were commonly in the range of 1 to 20 ppm (μg/g rock) but reached maximum values of 1000 ppm. The abundance of aromatic hydrocarbons was 1 to 30 ppm. Analysis of the extracts by gas chromatography-mass spectrometry (GC-MS) and GC-MS metastable reaction monitoring (MRM) revealed the presence of n-alkanes, mid- and end-branched monomethylalkanes, -cyclohexylalkanes, acyclic isoprenoids, diamondoids, tri- to pentacyclic terpanes, steranes, aromatic steroids and polyaromatic hydrocarbons. Neither plant biomarkers nor hydrocarbon distributions indicative of Phanerozoic contamination were detected. The host kerogens of the hydrocarbons were depleted in 13C by 2 to 21‰ relative to n-alkanes, a pattern typical of, although more extreme than, other Precambrian samples. Acyclic isoprenoids showed carbon isotopic depletion relative to n-alkanes and concentrations of 2α-methylhopanes were relatively high, features rarely observed in the Phanerozoic but characteristic of many other Precambrian bitumens. Molecular parameters, including sterane and hopane ratios at their apparent thermal maxima, condensate-like alkane profiles, high mono- and triaromatic steroid maturity parameters, high methyladamantane and methyldiamantane indices and high methylphenanthrene maturity ratios, indicate thermal maturities in the wet-gas generation zone. Additionally, extracts from shales associated with iron ore deposits at Tom Price and Newman have unusual polyaromatic hydrocarbon patterns indicative of pyrolytic dealkylation. The saturated hydrocarbons and biomarkers in bitumens from the Fortescue and Hamersley Groups are characterized as ‘probably syngenetic with their Archean host rock’ based on their typical Precambrian molecular and isotopic composition, extreme maturities that appear consistent with the thermal history of the host sediments, the absence of biomarkers diagnostic of Phanerozoic age, the absence of younger petroleum source rocks in the basin and the wide geographic distribution of the samples.
Aromatic hydrocarbons detected in shales associated with iron ore deposits at Mt Tom Price and Mt Whaleback are characterized as ‘clearly Archean’ based on their hypermature composition and covalent bonding to kerogen. Copyright © 2003 Elsevier Ltd", "title": "" }, { "docid": "9b942a1342eb3c4fd2b528601fa42522", "text": "Peer and self-assessment offer an opportunity to scale both assessment and learning to global classrooms. This article reports our experiences with two iterations of the first large online class to use peer and self-assessment. In this class, peer grades correlated highly with staff-assigned grades. The second iteration had 42.9% of students’ grades within 5% of the staff grade, and 65.5% within 10%. On average, students assessed their work 7% higher than staff did. Students also rated peers’ work from their own country 3.6% higher than those from elsewhere. We performed three experiments to improve grading accuracy. We found that giving students feedback about their grading bias increased subsequent accuracy. We introduce short, customizable feedback snippets that cover common issues with assignments, providing students more qualitative peer feedback. Finally, we introduce a data-driven approach that highlights high-variance items for improvement. We find that rubrics that use a parallel sentence structure, unambiguous wording, and well-specified dimensions have lower variance. After revising rubrics, median grading error decreased from 12.4% to 9.9%.", "title": "" }, { "docid": "14bcbfcb6e7165e67247453944f37ac0", "text": "This study investigated whether psychologists' confidence in their clinical decisions is really justified. It was hypothesized that as psychologists study information about a case (a) their confidence about the case increases markedly and steadily but (b) the accuracy of their conclusions about the case quickly reaches a ceiling. 32 judges, including 8 clinical psychologists, read background information about a published case, divided into 4 sections. After reading each section of the case, judges answered a set of 25 questions involving personality judgments about the case. Results strongly supported the hypotheses. Accuracy did not increase significantly with increasing information, but confidence increased steadily and significantly. All judges except 2 became overconfident, most of them markedly so. Clearly, increasing feelings of confidence are not a sure sign of increasing predictive accuracy about a case.", "title": "" }, { "docid": "1d1ba5f131c9603fe3d919ad493a6dc1", "text": "By its very nature, software development consists of many knowledge-intensive processes. One of the most difficult to model, however, is requirements elicitation. This paper presents a mathematical model of the requirements elicitation process that clearly shows the critical role of knowledge in its performance. One metaprocess of requirements elicitation, selection of an appropriate elicitation technique, is also captured in the model. The values of this model are: (1) improved understanding of what needs to be performed during elicitation helps analysts improve their elicitation efforts, (2) improved understanding of how elicitation techniques are selected helps less experienced analysts be as successful as more experienced analysts, and (3) as we improve our ability to perform elicitation, we improve the likelihood that the systems we create will meet their intended customers’ needs. Many papers have been written that promulgate specific elicitation methods.
A few have been written that model elicitation in general. However, none have yet to model elicitation in a way that makes clear the critical role played by knowledge. This paper’s model captures the critical roles played by knowledge in both elicitation and elicitation technique selection.", "title": "" }, { "docid": "632fc99930154b2caaa83254a0cc3c52", "text": "Article history: Received 1 May 2012 Received in revised form 1 May 2014 Accepted 3 May 2014 Available online 10 May 2014", "title": "" } ]
scidocsrr
e173053430b9612fdc518cf83dc7a7d2
Novel Leakage Detection by Ensemble CNN-SVM and Graph-Based Localization in Water Distribution Systems
[ { "docid": "3f45d5b611b59e0bcaa0ff527d11f5af", "text": "Ensemble methods use multiple models to get better performance. Ensemble methods have been used in multiple research fields such as computational intelligence, statistics and machine learning. This paper reviews traditional as well as state-of-the-art ensemble methods and thus can serve as an extensive summary for practitioners and beginners. The ensemble methods are categorized into conventional ensemble methods such as bagging, boosting and random forest, decomposition methods, negative correlation learning methods, multi-objective optimization based ensemble methods, fuzzy ensemble methods, multiple kernel learning ensemble methods and deep learning based ensemble methods. Variations, improvements and typical applications are discussed. Finally this paper gives some recommendations for future research directions.", "title": "" }, { "docid": "b9d25bdbb337a9d16a24fa731b6b479d", "text": "The implementation of effective strategies to manage leaks represents an essential goal for all utilities involved with drinking water supply in order to reduce water losses affecting urban distribution networks. This study concerns the early detection of leaks occurring in small-diameter customers’ connections to water supply networks. An experimental campaign was carried out in a test bed to investigate the sensitivity of Acoustic Emission (AE) monitoring to water leaks. Damages were artificially induced on a polyethylene pipe (length 28 m, outer diameter 32 mm) at different distances from an AE transducer. Measurements were performed in both unburied and buried pipe conditions. The analysis permitted the identification of a clear correlation between three monitored parameters (namely total Hits, Cumulative Counts and Cumulative Amplitude) and the characteristics of the examined leaks.", "title": "" }, { "docid": "a9931e49d853b5c35735bb7770ceeee1", "text": "Human activity recognition involves classifying times series data, measured at inertial sensors such as accelerometers or gyroscopes, into one of pre-defined actions. Recently, convolutional neural network (CNN) has established itself as a powerful technique for human activity recognition, where convolution and pooling operations are applied along the temporal dimension of sensor signals. In most of existing work, 1D convolution operation is applied to individual univariate time series, while multi-sensors or multi-modality yield multivariate time series. 2D convolution and pooling operations are applied to multivariate time series, in order to capture local dependency along both temporal and spatial domains for uni-modal data, so that it achieves high performance with less number of parameters compared to 1D operation. However for multi-modal data existing CNNs with 2D operation handle different modalities in the same way, which cause interferences between characteristics from different modalities. In this paper, we present CNNs (CNN-pf and CNN-pff), especially CNN-pff, for multi-modal data. We employ both partial weight sharing and full weight sharing for our CNN models in such a way that modality-specific characteristics as well as common characteristics across modalities are learned from multi-modal (or multi-sensor) data and are eventually aggregated in upper layers. Experiments on benchmark datasets demonstrate the high performance of our CNN models, compared to state of the arts methods.", "title": "" } ]
[ { "docid": "e730935b097cb4c4f36221d774d2e63a", "text": "This paper outlines key design principles of Scilla—an intermediatelevel language for verified smart contracts. Scilla provides a clean separation between the communication aspect of smart contracts on a blockchain, allowing for the rich interaction patterns, and a programming component, which enjoys principled semantics and is amenable to formal verification. Scilla is not meant to be a high-level programming language, and we are going to use it as a translation target for high-level languages, such as Solidity, for performing program analysis and verification, before further compilation to an executable bytecode. We describe the automata-based model of Scilla, present its programming component and show how contract definitions in terms of automata streamline the process of mechanised verification of their safety and temporal properties.", "title": "" }, { "docid": "972fe2e08a7317a674115e361eed898f", "text": "The topic of emotions in the workplace is beginning to garner closer attention by researchers and theorists. The study of emotional labor addresses the stress of managing emotions when the work role demands that certain expressions be shown to customers. However, there has been no overarching framework to guide this work, and the previous studies have often disagreed on the definition and operationalization of emotional labor. The purposes of this article are as follows: to review and compare previous perspectives of emotional labor, to provide a definition of emotional labor that integrates these perspectives, to discuss emotion regulation as a guiding theory for understanding the mechanisms of emotional labor, and to present a model of emotional labor that includes individual differences (such as emotional intelligence) and organizational factors (such as supervisor support).", "title": "" }, { "docid": "493b22055a1b9bda564c2c1ae6727cba", "text": "Earlier studies have introduced a list of high-level evaluation criteria to assess how well a language supports generic programming. Since each language that meets all criteria is considered generic, those criteria are not fine-grained enough to differentiate between languages for generic programming. We refine these criteria into a taxonomy that captures differences between type classes in Haskell and concepts in C++, and discuss which differences are incidental and which ones are due to other language features. The taxonomy allows for an improved understanding of language support for generic programming, and the comparison is useful for the ongoing discussions among language designers and users of both languages.", "title": "" }, { "docid": "3bf252bcb0953016cc5a834d9d9325d3", "text": "This paper proposes a digital phase leading filter current compensation (PLFCC) technique for a continuous conduction mode boost power factor correction to improve PF in high line voltage and light load conditions. The proposed technique provides a corrected average inductor current reference and utilizes an enhanced duty ratio feed-forward technique which can cancel the adverse effect of the phase leading currents caused by filter capacitors. Moreover, the proposed PLFCC technique also provides the switching dead-zone in nature so the switching loss can be reduced. Therefore, the proposed PLFCC can significantly improve power quality and can achieve a high efficiency in high line voltage and light load conditions. 
The principle and analysis of the proposed PLFCC are presented, and performance and feasibility are verified by experimental results from the universal input (90-260 VAC) and 750 W-400 V output laboratory prototype.", "title": "" }, { "docid": "ba5b5732dd7c48874e4f216903bba0b1", "text": "This article presents a review of the application of insole plantar pressure sensor system in recognition and analysis of the hemiplegic gait in stroke patients. Based on the review, tailor made 3D insoles for plantar pressure measurement were designed and fabricated. The function is to compare with that of conventional flat insoles. Tailor made 3D contour of the insole can improve the contact between insole and foot and enable sampling plantar pressure at a high reproducibility.", "title": "" }, { "docid": "456b7ad01115d9bc04ca378f1eb6d7f2", "text": "Article history: Received 13 October 2007 Received in revised form 12 June 2008 Accepted 31 July 2008", "title": "" }, { "docid": "99574bec7125cfa9e2ebc19bb6bb4bf5", "text": "Health care delivery and education has become a challenge for providers. Nurses and other professionals are challenged daily to assure that the patient has the necessary information to make informed decisions. Patients and their families are given a multitude of information about their health and commonly must make important decisions from these facts. Obstacles that prevent easy delivery of health care information include literacy, culture, language, and physiological barriers. It is up to the nurse to assess and evaluate the patient's learning needs and readiness to learn because everyone learns differently. This article will examine how each of these barriers impact care delivery along with teaching and learning strategies will be examined.", "title": "" }, { "docid": "8b7d3410e279f335f3ed5c6d6e9b60bc", "text": "A wideband patch antenna loaded with a planar metamaterial unit cell is proposed. The metamaterial unit cell is composed of an interdigital capacitor and a complementary split-ring resonator (CSRR) slot. A dispersion analysis of the metamaterial unit cell reveals that an increase in series capacitance can decrease the half-wavelength resonance frequency, thus reducing the electrical size of the proposed antenna. In addition, circulating current distributions around the CSRR slot with increased interdigital finger length bring about the TM01 mode radiation, while the normal radiation mode is the TM10 mode. Furthermore, the TM01 mode can be combined with the TM10 mode without a pattern distortion. The hybridization of the two modes yields a wideband property (6.8%) and a unique radiation pattern that is comparable with two independent dipole antennas positioned orthogonally. Also, the proposed antenna achieves high efficiency (96%) and reasonable gain (3.85 dBi), even though the electrical size of the antenna is only 0.24λ0×0.24λ0×0.02λ0.", "title": "" }, { "docid": "2701f46ac9a473cb809773df5ae1d612", "text": "Testing and measuring the security of software system architectures is a difficult task. An attempt is made in this paper to analyze the issues of architecture security of object-oriented software's using common security concepts to evaluate the security of a system under design. Object oriented systems are based on various architectures like COM, DCOM, CORBA, MVC and Broker. In object oriented technology the basic system component is an object. Individual system component is posing its own risk in the system.
Security policies and the associated risk in these software architectures can be calculated for the individual component. Overall risk can be calculated based on the context and risk factors in the architecture. Small risk factors get accumulated together and form a major risk in the systems and can damage the systems.", "title": "" }, { "docid": "5e0921d158f0fa7b299fffba52f724d5", "text": "Space syntax derives from a set of analytic measures of configuration that have been shown to correlate well with how people move through and use buildings and urban environments. Space syntax represents the open space of an environment in terms of the intervisibility of points in space. The measures are thus purely configurational, and take no account of attractors, nor do they make any assumptions about origins and destinations or path planning. Space syntax has found that, despite many proposed higher-level cognitive models, there appears to be a fundamental process that informs human and social usage of an environment. In this paper we describe an exosomatic visual architecture, based on space syntax visibility graphs, giving many agents simultaneous access to the same pre-processed information about the configuration of a space layout. Results of experiments in a simulated retail environment show that a surprisingly simple ‘random next step’ based rule outperforms a more complex ‘destination based’ rule in reproducing observed human movement behaviour. We conclude that the effects of spatial configuration on movement patterns that space syntax studies have found are consistent with a model of individual decision behaviour based on the spatial affordances offered by the morphology of the local visual field.", "title": "" }, { "docid": "8b66ffe2afae5f1f46b7803d80422248", "text": "This paper describes the torque production capabilities of electrical machines with planar windings and presents an automated procedure for coils conductors' arrangement. The procedure has been applied on an ironless axial flux slotless permanent magnet machines having stator windings realized using printed circuit board (PCB) coils. An optimization algorithm has been implemented to find a proper arrangement of PCB traces in order to find the best compromise between the maximization of average torque and the minimization of torque ripple. A time-efficient numerical model has been developed to reduce computational load and thus make the optimization based design feasible.", "title": "" }, { "docid": "035feb63adbe5f83b691e8baf89629cc", "text": "In this article we study the problem of document image representation based on visual features. We propose a comprehensive experimental study that compares three types of visual document image representations: (1) traditional so-called shallow features, such as the RunLength and the Fisher-Vector descriptors, (2) deep features based on Convolutional Neural Networks, and (3) features extracted from hybrid architectures that take inspiration from the two previous ones. We evaluate these features in several tasks (i.e. classification, clustering, and retrieval) and in different setups (e.g. domain transfer) using several public and in-house datasets. Our results show that deep features generally outperform other types of features when there is no domain shift and the new task is closely related to the one used to train the model.
However, when a large domain or task shift is present, the Fisher-Vector shallow features generalize better and often obtain the best results.", "title": "" }, { "docid": "6ee2d94f0ccebbb05df2ea4b79b30976", "text": "Received: 25 June 2013 Revised: 11 October 2013 Accepted: 25 November 2013 Abstract This paper distinguishes and contrasts two design science research strategies in information systems. In the first strategy, a researcher constructs or builds an IT meta-artefact as a general solution concept to address a class of problem. In the second strategy, a researcher attempts to solve a client’s specific problem by building a concrete IT artefact in that specific context and distils from that experience prescriptive knowledge to be packaged into a general solution concept to address a class of problem. The two strategies are contrasted along 16 dimensions representing the context, outcomes, process and resource requirements. European Journal of Information Systems (2015) 24(1), 107–115. doi:10.1057/ejis.2013.35; published online 7 January 2014", "title": "" }, { "docid": "91e0722c00b109d7db137fb3468c088a", "text": "This paper proposes a novel flexible piezoelectric micro-machined ultrasound transducer, which is based on PZT and a polyimide substrate. The transducer is made on the polyimide substrate and packaged with medical polydimethylsiloxane. Instead of etching the PZT ceramic, this paper proposes a method of putting diced PZT blocks into holes on the polyimide which are pre-etched. The device works in d31 mode and the electromechanical coupling factor is 22.25%. Its flexibility, good conformal contacting with skin surfaces and proper resonant frequency make the device suitable for heart imaging. The flexible packaging ultrasound transducer also has a good waterproof performance after hundreds of ultrasonic electric tests in water. It is a promising ultrasound transducer and will be an effective supplementary ultrasound imaging method in the practical applications.", "title": "" }, { "docid": "6d44c4244064634deda30a5059acd87e", "text": "Currently, gene sequence genealogies of the Oligotrichea Bütschli, 1889 comprise only few species. Therefore, a cladistic approach, especially to the Oligotrichida, was made, applying Hennig's method and computer programs. Twenty-three characters were selected and discussed, i.e., the morphology of the oral apparatus (five characters), the somatic ciliature (eight characters), special organelles (four characters), and ontogenetic particulars (six characters). Nine of these characters developed convergently twice. Although several new features were included into the analyses, the cladograms match other morphological trees in the monophyly of the Oligotrichea, Halteriia, Oligotrichia, Oligotrichida, and Choreotrichida. The main synapomorphies of the Oligotrichea are the enantiotropic division mode and the de novo-origin of the undulating membranes. Although the sister group relationship of the Halteriia and the Oligotrichia contradicts results obtained by gene sequence analyses, no morphologic, ontogenetic or ultrastructural features were found, which support a branching of Halteria grandinella within the Stichotrichida. The cladistic approaches suggest paraphyly of the family Strombidiidae probably due to the scarce knowledge.
A revised classification of the Oligotrichea is suggested, including all sufficiently known families and genera.", "title": "" }, { "docid": "4cfc991f626f6fc9d131514985863127", "text": "Simultaneous recordings from large neural populations are becoming increasingly common. An important feature of population activity is the trial-to-trial correlated fluctuation of spike train outputs from recorded neuron pairs. Similar to the firing rate of single neurons, correlated activity can be modulated by a number of factors, from changes in arousal and attentional state to learning and task engagement. However, the physiological mechanisms that underlie these changes are not fully understood. We review recent theoretical results that identify three separate mechanisms that modulate spike train correlations: changes in input correlations, internal fluctuations and the transfer function of single neurons. We first examine these mechanisms in feedforward pathways and then show how the same approach can explain the modulation of correlations in recurrent networks. Such mechanistic constraints on the modulation of population activity will be important in statistical analyses of high-dimensional neural data.", "title": "" }, { "docid": "ad2a1afc5602057d76caa34abc92feba", "text": "We have developed a proprietary package that is fully compatible with variously sized chips. In this paper, we present design and development of a Quad Flat No-Lead (QFN) package. We will show how we have built and characterized low-loss packages using standard Printed Circuit Board (PCB) laminate materials. In particular, this package has been developed using Liquid Crystal Polymer (LCP). These packages are unique in that they fully account for and incorporate solder joint and ball bond wire parasitic effects into design. The package has a large cavity section that allow for a variety of chips and decoupling capacitors to be quickly and easily packaged. Insertion loss through a single package transition is measured to be less than 0.4 dB across DC to 40 GHz. Return losses are measured to be better than 15 dB up through 40 GHz. Further, a bare die low noise amplifier (LNA) is packaged using this technology and measured after being surface mounted onto PCB. The packaged LNA is measured to show 19 dB gain over 32 GHz to 44 GHz. Return loss for both bare die and packaged version show no difference, and both measure 15 dB. The LCP package LNA exhibits 4.5 dB noise figure over 37 GHz to 40 GHz. Keywords-Hybrid integrated circuit packaging, liquid crystal polymer, and microwave devices", "title": "" }, { "docid": "4d32e09258fb80eb7f79c25549f808b7", "text": "Data-efficient learning in continuous state-action spaces using very high-dimensional observations remains a key challenge in developing fully autonomous systems. In this paper, we consider one instance of this challenge, the pixels to torques problem, where an agent must learn a closed-loop control policy from pixel information only. We introduce a data-efficient, model-based reinforcement learning algorithm that learns such a closed-loop policy directly from pixel information. The key ingredient is a deep dynamical model that uses deep autoencoders to learn a low-dimensional embedding of images jointly with a predictive model in this low-dimensional feature space. Joint learning ensures that not only static but also dynamic properties of the data are accounted for.
This is crucial for long-term predictions, which lie at the core of the adaptive model predictive control strategy that we use for closed-loop control. Compared to state-of-the-art reinforcement learning methods for continuous states and actions, our approach learns quickly, scales to high-dimensional state spaces and is an important step toward fully autonomous learning from pixels to torques.", "title": "" }, { "docid": "b0afcee1ac7ce691f60302dd8298b633", "text": "With the increase of online customer opinions in specialised websites and social networks, the necessity of automatic systems to help to organise and classify customer reviews by domain-specific aspect/categories and sentiment polarity is more important than ever. Supervised approaches for Aspect Based Sentiment Analysis obtain good results for the domain/language they are trained on, but having manually labelled data for training supervised systems for all domains and languages is usually very costly and time consuming. In this work we describe W2VLDA, an almost unsupervised system based on topic modelling, that combined with some other unsupervised methods and a minimal configuration, performs aspect/category classification, aspect-terms/opinion-words separation and sentiment polarity classification for any given domain and language. We evaluate the performance of the aspect and sentiment classification in the multilingual SemEval 2016 task 5 (ABSA) dataset. We show competitive results for several languages (English, Spanish, French and Dutch) and domains (hotels, restaurants, electronic devices).", "title": "" }, { "docid": "a62c1426e09ab304075e70b61773914f", "text": "Converting a scanned or shot line drawing image into a vector graph can facilitate further edit and reuse, making it a hot research topic in computer animation and image processing. Besides avoiding noise influence, its main challenge is to preserve the topological structures of the original line drawings, such as line junctions, in the procedure of obtaining a smooth vector graph from a rough line drawing. In this paper, we propose a vectorization method of line drawings based on junction analysis, which retains the original structure unlike done by existing methods. We first combine central line tracking and contour tracking, which allows us to detect the encounter of line junctions when tracing a single path. Then, a junction analysis approach based on intensity polar mapping is proposed to compute the number and orientations of junction branches. Finally, we make use of bending degrees of contour paths to compute the smoothness between adjacent branches, which allows us to obtain the topological structures corresponding to the respective ones in the input image. We also introduce a correction mechanism for line tracking based on a quadratic surface fitting, which avoids accumulating errors of traditional line tracking and improves the robustness for vectorizing rough line drawings. We demonstrate the validity of our method through comparisons with existing methods, and a large amount of experiments on both professional and amateurish line drawing images. This paper proposes a junction-analysis-based line-drawing vectorization method that overcomes the difficulty existing methods have in preserving topological structure. By combining central-path tracking with contour-path tracking, the occurrence of junctions is detected accurately; a junction-analysis method based on polar intensity mapping is proposed to compute the number and orientations of junction branches; and the bending angles of contour paths are used to judge the smoothness between adjacent branches, so that a topological structure consistent with the original image is obtained.", "title": "" } ]
scidocsrr
194eb4db59d2578c68acf2278f07f7aa
Visualizing Workload and Emotion Data in Air Traffic Control - An Approach Informed by the Supervisors Decision Making Process
[ { "docid": "9089a8cc12ffe163691d81e319ec0f25", "text": "Complex problem solving (CPS) emerged in the last 30 years in Europe as a new part of the psychology of thinking and problem solving. This paper introduces into the field and provides a personal view. Also, related concepts like macrocognition or operative intelligence will be explained in this context. Two examples for the assessment of CPS, Tailorshop and MicroDYN, are presented to illustrate the concept by means of their measurement devices. Also, the relation of complex cognition and emotion in the CPS context is discussed. The question if CPS requires complex cognition is answered with a tentative “yes.”", "title": "" } ]
[ { "docid": "993b753e365e6a1956c425c7d0bf1a2a", "text": "Injection molding is a very complicated process to monitor and control. With its high complexity and many process parameters, the optimization of these systems is a very challenging problem. To meet the requirements and costs demanded by the market, there has been an intense development and research with the aim to maintain the process under control. This paper outlines the latest advances in necessary algorithms for plastic injection process and monitoring, and also a flexible data acquisition system that allows rapid implementation of complex algorithms to assess their correct performance and can be integrated in the quality control process. This is the main topic of this paper. Finally, to demonstrate the performance achieved by this combination, a real case of use is presented. Keywords—Plastic injection, machine learning, rapid complex algorithm prototyping.", "title": "" }, { "docid": "20662e12b45829c00c67434277ab9a26", "text": "Given the significance of placement in IC physical design, extensive research studies performed over the last 50 years addressed numerous aspects of global and detailed placement. The objectives and the constraints dominant in placement have been revised many times over, and continue to evolve. Additionally, the increasing scale of placement instances affects the algorithms of choice for high-performance tools. We survey the history of placement research, the progress achieved up to now, and outstanding challenges.", "title": "" }, { "docid": "b0747e6cbc20a8e4d9dec0ef75386701", "text": "The US Vice President, Al Gore, in a speech on the information superhighway, suggested that it could be used to remotely control a nuclear reactor. We do not have enough confidence in computer software, hardware, or networks to attempt this experiment, but have instead built a Internet-accessible, remote-controlled model car that provides a race driver's view via a video camera mounted on the model car. The remote user can see live video from the car, and, using a mouse, control the speed and direction of the car. The challenge was to build a car that could be controlled by novice users in narrow corridors, and that would work not only with the full motion video that the car natively provides, but also with the limited size and frame rate video available over the Internet multicast backbone. We have built a car that has been driven from a site 50 miles away over a 56-kbps IP link using $\\mbox{{\\tt nv}}$ format video at as little as one frame per second and at as low as $100\\times 100$ pixels resolution. We also built hardware to control the car, using a slightly modified voice grade channel videophone. Our experience leads us to believe that it is now possible to put together readily available hardware and software components to build a cheap and effective telepresence.", "title": "" }, { "docid": "dfb68d81ed159e82b6c9f2e930436e97", "text": "The last decade has seen the fields of molecular biology and genetics transformed by the development of CRISPR-based gene editing technologies. These technologies were derived from bacterial defense systems that protect against viral invasion. Elegant studies focused on the evolutionary battle between CRISPR-encoding bacteria and the viruses that infect and kill them revealed the next step in this arms race, the anti-CRISPR proteins. Investigation of these proteins has provided important new insight into how CRISPR-Cas systems work and how bacterial genomes evolve. 
They have also led to the development of important biotechnological tools that can be used for genetic engineering, including off switches for CRISPR-Cas9 genome editing in human cells.", "title": "" }, { "docid": "0bc847391ea276e19d91bdb0ab14a5e5", "text": "Modern machine learning models are beginning to rival human performance on some realistic object recognition tasks, but we still lack a full understanding of how the human brain solves this same problem. This thesis combines knowledge from machine learning and computational neuroscience to create models of human object recognition that are increasingly realistic both in their treatment of low-level neural mechanisms and in their reproduction of high-level human behaviour. First, I present extensions to the Neural Engineering Framework to make its preferred type of model—the “fixed-encoding” network—more accurate for object recognition tasks. These extensions include better distributions—such as Gabor filters—for the encoding weights, and better loss functions—namely weighted squared loss, softmax loss, and hinge loss—to solve for decoding weights. Second, I introduce increased biological realism into deep convolutional neural networks trained with backpropagation, by training them to run using spiking leaky integrate-and-fire (LIF) neurons. Convolutional neural networks have been successful in machine learning, and I am able to convert them to spiking networks while retaining similar levels of performance. I present a novel method to smooth the LIF rate response function in order to avoid the common problems associated with differentiating spiking neurons in general and LIF neurons in particular. I also derive a number of novel characterizations of spiking variability, and use these to train spiking networks to be more robust to this variability. Finally, to address the problems with implementing backpropagation in a biological system, I train spiking deep neural networks using the more biological Feedback Alignment algorithm. I examine this algorithm in depth, including many variations on the core algorithm, methods to train using non-differentiable spiking neurons, and some of the limitations of the algorithm. Using these findings, I construct a spiking model that learns online in a biologically realistic manner. The models developed in this thesis help to explain both how spiking neurons in the brain work together to allow us to recognize complex objects, and how the brain may learn this behaviour. Their spiking nature allows them to be implemented on highly efficient neuromorphic hardware, opening the door to object recognition on energy-limited devices such as cell phones and mobile robots.", "title": "" }, { "docid": "4eebd9eb516bf2fe0b89c5d684f1ff96", "text": "Psychological theories have suggested that creativity involves a twofold process characterized by a generative component facilitating the production of novel ideas and an evaluative component enabling the assessment of their usefulness. The present study employed a novel fMRI paradigm designed to distinguish between these two components at the neural level. Participants designed book cover illustrations while alternating between the generation and evaluation of ideas. The use of an fMRI-compatible drawing tablet allowed for a more natural drawing and creative environment.
Creative generation was associated with preferential recruitment of medial temporal lobe regions, while creative evaluation was associated with joint recruitment of executive and default network regions and activation of the rostrolateral prefrontal cortex, insula, and temporopolar cortex. Executive and default regions showed positive functional connectivity throughout task performance. These findings suggest that the medial temporal lobe may be central to the generation of novel ideas and creative evaluation may extend beyond deliberate analytical processes supported by executive brain regions to include more spontaneous affective and visceroceptive evaluative processes supported by default and limbic regions. Thus, creative thinking appears to recruit a unique configuration of neural processes not typically used together during traditional problem solving tasks.", "title": "" }, { "docid": "32817233f5aa05036ca292e7b57143fb", "text": "Asphalt pavement distresses have significant importance in roads and highways. This paper addresses the detection and localization of one of the key pavement distresses, the potholes using computer vision. Different kinds of pothole and non-pothole images from asphalt pavement are considered for experimentation. Considering the appearance-shape based nature of the potholes, Histograms of oriented gradients (HOG) features are computed for the input images. Features are trained and classified using Naïve Bayes classifier resulting in labeling of the input as pothole or non-pothole image. To locate the pothole in the detected pothole images, normalized graph cut segmentation scheme is employed. Proposed scheme is tested on a dataset having broad range of pavement images. Experimentation results showed 90 % accuracy for the detection of pothole images and high recall for the localization of pothole in the detected images.", "title": "" }, { "docid": "4b453a0f541d1efcd7e24dfc631aaecb", "text": "Intelligent tutoring systems (ITSs), which provide step-by-step guidance to students in complex problem-solving activities, have been shown to enhance student learning in a range of domains. However, they tend to be difficult to build. Our project investigates whether the process of authoring an ITS can be simplified, while at the same time maintaining the characteristics that make ITS effective, and also maintaining the ability to support large-scale tutor development. Specifically, our project tests whether authoring tools based on programming-by-demonstration techniques (developed in prior research) can support the development of a large-scale, real-world tutor. We are creating an open-access Web site, called Mathtutor (http://webmathtutor.org), where middle school students can solve math problems with step-by-step guidance from ITS. The Mathtutor site fields example-tracing tutors, a novel type of ITS that are built \"by demonstration,\" without programming, using the cognitive tutor authoring tools (CTATs). The project's main contribution will be that it represents a stringent test of large-scale tutor authoring through programming by demonstration. 
A secondary contribution will be that it tests whether an open-access site (i.e., a site that is widely and freely available) with software tutors for math learning can attract and sustain user interest and learning on a large scale.", "title": "" }, { "docid": "3eaec3c5f9681f131cde6dd72c3ad141", "text": "This paper proposes a novel acoustic echo suppression (AES) algorithm based on speech presence probability in a frequency domain. Double talk detection algorithm based on two cross-correlation coefficients modeled by Beta distribution controls the update of echo path response to improve the quality of near-end speech. The near-end speech presence probability combined with the Wiener gain function is used to reduce the residual echo. The performance of the proposed algorithm is evaluated by objective tests. High echo-return-loss enhancement and perceptual evaluation of speech quality (PESQ) scores are obtained by comparing with the conventional AES method.", "title": "" }, { "docid": "c04cc8c930b534d57f729d9e53fd283b", "text": "This paper presents a morphological classification of languages from the IR perspective. Linguistic typology research has shown that the morphological complexity of each language of the world can be described by two variables, index of synthesis and index of fusion. These variables provide a theoretical basis for IR research handling morphological issues. A common theoretical framework is needed in particular due to the increasing significance of cross-language retrieval research and CLIR systems processing different languages. The paper elaborates the linguistic morphological typology for the purposes of IR research. It is studied how the indices of synthesis and fusion could be used as practical tools in mono- and cross-lingual IR research. The need for semantic and syntactic typologies is discussed. The paper also reviews studies done in different languages on the effects of morphology and stemming in IR.", "title": "" }, { "docid": "b052e965bd0a28bf52d8faa6f177ed1a", "text": "Cloud computing requires comprehensive security solutions based upon many aspects of a large and loosely integrated system. The application software and databases in cloud computing are moved to the centralized large data centers, where the management of the data and services may not be fully trustworthy. Threats, vulnerabilities and risks for cloud computing are explained, and then we have designed a cloud computing security development lifecycle model to achieve safety and enable the user to take advantage of this technology as much as possible in terms of security, and to face the risks to which data may be exposed. A data integrity checking algorithm, which eliminates third-party auditing, is explained to protect static and dynamic data from unauthorized observation, modification, or interference. Keywords: Cloud Computing, Cloud Computing Security, Data Integrity, Cloud Threats, Cloud Risks", "title": "" }, { "docid": "413d0b457cc1b96bf65d8a3e1c98ed41", "text": "Peer-to-peer (P2P) lending is a fast growing financial technology (FinTech) trend that is displacing traditional retail banking. Studies on P2P lending have focused on predicting individual interest rates or default probabilities. However, the relationship between aggregated P2P interest rates and the general economy will be of interest to investors and borrowers as the P2P credit market matures. 

We show that the variation in P2P interest rates across grade types is determined by three macroeconomic latent factors formed by Canonical Correlation Analysis (CCA) — macro default, investor uncertainty, and the fundamental value of the market. However, the variation in P2P interest rates across term types cannot be explained by the general economy.", "title": "" }, { "docid": "c135c90c9af331d89982dce0b4454a87", "text": "Suicide attempts often are impulsive, yet little is known about the characteristics of impulsive suicide. We examined impulsive suicide attempts within a population-based, case-control study of nearly lethal suicide attempts among people 13-34 years of age. Attempts were considered impulsive if the respondent reported spending less than 5 minutes between the decision to attempt suicide and the actual attempt. Among the 153 case-subjects, 24% attempted impulsively. Impulsive attempts were more likely among those who had been in a physical fight and less likely among those who were depressed. Relative to control subjects, male sex, fighting, and hopelessness distinguished impulsive cases but depression did not. Our findings suggest that inadequate control of aggressive impulses might be a greater indicator of risk for impulsive suicide attempts than depression.", "title": "" }, { "docid": "8b7cc94a7284d4380537418ed9ee0f01", "text": "The subject matter of this research, employee motivation and performance, seeks to look at how best employees can be motivated in order to achieve high performance within a company or organization. Managers and entrepreneurs must ensure that companies or organizations have competent personnel capable of handling this task. This takes us to the problem question of this research: “why is money not a sufficient motivation for high performance?” This therefore establishes the fact that money is important for high performance, but there is a need to look at other aspects of motivation which are not necessarily money. Four theories were taken into consideration to give an explanation to the question raised in the problem formulation. These theories include: Maslow’s hierarchy of needs, Herzberg’s two-factor theory, John Adair’s fifty-fifty theory and Vroom’s expectancy theory. Furthermore, the performance management process is examined as a tool to measure employee and company performance. This research equally looked at the various reward systems which could be used by a company. In addition to the above, culture and organizational culture and their influence on employee behaviour within a company were also examined. An empirical study was done at Ultimate Companion Limited which represents the case study of this research work. Interviews and questionnaires were conducted to sample employee and management views on motivation and how it can increase performance at the company. Finally, a comparison of findings with theories, a discussion which raises critical issues on motivation/performance and a conclusion constitute the last part of the research. Subject headings, (keywords) Motivation, Performance, Intrinsic, Extrinsic, Incentive, Tangible and Intangible, Reward", "title": "" }, { "docid": "4a5abe07b93938e7549df068967731fc", "text": "A novel compact dual-polarized unidirectional wideband antenna based on two crossed magneto-electric dipoles is proposed. The proposed miniaturization method consists in transforming the electrical filled square dipoles into vertical folded square loops. 

The surface of the radiating element is reduced to 0.23λ0∗0.23λ0, where λ0 is the wavelength at the lowest operation frequency for a standing wave ratio (SWR) <2.5, which corresponds to a reduction factor of 48%. The antenna has been prototyped using 3D printing technology. The measured input impedance bandwidth is 51.2% from 1.7 GHz to 2.9 GHz with a Standing wave ratio (SWR) <2.", "title": "" }, { "docid": "7a5fb7d551d412fd8bdbc3183dafc234", "text": "Presentations have been an effective means of delivering information to groups for ages. Over the past few decades, technological advancements have revolutionized the way humans deliver presentations. Despite that, the quality of presentations can be varied and affected by a variety of reasons. Conventional presentation evaluation usually requires painstaking manual analysis by experts. Although the expert feedback can definitely assist users in improving their presentation skills, manual evaluation suffers from high cost and is often not accessible to most people. In this work, we propose a novel multi-sensor self-quantification framework for presentations. Utilizing conventional ambient sensors (i.e., static cameras, Kinect sensor) and the emerging wearable egocentric sensors (i.e., Google Glass), we first analyze the efficacy of each type of sensor with various nonverbal assessment rubrics, which is followed by our proposed multi-sensor presentation analytics framework. The proposed framework is evaluated on a new presentation dataset, namely NUS Multi-Sensor Presentation (NUSMSP) dataset, which consists of 51 presentations covering a diverse set of topics. The dataset was recorded with ambient static cameras, Kinect sensor, and Google Glass. In addition to multi-sensor analytics, we have conducted a user study with the speakers to verify the effectiveness of our system generated analytics, which has received positive and promising feedback.", "title": "" }, { "docid": "1d98b5bd0c7178b39b7da0e0f9586615", "text": "TDMA has been proposed as a MAC protocol for wireless sensor networks (WSNs) due to its efficiency in high WSN load. However, TDMA is plagued with shortcomings; we present modifications to TDMA that will allow for the same efficiency of TDMA, while allowing the network to conserve energy during times of low load (when there is no activity being detected). Recognizing that aggregation plays an essential role in WSNs, TDMA-ASAP adds to TDMA: (a) transmission parallelism based on a level-by-level localized graph-coloring, (b) appropriate sleeping between transmissions (\"napping\"), (c) judicious and controlled TDMA slot stealing to avoid empty slots to be unused and (d) intelligent scheduling/ordering transmissions. 
Our results show that TDMA-ASAP's unique combination of TDMA, slot-stealing, napping, and message aggregation significantly outperforms other hybrid WSN MAC algorithms and has a performance that is close to optimal in terms of energy consumption and overall delay.", "title": "" }, { "docid": "ab1b9e358d10fc091e8c7eedf4674a8a", "text": "An effective and efficient defect inspection system for TFT-LCD polarised films using adaptive thresholds and shape-based image analyses Chung-Ho Noha; Seok-Lyong Leea; Deok-Hwan Kimb; Chin-Wan Chungc; Sang-Hee Kimd a School of Industrial and Management Engineering, Hankuk University of Foreign Studies, Yonginshi, Korea b School of Electronics Engineering, Inha University, Yonghyun-dong, Incheon-shi, Korea c Division of Computer Science, KAIST, Daejeon-shi, Korea d Key Technology Research Center, Agency for Defense Development, Daejeon-shi, Korea", "title": "" }, { "docid": "d71af4267f6e54288ecff049748bcd7d", "text": "Background: The purpose of this study was to investigate the effect of a combined visual efficiency and perceptual-motor training programme on the handwriting performance of Chinese children aged 6 to 9 years with handwriting difficulties (HWD). Methods: Twenty-six children with HWD were assigned randomly and equally into two groups. The training programme was provided over eight consecutive weeks with one session per week. The perceptual-motor group received training only on perceptual-motor functions, including visual spatial relationship, visual sequential memory, visual constancy, visual closure, graphomotor control and grip control. The combined training group received additional training components on visual efficiency, including accommodation, ocular motility, and binocular fusion. Visual efficiency, visual perceptual skills, and Chinese handwriting performance were assessed before and after the training programme. Results: The results showed statistically significant improvement in handwriting speed after the training in both groups. However, the combined training gave no additional benefit on improving handwriting speed (ANCOVA: F=0.43, p=0.52). In terms of visual efficiency, participants in the combined training group showed greater improvement in amplitude of accommodation measured with right eye (F=4.34, p<0.05), left eye (F=5.77, p<0.05) and both eyes (F=11.08, p<0.01). Conclusions: Although the additional visual efficiency training did not provide further improvement in the handwriting speed of children with HWD, children showed improvement in their accommodation amplitude. As accommodative function is important for providing sustainable and clear near vision in the process of reading and word recognition for writing, the effect of the combined training on handwriting performance should be further investigated.", "title": "" }, { "docid": "d878e4bb4b17901a36c2cf7235c4568f", "text": "Cloud computing is the future generation of computational services delivered over the Internet. As cloud infrastructure expands, resource management in such a large heterogeneous and distributed environment is a challenging task. In a cloud environment, uncertainty and dispersion of resources encounters problems of allocation of resources. Unfortunately, existing resource management techniques, frameworks and mechanisms are insufficient to handle these environments, applications and resource behaviors. 
To provide an efficient performance and to execute workloads, there is a need of quality of service (QoS) based autonomic resource management approach which manages resources automatically and provides reliable, secure and cost efficient cloud services. In this paper, we present an intelligent QoS-aware autonomic resource management approach named as CHOPPER (Configuring, Healing, Optimizing and Protecting Policy for Efficient Resource management). CHOPPER offers self-configuration of applications and resources, self-healing by handling sudden failures, self-protection against security attacks and self-optimization for maximum resource utilization. We have evaluated the performance of the proposed approach in a real cloud environment and the experimental results show that the proposed approach performs better in terms of cost, execution time, SLA violation, resource contention and also provides security against attacks.", "title": "" } ]
scidocsrr
a933e9c1ab6140aac102052413b934b7
Increased nature relatedness and decreased authoritarian political views after psilocybin for treatment-resistant depression
[ { "docid": "9ff6d7a36646b2f9170bd46d14e25093", "text": "Psychedelic drugs such as LSD and psilocybin are often claimed to be capable of inducing life-changing experiences described as mystical or transcendental, especially if high doses are taken. The present study examined possible enduring effects of such experiences by comparing users of psychedelic drugs (n = 88), users of nonpsychedelic illegal drugs (e.g., marijuana, amphetamines) (n = 29) and non illicit drug-using social drinkers (n = 66) on questionnaire measures of values, beliefs and emotional empathy. Samples were obtained from Israel (n = 110) and Australia (n = 73) in a cross-cultural comparison to see if values associated with psychedelic drug use transcended culture of origin. Psychedelic users scored significantly higher on mystical beliefs (e.g., oneness with God and the universe) and life values of spirituality and concern for others than the other groups, and lower on the value of financial prosperity, irrespective of culture of origin. Users of nonpsychedelic illegal drugs scored significantly lower on a measure of coping ability than both psychedelic users and non illicit drug users. Both groups of illegal drug users scored significantly higher on empathy than non illicit drug users. Results are discussed in the context of earlier findings from Pahnke (1966) and Doblin (1991) of the transformative effect of psychedelic experiences, although the possibility remains that present findings reflect predrug characteristics of those who chose to take psychedelic drugs rather than effects of the drugs themselves.", "title": "" } ]
[ { "docid": "932934a4362bd671427954d0afb61459", "text": "On the basis of the similarity between spinel and rocksalt structures, it is shown that some spinel oxides (e.g., MgCo2O4, etc) can be cathode materials for Mg rechargeable batteries around 150 °C. The Mg insertion into spinel lattices occurs via \"intercalation and push-out\" process to form a rocksalt phase in the spinel mother phase. For example, by utilizing the valence change from Co(III) to Co(II) in MgCo2O4, Mg insertion occurs at a considerably high potential of about 2.9 V vs. Mg2+/Mg, and similarly it occurs around 2.3 V vs. Mg2+/Mg with the valence change from Mn(III) to Mn(II) in MgMn2O4, being comparable to the ab initio calculation. The feasibility of Mg insertion would depend on the phase stability of the counterpart rocksalt XO of MgO in Mg2X2O4 or MgX3O4 (X = Co, Fe, Mn, and Cr). In addition, the normal spinel MgMn2O4 and MgCr2O4 can be demagnesiated to some extent owing to the robust host structure of Mg1-xX2O4, where the Mg extraction/insertion potentials for MgMn2O4 and MgCr2O4 are both about 3.4 V vs. Mg2+/Mg. Especially, the former \"intercalation and push-out\" process would provide a safe and stable design of cathode materials for polyvalent cations.", "title": "" }, { "docid": "2833dbe3c3e576a3ba8f175a755b6964", "text": "The accuracy and granularity of network flow measurement play a critical role in many network management tasks, especially for anomaly detection. Despite its important, traffic monitoring often introduces overhead to the network, thus, operators have to employ sampling and aggregation to avoid overloading the infrastructure. However, such sampled and aggregated information may affect the accuracy of traffic anomaly detection. In this work, we propose a novel method that performs adaptive zooming in the aggregation of flows to be measured. In order to better balance the monitoring overhead and the anomaly detection accuracy, we propose a prediction based algorithm that dynamically change the granularity of measurement along both the spatial and the temporal dimensions. To control the load on each individual switch, we carefully delegate monitoring rules in the network wide. Using real-world data and three simple anomaly detectors, we show that the adaptive based counting can detect anomalies more accurately with less overhead.", "title": "" }, { "docid": "631b473342cc30360626eaea0734f1d8", "text": "Argument extraction is the task of identifying arguments, along with their components in text. Arguments can be usually decomposed into a claim and one or more premises justifying it. The proposed approach tries to identify segments that represent argument elements (claims and premises) on social Web texts (mainly news and blogs) in the Greek language, for a small set of thematic domains, including articles on politics, economics, culture, various social issues, and sports. The proposed approach exploits distributed representations of words, extracted from a large non-annotated corpus. Among the novel aspects of this work is the thematic domain itself which relates to social Web, in contrast to traditional research in the area, which concentrates mainly on law documents and scientific publications. The huge increase of social web communities, along with their user tendency to debate, makes the identification of arguments in these texts a necessity. In addition, a new manually annotated corpus has been constructed that can be used freely for research purposes. 
Evaluation results are quite promising, suggesting that distributed representations can contribute positively to the task of argument extraction.", "title": "" }, { "docid": "88a27616b16a0d643939a40685be12f1", "text": "The water supply system has a high operational cost associated with its operations. This is characteristically due to the operations of the pumps that consume significantly high amount of electric energy. In order to minimize the electric energy consumption and reduce the maintenance cost of the system, this paper proposes the use of an Adaptive Weighted sum Genetic Algorithm (AWGA) in creating an optimal pump schedule, which can minimize the cost of electricity and satisfy the constraint of the maximum and minimum levels of water in the reservoir as well. The Adaptive weighted sum GA is based on popular weighted sum approach GA for multi-objective optimization problem wherein the weights multipliers of the individual fitness functions are adaptively selected. The algorithm has been tested using a hypothetical case study and promising results have been obtained and presented.", "title": "" }, { "docid": "10ebda480df1157da5581b6219a9464a", "text": "Our goal is to create a convenient natural language interface for performing wellspecified but complex actions such as analyzing data, manipulating text, and querying databases. However, existing natural language interfaces for such tasks are quite primitive compared to the power one wields with a programming language. To bridge this gap, we start with a core programming language and allow users to “naturalize” the core language incrementally by defining alternative, more natural syntax and increasingly complex concepts in terms of compositions of simpler ones. In a voxel world, we show that a community of users can simultaneously teach a common system a diverse language and use it to build hundreds of complex voxel structures. Over the course of three days, these users went from using only the core language to using the naturalized language in 85.9% of the last 10K utterances.", "title": "" }, { "docid": "fe16f2d946b3ea7bc1169d5667365dbe", "text": "This study assessed embodied simulation via electromyography (EMG) as participants first encoded emotionally ambiguous faces with emotion concepts (i.e., \"angry,\"\"happy\") and later passively viewed the faces without the concepts. Memory for the faces was also measured. At initial encoding, participants displayed more smiling-related EMG activity in response to faces paired with \"happy\" than in response to faces paired with \"angry.\" Later, in the absence of concepts, participants remembered happiness-encoded faces as happier than anger-encoded faces. Further, during passive reexposure to the ambiguous faces, participants' EMG indicated spontaneous emotion-specific mimicry, which in turn predicted memory bias. No specific EMG activity was observed when participants encoded or viewed faces with non-emotion-related valenced concepts, or when participants encoded or viewed Chinese ideographs. From an embodiment perspective, emotion simulation is a measure of what is currently perceived. Thus, these findings provide evidence of genuine concept-driven changes in emotion perception. 
More generally, the findings highlight embodiment's role in the representation and processing of emotional information.", "title": "" }, { "docid": "ba9de90efb41ef69e64a6880e420e0ac", "text": "The emergence of chronic inflammation during obesity in the absence of overt infection or well-defined autoimmune processes is a puzzling phenomenon. The Nod-like receptor (NLR) family of innate immune cell sensors, such as the nucleotide-binding domain, leucine-rich–containing family, pyrin domain–containing-3 (Nlrp3, but also known as Nalp3 or cryopyrin) inflammasome are implicated in recognizing certain nonmicrobial originated 'danger signals' leading to caspase-1 activation and subsequent interleukin-1β (IL-1β) and IL-18 secretion. We show that calorie restriction and exercise-mediated weight loss in obese individuals with type 2 diabetes is associated with a reduction in adipose tissue expression of Nlrp3 as well as with decreased inflammation and improved insulin sensitivity. We further found that the Nlrp3 inflammasome senses lipotoxicity-associated increases in intracellular ceramide to induce caspase-1 cleavage in macrophages and adipose tissue. Ablation of Nlrp3 in mice prevents obesity-induced inflammasome activation in fat depots and liver as well as enhances insulin signaling. Furthermore, elimination of Nlrp3 in obese mice reduces IL-18 and adipose tissue interferon-γ (IFN-γ) expression, increases naive T cell numbers and reduces effector T cell numbers in adipose tissue. Collectively, these data establish that the Nlrp3 inflammasome senses obesity-associated danger signals and contributes to obesity-induced inflammation and insulin resistance.", "title": "" }, { "docid": "3c3e377d9e06499549e4de8e13e39612", "text": "A plastic ankle foot orthosis (AFO) was developed, referred to as functional ankle foot orthosis Type 2 (FAFO (II)), which can deal with genu recurvatum and the severe spastic foot in walking. Clinical trials were successful for all varus and drop feet, and for most cases of genu recurvatum. Electromyogram studies showed that the FAFO (II) reduced the spasticity of gastrocnemius and hamstring muscles and activated the quadricep muscles. Gait analysis revealed a reduction of the knee angles in the stance phase on the affected side when using the FAFO (II). Mechanical stress tests showed excellent durability of the orthosis and demonstrated its effectiveness for controlling spasticity in comparison with other types of plastic AFOs.", "title": "" }, { "docid": "1a23c0ed6aea7ba2cf4d3021de4cfa8b", "text": "This article focuses on the traffic coordination problem at traffic intersections. We present a decentralized coordination approach, combining optimal control with model-based heuristics. We show how model-based heuristics can lead to low-complexity solutions that are suitable for a fast online implementation, and analyze its properties in terms of efficiency, feasibility and optimality. Finally, simulation results for different scenarios are also presented.", "title": "" }, { "docid": "af78c57378a472c8f7be4eb354feb442", "text": "Mutations in the human sonic hedgehog gene ( SHH) are the most frequent cause of autosomal dominant inherited holoprosencephaly (HPE), a complex brain malformation resulting from incomplete cleavage of the developing forebrain into two separate hemispheres and ventricles. Here we report the clinical and molecular findings in five unrelated patients with HPE and their relatives with an identified SHH mutation. 
Three new and one previously reported SHH mutations were identified, a fifth proband was found to carry a reciprocal subtelomeric rearrangement involving the SHH locus in 7q36. An extremely wide intrafamilial phenotypic variability was observed, ranging from the classical phenotype with alobar HPE accompanied by typical severe craniofacial abnormalities to very mild clinical signs of choanal stenosis or solitary median maxillary central incisor (SMMCI) only. Two families were initially ascertained because of microcephaly in combination with developmental delay and/or mental retardation and SMMCI, the latter being a frequent finding in patients with an identified SHH mutation. In other affected family members a delay in speech acquisition and learning disabilities were the leading clinical signs. Conclusion: mutational analysis of the sonic hedgehog gene should not only be considered in patients presenting with the classical holoprosencephaly phenotype but also in those with two or more clinical signs of the wide phenotypic spectrum of associated abnormalities, especially in combination with a positive family history.", "title": "" }, { "docid": "a0ca6986d59905cea49ed28fa378c69e", "text": "The epidemic of type 2 diabetes and impaired glucose tolerance is one of the main causes of morbidity and mortality worldwide. In both disorders, tissues such as muscle, fat and liver become less responsive or resistant to insulin. This state is also linked to other common health problems, such as obesity, polycystic ovarian disease, hyperlipidaemia, hypertension and atherosclerosis. The pathophysiology of insulin resistance involves a complex network of signalling pathways, activated by the insulin receptor, which regulates intermediary metabolism and its organization in cells. But recent studies have shown that numerous other hormones and signalling events attenuate insulin action, and are important in type 2 diabetes.", "title": "" }, { "docid": "7604835dc6d7927880abcf7b91b5c405", "text": "The computational modeling of emotion has been an area of growing interest in cognitive robotics research in recent years, but also a source of contention regarding how to conceive of emotion and how to model it. In this paper, emotion is characterized as (a) closely connected to embodied cognition, (b) grounded in homeostatic bodily regulation, and (c) a powerful organizational principle—affective modulation of behavioral and cognitive mechanisms—that is ‘useful’ in both biological brains and robotic cognitive architectures. We elaborate how emotion theories and models centered on core neurological structures in the mammalian brain, and inspired by embodied, dynamical, and enactive approaches in cognitive science, may impact on computational and robotic modeling. In light of the theoretical discussion, work in progress on the development of an embodied cognitive-affective architecture for robots is presented, incorporating aspects of the theories discussed.", "title": "" }, { "docid": "5f3b787993ae1ebae34d8cee3ba1a975", "text": "Neisseria meningitidis remains an important cause of severe sepsis and meningitis worldwide. The bacterium is only found in human hosts, and so must continually coexist with the immune system. Consequently, N meningitidis uses multiple mechanisms to avoid being killed by antimicrobial proteins, phagocytes, and, crucially, the complement system. 
Much remains to be learnt about the strategies N meningitidis employs to evade aspects of immune killing, including mimicry of host molecules by bacterial structures such as capsule and lipopolysaccharide, which poses substantial problems for vaccine design. To date, available vaccines only protect individuals against subsets of meningococcal strains. However, two promising vaccines are currently being assessed in clinical trials and appear to offer good prospects for an effective means of protecting individuals against endemic serogroup B disease, which has proven to be a major challenge in vaccine research.", "title": "" }, { "docid": "b045350bfb820634046bff907419d1bf", "text": "Action recognition and human pose estimation are closely related but both problems are generally handled as distinct tasks in the literature. In this work, we propose a multitask framework for jointly 2D and 3D pose estimation from still images and human action recognition from video sequences. We show that a single architecture can be used to solve the two problems in an efficient way and still achieves state-of-the-art results. Additionally, we demonstrate that optimization from end-to-end leads to significantly higher accuracy than separated learning. The proposed architecture can be trained with data from different categories simultaneously in a seamlessly way. The reported results on four datasets (MPII, Human3.6M, Penn Action and NTU) demonstrate the effectiveness of our method on the targeted tasks.", "title": "" }, { "docid": "ecd486fabd206ad8c28ea9d9da8cd0ee", "text": "The prevailing binding of SOAP to HTTP specifies that SOAP messages be encoded as an XML 1.0 document which is then sent between client and server. XML processing however can be slow and memory intensive, especially for scientific data, and consequently SOAP has been regarded as an inappropriate protocol for scientific data. Efficiency considerations thus lead to the prevailing practice of separating data from the SOAP control channel. Instead, it is stored in specialized binary formats and transmitted either via attachments or indirectly via a file sharing mechanism, such as GridFTP or HTTP. This separation invariably complicates development due to the multiple libraries and type systems to be handled; furthermore it suffers from performance issues, especially when handling small binary data. As an alternative solution, binary XML provides a highly efficient encoding scheme for binary data in the XML and SOAP messages, and with it we can gain high performance as well as unifying the development environment without unduly impacting the Web service protocol stack. In this paper we present our implementation of a generic SOAP engine that supports both textual XML and binary XML as the encoding scheme of the message. We also present our binary XML data model and encoding scheme. Our experiments show that for scientific applications binary XML together with the generic SOAP implementation not only ease development, but also provide better performance and are more widely applicable than the commonly used separated schemes", "title": "" }, { "docid": "e9f19d60dfa80d34ca4db370080b977d", "text": "This paper reviews three recent books on data mining written from three different perspectives, i.e. databases, machine learning, and statistics. Although the exploration in this paper is suggestive instead of conclusive, it reveals that besides some common properties, different perspectives lay strong emphases on different aspects of data mining. 
The emphasis of the database perspective is on efficiency because this perspective strongly concerns the whole discovery process and huge data volume. The emphasis of the machine learning perspective is on effectiveness because this perspective is heavily attracted by substantive heuristics working well in data analysis although they may not always be useful. As for the statistics perspective, its emphasis is on validity because this perspective cares much for mathematical soundness behind mining methods.", "title": "" }, { "docid": "243d1dc8df4b8fbd37cc347a6782a2b5", "text": "This paper introduces a framework for `curious neural controllers' which employ an adaptive world model for goal directed on-line learning. First an on-line reinforcement learning algorithm for autonomous `animats' is described. The algorithm is based on two fully recurrent `self-supervised' continually running networks which learn in parallel. One of the networks learns to represent a complete model of the environmental dynamics and is called the `model network'. It provides complete `credit assignment paths' into the past for the second network which controls the animat's physical actions in a possibly reactive environment. The animat's goal is to maximize cumulative reinforcement and minimize cumulative `pain'. The algorithm has properties which allow it to implement something like the desire to improve the model network's knowledge about the world. This is related to curiosity. It is described how the particular algorithm (as well as similar model-building algorithms) may be augmented by dynamic curiosity and boredom in a natural manner. This may be done by introducing (delayed) reinforcement for actions that increase the model network's knowledge about the world. This in turn requires the model network to model its own ignorance, thus showing a rudimentary form of self-introspective behavior.", "title": "" }, { "docid": "709aa1bc4ace514e46f7edbb07fb03a9", "text": "Empirical scoring functions based on either molecular force fields or cheminformatics descriptors are widely used, in conjunction with molecular docking, during the early stages of drug discovery to predict potency and binding affinity of a drug-like molecule to a given target. These models require expert-level knowledge of physical chemistry and biology to be encoded as hand-tuned parameters or features rather than allowing the underlying model to select features in a data-driven procedure. Here, we develop a general 3-dimensional spatial convolution operation for learning atomic-level chemical interactions directly from atomic coordinates and demonstrate its application to structure-based bioactivity prediction. The atomic convolutional neural network is trained to predict the experimentally determined binding affinity of a protein-ligand complex by direct calculation of the energy associated with the complex, protein, and ligand given the crystal structure of the binding pose. Non-covalent interactions present in the complex that are absent in the protein-ligand sub-structures are identified and the model learns the interaction strength associated with these features. We test our model by predicting the binding free energy of a subset of protein-ligand complexes found in the PDBBind dataset and compare with state-of-the-art cheminformatics and machine learning-based approaches. 

We find that all methods achieve experimental accuracy (less than 1 kcal/mol mean absolute error) and that atomic convolutional networks either outperform or perform competitively with the cheminformatics based methods. Unlike all previous protein-ligand prediction systems, atomic convolutional networks are end-to-end and fully-differentiable. They represent a new data-driven, physics-based deep learning model paradigm that offers a strong foundation for future improvements in structure-based bioactivity prediction.", "title": "" }, { "docid": "74fcade8e5f5f93f3ffa27c4d9130b9f", "text": "Resampling is an important signature of manipulated images. In this paper, we propose two methods to detect and localize image manipulations based on a combination of resampling features and deep learning. In the first method, the Radon transform of resampling features are computed on overlapping image patches. Deep learning classifiers and a Gaussian conditional random field model are then used to create a heatmap. Tampered regions are located using a Random Walker segmentation method. In the second method, resampling features computed on overlapping image patches are passed through a Long short-term memory (LSTM) based network for classification and localization. We compare the performance of detection/localization of both these methods. Our experimental results show that both techniques are effective in detecting and localizing digital image forgeries.", "title": "" }, { "docid": "80383246c35226231b4f136c6cc0019b", "text": "How to automatically monitor wide critical open areas is a challenge to be addressed. Recent computer vision algorithms can be exploited to avoid the deployment of a large amount of expensive sensors. In this work, we propose our object tracking system which, combined with our recently developed anomaly detection system. can provide intelligence and protection for critical areas. In this work. we report two case studies: an international pier and a city parking lot. We acquire sequences to evaluate the effectiveness of the approach in challenging conditions. We report quantitative results for object counting, detection, parking analysis, and anomaly detection. Moreover, we report state-of-the-art results for statistical anomaly detection on a public dataset.", "title": "" } ]
scidocsrr
5cffc236db2765f3925a57401d746f06
Quarterly Time-Series Forecasting With Neural Networks
[ { "docid": "ab813ff20324600d5b765377588c9475", "text": "Estimating the flows of rivers can have significant economic impact, as this can help in agricultural water management and in protection from water shortages and possible flood damage. The first goal of this paper is to apply neural networks to the problem of forecasting the flow of the River Nile in Egypt. The second goal of the paper is to utilize the time series as a benchmark to compare between several neural-network forecasting methods.We compare between four different methods to preprocess the inputs and outputs, including a novel method proposed here based on the discrete Fourier series. We also compare between three different methods for the multistep ahead forecast problem: the direct method, the recursive method, and the recursive method trained using a backpropagation through time scheme. We also include a theoretical comparison between these three methods. The final comparison is between different methods to perform longer horizon forecast, and that includes ways to partition the problem into the several subproblems of forecasting K steps ahead.", "title": "" } ]
[ { "docid": "72f6f6484499ccaa0188d2a795daa74c", "text": "Road detection is one of the most important research areas in driver assistance and automated driving field. However, the performance of existing methods is still unsatisfactory, especially in severe shadow conditions. To overcome those difficulties, first we propose a novel shadow-free feature extractor based on the color distribution of road surface pixels. Then we present a road detection framework based on the extractor, whose performance is more accurate and robust than that of existing extractors. Also, the proposed framework has much low-complexity, which is suitable for usage in practical systems.", "title": "" }, { "docid": "e148d17a78b3b8e144bf0db5a218bd97", "text": "Novel synchronous machines with doubly salient structure and permanent magnets (PMs) in stator yoke have been developed in this paper. The stator is constituted by T-shaped lamination segments sandwiched with circumferentially magnetized PMs with alternate polarity, while the rotor is identical to that of switched reluctance machines (SRMs). The stator pole number is multiples of six, which is the number of stator poles in a unit machine. Similar to variable flux reluctance machines (VFRMs), the rotor pole numbers in the novel machines are not restricted to those in SRMs. When the stator and rotor pole numbers differ by one (or the number of multiples), the novel synchronous machines show sinusoidal bipolar phase flux linkage and back electromotive force (EMF), which make the machines suitable for brushless ac operation. Moreover, two prototype machines with six-pole stator and five-pole, seven-pole rotors are designed and optimized by 2-D finite element analysis. It shows that, compared with VFRMs, the novel machines can produce ~70 % higher torque density with the same copper loss and machine size. Meanwhile, the proposed machines have negligible reluctance torque due to very low saliency ratio. Experimental results of back EFM, cogging torque, and average torque on the prototypes are provided to validate the analysis.", "title": "" }, { "docid": "a914d26b2086e20a7452f0634574820d", "text": "In this paper, we provide a semantic foundation for role-related concepts in enterprise modelling. We use a conceptual modelling framework to provide a well-founded underpinning for these concepts. We review a number of enterprise modelling approaches in light of the concepts described. This allows us to understand the various approaches, to contrast them and to identify problems in the definition and/or usage of these concepts.", "title": "" }, { "docid": "60c9355aba12e84461519f28b157c432", "text": "Successful recurrent models such as long short-term memories (LSTMs) and gated recurrent units (GRUs) use ad hoc gating mechanisms. Empirically these models have been found to improve the learning of medium to long term temporal dependencies and to help with vanishing gradient issues. We prove that learnable gates in a recurrent model formally provide quasiinvariance to general time transformations in the input data. We recover part of the LSTM architecture from a simple axiomatic approach. This result leads to a new way of initializing gate biases in LSTMs and GRUs. Experimentally, this new chrono initialization is shown to greatly improve learning of long term dependencies, with minimal implementation effort. Recurrent neural networks (e.g. 
(Jaeger, 2002)) are a standard machine learning tool to model and represent temporal data; mathematically they amount to learning the parameters of a parameterized dynamical system so that its behavior optimizes some criterion, such as the prediction of the next data in a sequence. Handling long term dependencies in temporal data has been a classical issue in the learning of recurrent networks. Indeed, stability of a dynamical system comes at the price of exponential decay of the gradient signals used for learning, a dilemma known as the vanishing gradient problem (Pascanu et al., 2012; Hochreiter, 1991; Bengio et al., 1994). This has led to the introduction of recurrent models specifically engineered to help with such phenomena. Use of feedback connections (Hochreiter & Schmidhuber, 1997) and control of feedback weights through gating mechanisms (Gers et al., 1999) partly alleviate the vanishing gradient problem. The resulting architectures, namely long short-term memories (LSTMs (Hochreiter & Schmidhuber, 1997; Gers et al., 1999)) and gated recurrent units (GRUs (Chung et al., 2014)) have become a standard for treating sequential data. Using orthogonal weight matrices is another proposed solution to the vanishing gradient problem, thoroughly studied in (Saxe et al., 2013; Le et al., 2015; Arjovsky et al., 2016; Wisdom et al., 2016; Henaff et al., 2016). This comes with either computational overhead, or limitation in representational power. Furthermore, restricting the weight matrices to the set of orthogonal matrices makes forgetting of useless information difficult. The contribution of this paper is threefold: ∙ We show that postulating invariance to time transformations in the data (taking invariance to time warping as an axiom) necessarily leads to a gate-like mechanism in recurrent models (Section 1). This provides a clean derivation of part of the popular LSTM and GRU architectures from first principles. In this framework, gate values appear as time contraction or time dilation coefficients, similar in spirit to the notion of time constant introduced in (Mozer, 1992). ∙ From these insights, we provide precise prescriptions on how to initialize gate biases (Section 2) depending on the range of time dependencies to be captured. It has previously been advocated that setting the bias of the forget gate of LSTMs to 1 or 2 provides overall good performance (Gers & Schmidhuber, 2000; Jozefowicz et al., 2015). The viewpoint here explains why this is reasonable in most cases, when facing medium term dependencies, but fails when facing long to very long term dependencies. ∙ We test the empirical benefits of the new initialization on both synthetic and real world data (Section 3). We observe substantial improvement with long-term dependencies, and slight gains or no change when short-term dependencies dominate. 1 FROM TIME WARPING INVARIANCE TO GATING When tackling sequential learning problems, being resilient to a change in time scale is crucial. Lack of resilience to time rescaling implies that we can make a problem arbitrarily difficult simply by changing the unit of measurement of time. Ordinary recurrent neural networks are highly nonresilient to time rescaling: a task can be rendered impossible for an ordinary recurrent neural network to learn, simply by inserting a fixed, small number of zeros or whitespaces between all elements of the input sequence. 

An explanation is that, with a given number of recurrent units, the class of functions representable by an ordinary recurrent network is not invariant to time rescaling. Ideally, one would like a recurrent model to be able to learn from time-warped input data x(c(t)) as easily as it learns from data x(t), at least if the time warping c(t) is not overly complex. The change of time c may represent not only time rescalings, but, for instance, accelerations or decelerations of the phenomena in the input data. We call a class of models invariant to time warping, if for any model in the class with input data x(t), and for any time warping c(t), there is another (or the same) model in the class that behaves on data x(c(t)) in the same way the original model behaves on x(t). (In practice, this will only be possible if the warping c is not too complex.) We will show that this is deeply linked to having gating mechanisms in the model. Invariance to time rescaling Let us first discuss the simpler case of a linear time rescaling. Formally, this is a linear transformation of time, that is c : R+ → R+, t ↦ αt (1) with α > 0. For instance, receiving a new input character every 10 time steps only, would correspond to α = 0.1. Studying time transformations is easier in the continuous-time setting. The discrete time equation of a basic recurrent network with hidden state ht, ht+1 = tanh(Wx xt + Wh ht + b) (2) can be seen as a time-discretized version of the continuous-time equation dh(t)/dt = tanh(Wx x(t) + Wh h(t) + b) − h(t) (3) namely, (2) is the Taylor expansion h(t + δt) ≈ h(t) + δt dh(t)/dt with discretization step δt = 1. Now imagine that we want to describe time-rescaled data x(αt) with a model from the same class. Substituting t ← c(t) = αt, x(t) ← x(αt) and h(t) ← h(αt) and rewriting (3) in terms of the new variables, the time-rescaled model satisfies dh(t)/dt = α tanh(Wx x(t) + Wh h(t) + b) − α h(t). (4) However, when translated back to a discrete-time model, this no longer describes an ordinary RNN but a leaky RNN (Jaeger, 2002, §8.1). Indeed, taking the Taylor expansion of h(t + δt) with δt = 1 in (4) yields the recurrent model ht+1 = α tanh(Wx xt + Wh ht + b) + (1 − α)ht (5) We will use indices ht for discrete time and brackets h(t) for continuous time. More precisely, introduce a new time variable T and set the model and data with variable T to H(T) := h(c(T)) and X(T) := x(c(T)). Then compute dH(T)/dT. Then rename H to h, X to x and T to t to match the original notation.", "title": "" }, { "docid": "bfc349d95143237cc1cf55f77cb2044f", "text": "Additive manufacturing, commonly referred to as 3D printing, is a technology that builds three-dimensional structures and components layer by layer. Bioprinting is the use of 3D printing technology to fabricate tissue constructs for regenerative medicine from cell-laden bio-inks. 3D printing and bioprinting have huge potential in revolutionizing the field of tissue engineering and regenerative medicine. This paper reviews the application of 3D printing and bioprinting in the field of pediatrics.", "title": "" }, { "docid": "44e3ca0f64566978c3e0d0baeaa93543", "text": "Many applications of fast Fourier transforms (FFT's), such as computer tomography, geophysical signal processing, high-resolution imaging radars, and prediction filters, require high-precision output. An error analysis reveals that the usual method of fixed-point computation of FFT's of vectors of length2 leads to an average loss of/2 bits of precision. 

This phenomenon, often referred to as computational noise, causes major problems for arithmetic units with limited precision which are often used for real-time applications. Several researchers have noted that calculation of FFT’s with algebraic integers avoids computational noise entirely, see, e.g., [1]. We will combine a new algorithm for approximating complex numbers by cyclotomic integers with Chinese remaindering strategies to give an efficient algorithm to compute -bit precision FFT’s of length . More precisely, we will approximate complex numbers by cyclotomic integers in [ 2 2 ] whose coefficients, when expressed as polynomials in 2 2 , are bounded in absolute value by some integer . For fixed our algorithm runs in time (log( )), and produces an approximation with worst case error of (1 2 ). We will prove that this algorithm has optimal worst case error by proving a corresponding lower bound on the worst case error of any approximation algorithm for this task. The main tool for designing the algorithms is the use of the cyclotomic units, a subgroup of finite index in the unit group of the cyclotomic field. First implementations of our algorithms indicate that they are fast enough to be used for the design of low-cost high-speed/highprecision FFT chips.", "title": "" }, { "docid": "aa1a97f8f6f9f1c2627f63e1ec13e8cf", "text": "In this paper, we review recent emerging theoretical and technological advances of artificial intelligence (AI) in the big data settings. We conclude that integrating data-driven machine learning with human knowledge (common priors or implicit intuitions) can effectively lead to explainable, robust, and general AI, as follows: from shallow computation to deep neural reasoning; from merely data-driven model to data-driven with structured logic rules models; from task-oriented (domain-specific) intelligence (adherence to explicit instructions) to artificial general intelligence in a general context (the capability to learn from experience). Motivated by such endeavors, the next generation of AI, namely AI 2.0, is positioned to reinvent computing itself, to transform big data into structured knowledge, and to enable better decision-making for our society.", "title": "" }, { "docid": "90ba548ae91dbd94ea547a372422181f", "text": "The hypothesis that Attention-Deficit/Hyperactivity Disorder (ADHD) reflects a primary inhibitory executive function deficit has spurred a substantial literature. However, empirical findings and methodological issues challenge the etiologic primacy of inhibitory and executive deficits in ADHD. Based on accumulating evidence of increased intra-individual variability in ADHD, we reconsider executive dysfunction in light of distinctions between 'hot' and 'cool' executive function measures. We propose an integrative model that incorporates new neuroanatomical findings and emphasizes the interactions between parallel processing pathways as potential loci for dysfunction. Such a reconceptualization provides a means to transcend the limits of current models of executive dysfunction in ADHD and suggests a plan for future research on cognition grounded in neurophysiological and developmental considerations.", "title": "" }, { "docid": "453d5d826e0292245f8fa12ec564c719", "text": "Work with patient H.M., beginning in the 1950s, established key principles about the organization of memory that inspired decades of experimental work. 
Since H.M., the study of human memory and its disorders has continued to yield new insights and to improve understanding of the structure and organization of memory. Here we review this work with emphasis on the neuroanatomy of medial temporal lobe and diencephalic structures important for memory, multiple memory systems, visual perception, immediate memory, memory consolidation, the locus of long-term memory storage, the concepts of recollection and familiarity, and the question of how different medial temporal lobe structures may contribute differently to memory functions.", "title": "" }, { "docid": "1ceb1718fe3200853204d795c80481ab", "text": "Open-circuit-voltage (OCV) data is widely used for characterizing battery properties under different conditions. It contains important information that can help to identify battery state-of-charge (SOC) and state-of-health (SOH). While various OCV models have been developed for battery SOC estimation, few have been designed for SOH monitoring. In this paper, we propose a unified OCV model that can be applied for both SOC estimation and SOH monitoring. Improvements in SOC estimation using the new model compared to other existing models are demonstrated. Moreover, it is shown that the proposed OCV model can be used to perform battery SOH monitoring as it effectively captures aging information based on incremental capacity analysis (ICA). Parametric analysis and model complexity reduction are also addressed. Experimental data is used to illustrate the effectiveness of the model and its simplified version in the application context of SOC estimation and SOH monitoring. 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "574b980883ffb73dd7cddf62f627c6b1", "text": "Getting computers to understand and process audio recordings in terms of their musical content is a difficult challenge. We describe a method in which general, polyphonic audio recordings of music can be aligned to symbolic score information in standard MIDI files. Because of the difficulties of polyphonic transcription, we perform matching directly on acoustic features that we extract from MIDI and audio. Polyphonic audio matching can be used for polyphonic score following, building intelligent editors that understand the content of recorded audio, and the analysis of expressive performance.", "title": "" }, { "docid": "170e2b0f15d9485bb3c00026c6c384a8", "text": "Chatbots are a rapidly expanding application of dialogue systems with companies switching to bot services for customer support, and new applications for users interested in casual conversation. One style of casual conversation is argument; many people love nothing more than a good argument. Moreover, there are a number of existing corpora of argumentative dialogues, annotated for agreement and disagreement, stance, sarcasm and argument quality. This paper introduces Debbie, a novel arguing bot, that selects arguments from conversational corpora, and aims to use them appropriately in context. We present an initial working prototype of Debbie, with some preliminary evaluation and describe future work.", "title": "" }, { "docid": "f8c4fd23f163c0a604569b5ecf4bdefd", "text": "The goal of interactive machine learning is to help scientists and engineers exploit more specialized data from within their deployed environment in less time, with greater accuracy and fewer costs. 
A basic introduction to the main components is provided here, untangling the many ideas that must be combined to produce practical interactive learning systems. This article also describes recent developments in machine learning that have significantly advanced the theoretical and practical foundations for the next generation of interactive tools.", "title": "" }, { "docid": "a3c011d846fed4f910cd3b112767ccc1", "text": "Tooth morphometry is known to be influenced by cultural, environmental and racial factors. Tooth size standards can be used in age and sex determination. One hundred models (50 males & 50 females) of normal occlusion were evaluated and significant correlations (p<0.001) were found to exist between the combined maxillary incisor widths and the maxillary intermolar and interpremolar arch widths. The study establishes the morphometric criterion for premolar and molar indices and quantifies the existence of a statistically significant sexual dimorphism in arch widths (p<0.02). INTRODUCTION Teeth are an excellent material in living and non-living populations for anthropological, genetic, odontologic and forensic investigations 1 .Their morphometry is known to be influenced by cultural, environmental and racial factors. The variations in tooth form are a common occurrence & these can be studied by measurements. Out of the two proportionswidth and length, the former is considered to be more important 2 . Tooth size standards can be used in age and sex determination 3 . Whenever it is possible to predict the sex, identification is simplified because then only missing persons of one sex need to be considered. In this sense identification of sex takes precedence over age 4 . Various features like tooth morphology and crown size are characteristic for males and females 5 .The present study on the maxillary arch takes into account the premolar arch width, molar arch width and the combined width of the maxillary central incisors in both the sexes. Pont's established constant ratio's between tooth sizes and arch widths in French population which came to be known as premolar and molar indices 6 .In the ideal dental arch he concluded that the ratio of combined incisor width to transverse arch width was .80 in the premolar area and .64 in the molar area. There has been a recent resurgence of interest in the clinical use of premolar and molar indices for establishing dental arch development objectives 7 . The present study was conducted to ascertain whether or not Pont's Index can be used reliably on north Indians and to establish the norms for the same. MATERIAL AND METHODS SELECTION CRITERIA One hundred subjects, fifty males and fifty females in the age group of 17-21 years were selected for the study as attrition is considered to be minimal for this age group. The study was conducted on the students of Sudha Rustagi College of Dental Sciences & Research, Faridabad, Haryana. INCLUSION CRITERIA Healthy state of gingival and peridontium.", "title": "" }, { "docid": "bf85be55fefe866d6cf35161bfa08836", "text": "Today, video distribution platforms use adaptive video streaming to deliver the maximum Quality of Experience to a wide range of devices connected to the Internet through different access networks. Among the techniques employed to implement video adaptivity, the stream-switching over HTTP is getting a wide acceptance due to its deployment and implementation simplicity. 
Recently it has been shown that the client-side algorithms proposed so far generate an on-off traffic pattern that may lead to unfairness and underutilization when many video flows share a bottleneck. In this paper we propose ELASTIC (fEedback Linearization Adaptive STreamIng Controller), a client-side controller designed using feedback control theory that does not generate an on-off traffic pattern. By employing a controlled testbed, allowing bandwidth capacity and delays to be set, we compare ELASTIC with other client-side controllers proposed in the literature. In particular, we have checked to what extent the considered algorithms are able to: 1) fully utilize the bottleneck, 2) fairly share the bottleneck, 3) obtain a fair share when TCP greedy flows share the bottleneck with video flows. The obtained results show that ELASTIC achieves a very high fairness and is able to get the fair share when coexisting with TCP greedy flows.", "title": "" }, { "docid": "7490197babcd735c48e1c42af03c8473", "text": "Clustering is one of the most fundamental tasks in data analysis and machine learning. It is central to many data-driven applications that aim to separate the data into groups with similar patterns. Moreover, clustering is a complex procedure that is affected significantly by the choice of the data representation method. Recent research has demonstrated encouraging clustering results by learning effectively these representations. In most of these works a deep auto-encoder is initially pre-trained to minimize a reconstruction loss, and then jointly optimized with clustering centroids in order to improve the clustering objective. Those works focus mainly on the clustering phase of the procedure, while not utilizing the potential benefit out of the initial phase. In this paper we propose to optimize an auto-encoder with respect to a discriminative pairwise loss function during the auto-encoder pre-training phase. We demonstrate the high accuracy obtained by the proposed method as well as its rapid convergence (e.g. reaching above 92% accuracy on MNIST during the pre-training phase, in less than 50 epochs), even with small networks.", "title": "" }, { "docid": "ec3661f09e857568d32c6452bd8c4445", "text": "User identification and differentiation have implications in many application domains, including security, personalization, and co-located multiuser systems. In response, dozens of approaches have been developed, from fingerprint and retinal scans, to hand gestures and RFID tags. In this work, we propose CapAuth, a technique that uses existing, low-level touchscreen data, combined with machine learning classifiers, to provide real-time authentication and even identification of users. As a proof-of-concept, we ran our software on an off-the-shelf Nexus 5 smartphone. Our user study demonstrates twenty-participant authentication accuracies of 99.6%. For twenty-user identification, our software achieved 94.0% accuracy and 98.2% on groups of four, simulating family use.", "title": "" }, { "docid": "5f89aac70e93b9fcf4c37d119770f747", "text": "Partial differential equations (PDEs) play a prominent role in many disciplines of science and engineering. PDEs are commonly derived based on empirical observations. However, with the rapid development of sensors, computational power, and data storage in the past decade, huge quantities of data can be easily collected and efficiently stored. Such vast quantity of data offers new opportunities for data-driven discovery of physical laws. 
Inspired by the latest development of neural network designs in deep learning, we propose a new feed-forward deep network, called PDENet, to fulfill two objectives at the same time: to accurately predict dynamics of complex systems and to uncover the underlying hidden PDE models. Comparing with existing approaches, our approach has the most flexibility by learning both differential operators and the nonlinear response function of the underlying PDE model. A special feature of the proposed PDE-Net is that all filters are properly constrained, which enables us to easily identify the governing PDE models while still maintaining the expressive and predictive power of the network. These constrains are carefully designed by fully exploiting the relation between the orders of differential operators and the orders of sum rules of filters (an important concept originated from wavelet theory). Numerical experiments show that the PDE-Net has the potential to uncover the hidden PDE of the observed dynamics, and predict the dynamical behavior for a relatively long time, even in a noisy environment. Equal contribution School of Mathematical Sciences, Peking University, Beijing, China Beijing Computational Science Research Center, Beijing, China Beijing International Center for Mathematical Research, Peking University, Beijing, China Center for Data Science, Peking University Laboratory for Biomedical Image Analysis, Beijing Institute of Big Data Research. Correspondence to: Bin Dong <dongbin@math.pku.edu.cn>. Proceedings of the 35 th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s).", "title": "" }, { "docid": "1168c9e6ce258851b15b7e689f60e218", "text": "Modern deep learning architectures produce highly accurate results on many challenging semantic segmentation datasets. State-of-the-art methods are, however, not directly transferable to real-time applications or embedded devices, since naïve adaptation of such systems to reduce computational cost (speed, memory and energy) causes a significant drop in accuracy. We propose ContextNet, a new deep neural network architecture which builds on factorized convolution, network compression and pyramid representation to produce competitive semantic segmentation in real-time with low memory requirement. ContextNet combines a deep network branch at low resolution that captures global context information efficiently with a shallow branch that focuses on highresolution segmentation details. We analyse our network in a thorough ablation study and present results on the Cityscapes dataset, achieving 66.1% accuracy at 18.3 frames per second at full (1024× 2048) resolution (23.2 fps with pipelined computations for streamed data).", "title": "" }, { "docid": "e0696dfe3d01003197516adbabeac67d", "text": "The incidence of rectal foreign bodies is increasing by the day, though not as common as that of upper gastrointestinal foreign bodies. Various methods for removal of foreign bodies have been reported. Removal during endoscopy using endoscopic devices is simple and safe, but if the foreign body is too large to be removed by this method, other methods are required. We report two cases of rectal foreign body removal by a relatively simple and inexpensive technique. A 42-year-old man with a vibrator in the rectum was admitted due to inability to remove it by himself and various endoscopic methods failed. Finally, the vibrator was removed successfully by using tenaculum forceps under endoscopic assistance. 
Similarly, a 59-year-old man with a carrot in the rectum was admitted. The carrot was removed easily by using the same method as that in the previous case. The use of tenaculum forceps under endoscopic guidance may be a useful method for removal of rectal foreign bodies.", "title": "" } ]
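The time-warping passage earlier in this negative_passages list derives the leaky update of equation (5), $h_{t+1} = \alpha \tanh(W_x x_t + W_h h_t + b) + (1 - \alpha) h_t$, as the discrete-time counterpart of an RNN driven by time-rescaled data. The sketch below only illustrates that recurrence; the dimensions, weight scales, and the leaky_rnn helper are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only; the passage does not fix any dimensions.
d_in, d_h, T = 4, 8, 50
Wx = rng.normal(scale=0.3, size=(d_h, d_in))
Wh = rng.normal(scale=0.3, size=(d_h, d_h))
b = np.zeros(d_h)

def leaky_rnn(x_seq, alpha):
    """h_{t+1} = alpha * tanh(Wx x_t + Wh h_t + b) + (1 - alpha) * h_t, i.e. eq. (5)."""
    h = np.zeros(d_h)
    states = []
    for x_t in x_seq:
        h = alpha * np.tanh(Wx @ x_t + Wh @ h + b) + (1.0 - alpha) * h
        states.append(h)
    return np.array(states)

x = rng.normal(size=(T, d_in))

# alpha = 1 recovers the ordinary RNN of eq. (2); alpha < 1 gives the
# leaky unit that eq. (4) associates with time rescaling by alpha.
h_plain = leaky_rnn(x, alpha=1.0)
h_leaky = leaky_rnn(x, alpha=0.1)
print(h_plain.shape, h_leaky.shape)
```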
scidocsrr
05a499f4c2859ddecc610df78bf9cdf3
Classifying imbalanced data sets using similarity based hierarchical decomposition
[ { "docid": "2990de2e037498b22fb66b3ddc635d49", "text": "Class imbalance is a problem that is common to many application domains. When examples of one class in a training data set vastly outnumber examples of the other class(es), traditional data mining algorithms tend to create suboptimal classification models. Several techniques have been used to alleviate the problem of class imbalance, including data sampling and boosting. In this paper, we present a new hybrid sampling/boosting algorithm, called RUSBoost, for learning from skewed training data. This algorithm provides a simpler and faster alternative to SMOTEBoost, which is another algorithm that combines boosting and data sampling. This paper evaluates the performances of RUSBoost and SMOTEBoost, as well as their individual components (random undersampling, synthetic minority oversampling technique, and AdaBoost). We conduct experiments using 15 data sets from various application domains, four base learners, and four evaluation metrics. RUSBoost and SMOTEBoost both outperform the other procedures, and RUSBoost performs comparably to (and often better than) SMOTEBoost while being a simpler and faster technique. Given these experimental results, we highly recommend RUSBoost as an attractive alternative for improving the classification performance of learners built using imbalanced data.", "title": "" } ]
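The RUSBoost passage above combines random undersampling of the majority class with AdaBoost-style boosting. The following is a deliberately simplified sketch of that idea rather than the authors' implementation: labels are assumed to be in {-1, +1}, decision stumps stand in for the unspecified base learner, and the helper names and round count are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def rusboost_fit(X, y, n_rounds=10, seed=0):
    """Simplified RUSBoost-style loop: randomly undersample the majority
    class inside each boosting round, then update AdaBoost weights."""
    rng = np.random.default_rng(seed)
    n = len(y)
    w = np.full(n, 1.0 / n)                       # boosting weights
    majority = 1 if (y == 1).sum() > (y == -1).sum() else -1
    models, alphas = [], []
    for _ in range(n_rounds):
        maj_idx = np.flatnonzero(y == majority)
        min_idx = np.flatnonzero(y != majority)
        keep = rng.choice(maj_idx, size=len(min_idx), replace=False)
        idx = np.concatenate([keep, min_idx])     # balanced training sample
        stump = DecisionTreeClassifier(max_depth=1, random_state=0)
        stump.fit(X[idx], y[idx], sample_weight=w[idx])
        pred = stump.predict(X)
        err = np.clip(np.sum(w * (pred != y)), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)            # up-weight mistakes
        w /= w.sum()
        models.append(stump)
        alphas.append(alpha)
    return models, alphas

def rusboost_predict(models, alphas, X):
    score = sum(a * m.predict(X) for m, a in zip(models, alphas))
    return np.where(score >= 0, 1, -1)

# toy usage on imbalanced two-class data
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = np.where(rng.random(300) < 0.9, 1, -1)
models, alphas = rusboost_fit(X, y)
print((rusboost_predict(models, alphas, X) == y).mean())
```

Using a shallow stump keeps each round cheap, which is in the spirit of the passage's point that RUSBoost is a simpler and faster alternative to SMOTEBoost; a deeper base learner can be substituted if accuracy matters more than speed.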
[ { "docid": "ac4627c8dbacfc24ae45c6e1596f1079", "text": "Obtaining accurate and automated lung field segmentation is a challenging step in the development of Computer-Aided Diagnosis (CAD) system. In this paper, fully automatic lung field segmentation is proposed. Initially, novel features are extracted by considering spatial interaction of the neighbouring pixels. Then constrained non-negative matrix factorization (CNMF) factorized the data matrix obtained from the visual appearance model into basis and coefficient matrices. Initial lung segmentation is achieved by applying unsupervised learning on the coefficient matrix. 2-D region growing operation removes trachea and bronchi appearing in the initial lung segmentation. The experimental results on different database shows that the proposed method produces significant DSC 0.973 as compared to the existing lung segmentation algorithms.", "title": "" }, { "docid": "cc8e52fdb69a9c9f3111287905f02bfc", "text": "We present a new methodology for exploring and analyzing navigation patterns on a web site. The patterns that can be analyzed consist of sequences of URL categories traversed by users. In our approach, we first partition site users into clusters such that users with similar navigation paths through the site are placed into the same cluster. Then, for each cluster, we display these paths for users within that cluster. The clustering approach we employ is model-based (as opposed to distance-based) and partitions users according to the order in which they request web pages. In particular, we cluster users by learning a mixture of first-order Markov models using the Expectation-Maximization algorithm. The runtime of our algorithm scales linearly with the number of clusters and with the size of the data; and our implementation easily handles hundreds of thousands of user sessions in memory. In the paper, we describe the details of our method and a visualization tool based on it called WebCANVAS. We illustrate the use of our approach on user-traffic data from msnbc.com.", "title": "" }, { "docid": "64d4776be8e2dbb0fa3b30d6efe5876c", "text": "This paper presents a novel method for hierarchically organizing large face databases, with application to efficient identity-based face retrieval. The method relies on metric learning with local binary pattern (LBP) features. On one hand, LBP features have proved to be highly resilient to various appearance changes due to illumination and contrast variations while being extremely efficient to calculate. On the other hand, metric learning (ML) approaches have been proved very successful for face verification ‘in the wild’, i.e. in uncontrolled face images with large amounts of variations in pose, expression, appearances, lighting, etc. While such ML based approaches compress high dimensional features into low dimensional spaces using discriminatively learned projections, the complexity of retrieval is still significant for large scale databases (with millions of faces). The present paper shows that learning such discriminative projections locally while organizing the database hierarchically leads to a more accurate and efficient system. The proposed method is validated on the standard Labeled Faces in the Wild (LFW) benchmark dataset with millions of additional distracting face images collected from photos on the internet.", "title": "" }, { "docid": "0a9bfd72a12dcddfd27d982c8b27b9d5", "text": "A novel concept of a series feeding network of a microstrip antenna array has been shown. 
The proposed feeding network utilizes a four-port slot coupler as a three-way power divider, which is composed of two microstrip lines appropriately coupled through a slot within a common ground plane. The proposed power divider is used for simultaneous power distribution between two consecutive linear subarrays and between two 4 × 1 linear arrays constituting a single 8 × 1 linear subarray, where equal-amplitude and out-of-phase signals are required. Such a solution allows for realization of antenna arrays, in which all linear subarrays designed with the use of the “through-element” series feeding technique are fed at their centers from single transmission lines. The theoretical analysis as well as measurement results of the 8 × 8 antenna array operating within 10.5-GHz frequency range are shown.", "title": "" }, { "docid": "d076cb1cf48cf0a9e7eb5fee749ed10e", "text": "Cats have protractible claws to fold their tips to keep them sharp. They protract claws while hunting and pawing on slippery surfaces. Protracted claws by tendons and muscles of toes can help cats anchoring themselves steady while their locomotion trends to slip and releasing the hold while they retract claws intentionally. This research proposes a kind of modularized self-adaptive toe mechanism inspired by cat claws to improve the extremities' contact performance for legged robot. The mechanism is constructed with four-bar linkage actuated by contact reaction force and retracted by applied spring tension. A feasible mechanical design based on several essential parameters is introduced and an integrated Sole-Toe prototype is built for experimental evaluation. Mechanical self-adaption and actual contact performance on specific surface have been evaluated respectively on a biped walking platform and a bench-top mechanical testing.", "title": "" }, { "docid": "31cec57cc62759852a6500d9b0102333", "text": "Migraine is a common disease throughout the world. Not only does it affect the life of people tremendously, but it also leads to high costs, e.g. due to inability to work or various required drug-taking cycles for finding the best drug for a patient. Solving the latter aspect could help to improve the life of patients and decrease the impact of the other consequences. Therefore, in this paper, we present an approach for a drug recommendation system based on the highly scalable native graph database Neo4J. The presented system uses simulated patient data to help physicians gain more transparency about which drug fits a migraine patient best considering her individual features. Our evaluation shows that the proposed system works as intended. This means that only drugs with highest relevance scores and no interactions with the patient's diseases, drugs or pregnancy are recommended.", "title": "" }, { "docid": "1dbb04e806b1fd2a8be99633807d9f4d", "text": "Realistically animated fluids can add substantial realism to interactive applications such as virtual surgery simulators or computer games. In this paper we propose an interactive method based on Smoothed Particle Hydrodynamics (SPH) to simulate fluids with free surfaces. The method is an extension of the SPH-based technique by Desbrun to animate highly deformable bodies. We gear the method towards fluid simulation by deriving the force density fields directly from the Navier-Stokes equation and by adding a term to model surface tension effects. 
In contrast to Eulerian grid-based approaches, the particle-based approach makes mass conservation equations and convection terms dispensable which reduces the complexity of the simulation. In addition, the particles can directly be used to render the surface of the fluid. We propose methods to track and visualize the free surface using point splatting and marching cubes-based surface reconstruction. Our animation method is fast enough to be used in interactive systems and to allow for user interaction with models consisting of up to 5000 particles.", "title": "" }, { "docid": "0d966c39aabe4f51181b1e8cf520cae3", "text": "The deflated surfaces of the alluvial fans in Saheki crater reveal the most detailed 28 record of fan stratigraphy and evolution found, to date, on Mars. During deposition of at least the 29 uppermost 100 m of fan deposits, discharges from the source basin consisted of channelized 30 flows transporting sediment (which we infer to be primarily sandand gravel-sized) as bedload 31 coupled with extensive overbank mud-rich flows depositing planar beds of sand-sized or finer 32 sediment. Flow events are inferred to have been of modest magnitude (probably less than ~60 33 m 3 /s), of short duration, and probably occupied only a few distributaries during any individual 34 flow event. Occasional channel avulsions resulted in the distribution of sediment across the 35 entire fan. A comparison with fine-grained alluvial fans in Chile’s Atacama Desert provides 36 insights into the processes responsible for constructing the Saheki crater fans: sediment is 37 deposited by channelized flows (transporting sand through boulder-sized material) and overbank 38 mudflows (sand size and finer) and wind erosion leaves channels expressed in inverted 39 topographic relief. The most likely source of water was snowmelt released after annual or 40 epochal accumulation of snow in the headwater source basin on the interior crater rim during the 41 Hesperian to Amazonian periods. We infer the Saheki fans to have been constructed by many 42 hundreds of separate flow events, and accumulation of the necessary snow and release of 43 meltwater may have required favorable orbital configurations or transient global warming. 44", "title": "" }, { "docid": "3ec9107c5d389425e1a89086948ea0c7", "text": "BACKGROUND\nA reduction in the reported incidence of malignant degeneration within nevus sebaceus has led many physicians to recommend serial clinical evaluation and biopsy of suspicious areas rather than prophylactic surgical excision. Unfortunately, no well-defined inclusion criteria, including lesion size and location, have been described for the management of nevus sebaceus.\n\n\nMETHODS\nTo assess whether the incidence or timing of malignant degeneration contraindicates surgical excision, the authors performed a PubMed literature search for any studies, excluding case reports, related to malignant change within nevus sebaceus since 1990. They then defined giant nevus sebaceus to consist of lesions greater than 20 cm or greater than 1 percent of the total body surface area and retrospectively examined their experience and outcomes treating giant nevus sebaceus.\n\n\nRESULTS\nData were pooled from six large retrospective institutional studies (2520 patients). The cumulative incidence of benign and malignant tumors was 6.1 and 0.5 percent, respectively. Of the authors' 195 patients with giant congenital nevi, only six (3.0 percent) met the definition of giant nevus sebaceus. 
All patients required tissue expansion for reconstruction, and two patients required concomitant skin grafting. Two complications required operative intervention.\n\n\nCONCLUSIONS\nEarly malignant degeneration within nevus sebaceus is rare. Management, however, must account for complex monitoring, particularly for lesions within the scalp, associated alopecia, involvement of multiple facial aesthetic subunits, and postpubertal transformation affecting both appearance and monitoring of the lesions. The latter considerations, rather than the reported incidence of malignant transformation, should form the bases for surgical intervention in giant nevus sebaceus.\n\n\nCLINICAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, V.", "title": "" }, { "docid": "97aab319e3d38d755860b141c5a4fa38", "text": "Automatically generating a natural language description of an image has attracted interests recently both because of its importance in practical applications and because it connects two major artificial intelligence fields: computer vision and natural language processing. Existing approaches are either top-down, which start from a gist of an image and convert it into words, or bottom-up, which come up with words describing various aspects of an image and then combine them. In this paper, we propose a new algorithm that combines both approaches through a model of semantic attention. Our algorithm learns to selectively attend to semantic concept proposals and fuse them into hidden states and outputs of recurrent neural networks. The selection and fusion form a feedback connecting the top-down and bottom-up computation. We evaluate our algorithm on two public benchmarks: Microsoft COCO and Flickr30K. Experimental results show that our algorithm significantly outperforms the state-of-the-art approaches consistently across different evaluation metrics.", "title": "" }, { "docid": "f2f43e7087d3506a848849b64b062954", "text": "We present an Adaptive User Interface (AUI) for online courses in higher education as a method for solving the challenges posed by the different knowledge levels in a heterogeneous group of students. The scenario described in this paper is an online beginners' course in Mathematics which is extended by an adaptive course layout to better fit the needs of every individual student. The course offers an entry-level test to check each student's prior knowledge and skills. The results are used to automatically determine which parts of the course are relevant for the student and which ones can be hidden, based on parameters set by the course teachers. Initial results are promising; the new adaptive learning platform in mathematics is leading to higher student satisfaction and better performance.", "title": "" }, { "docid": "1e25eeed661e6b9ed6e74ee8ae5beec7", "text": "In this paper, different electric motors are studied and compared to see the benefits of each motor and the one that is more suitable to be used in the electric vehicle (EV) applications. There are five main electric motor types, DC, induction, permanent magnet synchronous, switched reluctance and brushless DC motors are studied. It is concluded that although the induction motors technology is more mature than others, for the EV applications the brushless DC and permanent magnet motors are more suitable than others. The use of these motors will result in less pollution, less fuel consumption and higher power to volume ratio. 
The reducing prices of the permanent magnet materials and the trend of increasing efficiency in the permanent magnet and brushless DC motors make them more and more attractive for the EV applications.", "title": "" }, { "docid": "f262e911b5254ad4d4419ed7114b8a4f", "text": "User Satisfaction is one of the most extensively used dimensions for Information Systems (IS) success evaluation with a large body of literature and standardized instruments of User Satisfaction. Despite the extensive literature on User Satisfaction, there exist much controversy over the measures of User Satisfaction and the adequacy of User Satisfaction measures to gauge the level of success in complex, contemporary IS. Recent studies in IS have suggested treating User Satisfaction as an overarching construct of success, rather than a measure of success. Further perplexity is introduced over the alleged overlaps between User Satisfaction measures and the measures of IS success (e.g. system quality, information quality) suggested in the literature. The following study attempts to clarify the aforementioned confusions by gathering data from 310 Enterprise System users and analyzing 16 User Satisfaction instruments. The statistical analysis of the 310 responses and the content analysis of the 16 instruments suggest the appropriateness of treating User Satisfaction as an overarching measure of success rather a dimension of success.", "title": "" }, { "docid": "5d2ab1a4f28aa9286a3ef19c2c822af1", "text": "Stray current control is essential in direct current (DC) mass transit systems where the rail insulation is not of sufficient quality to prevent a corrosion risk to the rails, supporting and third-party infrastructure. This paper details the principles behind the need for stray current control and examines the relationship between the stray current collection system design and its efficiency. The use of floating return rails is shown to provide a reduction in stray current level in comparison to a grounded system, significantly reducing the corrosion level of the traction system running rails. An increase in conductivity of the stray current collection system or a reduction in the soil resistivity surrounding the traction system is shown to decrease the corrosion risk to the supporting and third party infrastructure.", "title": "" }, { "docid": "73bfd9bca8a111e66a35f5d94dcbaa98", "text": "Most previous work in household energy conservation has focused on rule-based home automation to achieve energy savings, with relatively few researchers focusing on context-aware technologies. As a result, user comfort is often disregarded and few solutions handle decision conflicts caused by multiple activities undertaken by multiple users. The main contribution of this work is twofold. First, a comprehensive human-centric and context-aware comfort index is proposed to evaluate how users feel under particular environmental conditions with regard to thermal, illumination, and appliance-usage preferences. Second, the energy savings is formulated into an optimization problem to minimize the total energy consumption, even under multiple user comfort constraints. Short-term evaluation in our simulated home environment resulted in energy savings of at least 28.98%. Long-term evaluation using a home simulator resulted in energy savings of 33.7%. 
Most importantly, the energy savings in both situations was achieved under multiple user comfort constraints, representing a truly human-centric living environment.", "title": "" }, { "docid": "f140a58cc600916b9b272491e0e65d79", "text": "Person identification across nonoverlapping cameras, also known as person reidentification, aims to match people at different times and locations. Reidentifying people is of great importance in crucial applications such as wide-area surveillance and visual tracking. Due to the appearance variations in pose, illumination, and occlusion in different camera views, person reidentification is inherently difficult. To address these challenges, a reference-based method is proposed for person reidentification across different cameras. Instead of directly matching people by their appearance, the matching is conducted in a reference space where the descriptor for a person is translated from the original color or texture descriptors to similarity measures between this person and the exemplars in the reference set. A subspace is first learned in which the correlations of the reference data from different cameras are maximized using regularized canonical correlation analysis (RCCA). For reidentification, the gallery data and the probe data are projected onto this RCCA subspace and the reference descriptors (RDs) of the gallery and probe are generated by computing the similarity between them and the reference data. The identity of a probe is determined by comparing the RD of the probe and the RDs of the gallery. A reranking step is added to further improve the results using a saliency-based matching scheme. Experiments on publicly available datasets show that the proposed method outperforms most of the state-of-the-art approaches.", "title": "" }, { "docid": "a245aca07bd707ee645cf5cb283e7c5e", "text": "The paradox of blunted parathormone (PTH) secretion in patients with severe hypomagnesemia has been known for more than 20 years, but the underlying mechanism is not deciphered. We determined the effect of low magnesium on in vitro PTH release and on the signals triggered by activation of the calcium-sensing receptor (CaSR). Analogous to the in vivo situation, PTH release from dispersed parathyroid cells was suppressed under low magnesium. In parallel, the two major signaling pathways responsible for CaSR-triggered block of PTH secretion, the generation of inositol phosphates, and the inhibition of cAMP were enhanced. Desensitization or pertussis toxin-mediated inhibition of CaSR-stimulated signaling suppressed the effect of low magnesium, further confirming that magnesium acts within the axis CaSR-G-protein. However, the magnesium binding site responsible for inhibition of PTH secretion is not identical with the extracellular ion binding site of the CaSR, because the magnesium deficiency-dependent signal enhancement was not altered on CaSR receptor mutants with increased or decreased affinity for calcium and magnesium. By contrast, when the magnesium affinity of the G alpha subunit was decreased, CaSR activation was no longer affected by magnesium. Thus, the paradoxical block of PTH release under magnesium deficiency seems to be mediated through a novel mechanism involving an increase in the activity of G alpha subunits of heterotrimeric G-proteins.", "title": "" }, { "docid": "94f8ebb84705e0d6c7a87bb6515fd710", "text": "We describe here our approaches and results on the WAT 2017 shared translation tasks. 
Motivated by the good results we obtained with Neural Machine Translation in the previous shared task, we continued to explore this approach this year, with incremental improvements in models and training methods. We focused on the ASPEC dataset and could improve the state-of-the-art results for Chinese-to-Japanese and Japanese-to-Chinese translations.", "title": "" }, { "docid": "9cb28706a45251e3d2fb5af64dd9351f", "text": "This article proposes an informational perspective on comparison consequences in social judgment. It is argued that to understand the variable consequences of comparison, one has to examine what target knowledge is activated during the comparison process. These informational underpinnings are conceptualized in a selective accessibility model that distinguishes 2 fundamental comparison processes. Similarity testing selectively makes accessible knowledge indicating target-standard similarity, whereas dissimilarity testing selectively makes accessible knowledge indicating target-standard dissimilarity. These respective subsets of target knowledge build the basis for subsequent target evaluations, so that similarity testing typically leads to assimilation whereas dissimilarity testing typically leads to contrast. The model is proposed as a unifying conceptual framework that integrates diverse findings on comparison consequences in social judgment.", "title": "" } ]
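One passage in the list above describes SPH fluid animation, where per-particle density and pressure fields drive the force computation. The snippet below shows only a generic SPH density and pressure pass of the kind such methods build on; the poly6 kernel, the simple equation of state, and every constant here are conventional choices assumed for illustration, not values taken from that abstract.

```python
import numpy as np

def sph_density_pressure(positions, masses, h=0.1, k=1000.0, rho0=1000.0):
    """Per-particle density and pressure using the standard poly6 smoothing
    kernel. An O(n^2) neighbour loop keeps the sketch short; real solvers
    use spatial hashing to find neighbours."""
    n = len(positions)
    poly6 = 315.0 / (64.0 * np.pi * h**9)
    rho = np.zeros(n)
    for i in range(n):
        d2 = np.sum((positions - positions[i])**2, axis=1)
        mask = d2 < h * h
        rho[i] = np.sum(masses[mask] * poly6 * (h * h - d2[mask])**3)
    pressure = k * (rho - rho0)   # simple equation of state
    return rho, pressure

# toy usage: 200 random particles in a unit box
rng = np.random.default_rng(1)
pos = rng.random((200, 3))
m = np.full(200, 0.02)
rho, p = sph_density_pressure(pos, m)
print(rho.mean(), p.mean())
```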
scidocsrr
d205f14331a9113a5eadee7947a3254e
Building Better Quality Predictors Using "$\epsilon$-Dominance"
[ { "docid": "6e675e8a57574daf83ab78cea25688f5", "text": "Collecting quality data from software projects can be time-consuming and expensive. Hence, some researchers explore “unsupervised” approaches to quality prediction that does not require labelled data. An alternate technique is to use “supervised” approaches that learn models from project data labelled with, say, “defective” or “not-defective”. Most researchers use these supervised models since, it is argued, they can exploit more knowledge of the projects. \nAt FSE’16, Yang et al. reported startling results where unsupervised defect predictors outperformed supervised predictors for effort-aware just-in-time defect prediction. If confirmed, these results would lead to a dramatic simplification of a seemingly complex task (data mining) that is widely explored in the software engineering literature. \nThis paper repeats and refutes those results as follows. (1) There is much variability in the efficacy of the Yang et al. predictors so even with their approach, some supervised data is required to prune weaker predictors away. (2) Their findings were grouped across N projects. When we repeat their analysis on a project-by-project basis, supervised predictors are seen to work better. \nEven though this paper rejects the specific conclusions of Yang et al., we still endorse their general goal. In our our experiments, supervised predictors did not perform outstandingly better than unsupervised ones for effort-aware just-in-time defect prediction. Hence, they may indeed be some combination of unsupervised learners to achieve comparable performance to supervised ones. We therefore encourage others to work in this promising area.", "title": "" }, { "docid": "66bf2c7d6af4e2e7eec279888df23125", "text": "Defect prediction on projects with limited historical data has attracted great interest from both researchers and practitioners. Cross-project defect prediction has been the main area of progress by reusing classifiers from other projects. However, existing approaches require some degree of homogeneity (e.g., a similar distribution of metric values) between the training projects and the target project. Satisfying the homogeneity requirement often requires significant effort (currently a very active area of research).\n An unsupervised classifier does not require any training data, therefore the heterogeneity challenge is no longer an issue. In this paper, we examine two types of unsupervised classifiers: a) distance-based classifiers (e.g., k-means); and b) connectivity-based classifiers. While distance-based unsupervised classifiers have been previously used in the defect prediction literature with disappointing performance, connectivity-based classifiers have never been explored before in our community.\n We compare the performance of unsupervised classifiers versus supervised classifiers using data from 26 projects from three publicly available datasets (i.e., AEEEM, NASA, and PROMISE). In the cross-project setting, our proposed connectivity-based classifier (via spectral clustering) ranks as one of the top classifiers among five widely-used supervised classifiers (i.e., random forest, naive Bayes, logistic regression, decision tree, and logistic model tree) and five unsupervised classifiers (i.e., k-means, partition around medoids, fuzzy C-means, neural-gas, and spectral clustering). 
In the within-project setting (i.e., models are built and applied on the same project), our spectral classifier ranks in the second tier, while only random forest ranks in the first tier. Hence, connectivity-based unsupervised classifiers offer a viable solution for cross and within project defect predictions.", "title": "" }, { "docid": "752e6d6f34ffc638e9a0d984a62db184", "text": "Defect prediction models are classifiers that are trained to identify defect-prone software modules. Such classifiers have configurable parameters that control their characteristics (e.g., the number of trees in a random forest classifier). Recent studies show that these classifiers may underperform due to the use of suboptimal default parameter settings. However, it is impractical to assess all of the possible settings in the parameter spaces. In this paper, we investigate the performance of defect prediction models where Caret --- an automated parameter optimization technique --- has been applied. Through a case study of 18 datasets from systems that span both proprietary and open source domains, we find that (1) Caret improves the AUC performance of defect prediction models by as much as 40 percentage points; (2) Caret-optimized classifiers are at least as stable as (with 35% of them being more stable than) classifiers that are trained using the default settings; and (3) Caret increases the likelihood of producing a top-performing classifier by as much as 83%. Hence, we conclude that parameter settings can indeed have a large impact on the performance of defect prediction models, suggesting that researchers should experiment with the parameters of the classification techniques. Since automated parameter optimization techniques like Caret yield substantially benefits in terms of performance improvement and stability, while incurring a manageable additional computational cost, they should be included in future defect prediction studies.", "title": "" }, { "docid": "f1cbd60e1bd721e185bbbd12c133ad91", "text": "Defect prediction models are a well-known technique for identifying defect-prone files or packages such that practitioners can allocate their quality assurance efforts (e.g., testing and code reviews). However, once the critical files or packages have been identified, developers still need to spend considerable time drilling down to the functions or even code snippets that should be reviewed or tested. This makes the approach too time consuming and impractical for large software systems. Instead, we consider defect prediction models that focus on identifying defect-prone (“risky”) software changes instead of files or packages. We refer to this type of quality assurance activity as “Just-In-Time Quality Assurance,” because developers can review and test these risky changes while they are still fresh in their minds (i.e., at check-in time). To build a change risk model, we use a wide range of factors based on the characteristics of a software change, such as the number of added lines, and developer experience. A large-scale study of six open source and five commercial projects from multiple domains shows that our models can predict whether or not a change will lead to a defect with an average accuracy of 68 percent and an average recall of 64 percent. Furthermore, when considering the effort needed to review changes, we find that using only 20 percent of the effort it would take to inspect all changes, we can identify 35 percent of all defect-inducing changes. 
Our findings indicate that “Just-In-Time Quality Assurance” may provide an effort-reducing way to focus on the most risky changes and thus reduce the costs of developing high-quality software.", "title": "" }, { "docid": "2b010823e217e64e8e56b835cef40a1a", "text": "Software defect prediction, which predicts defective code regions, can help developers find bugs and prioritize their testing efforts. To build accurate prediction models, previous studies focus on manually designing features that encode the characteristics of programs and exploring different machine learning algorithms. Existing traditional features often fail to capture the semantic differences of programs, and such a capability is needed for building accurate prediction models.\n To bridge the gap between programs' semantics and defect prediction features, this paper proposes to leverage a powerful representation-learning algorithm, deep learning, to learn semantic representation of programs automatically from source code. Specifically, we leverage Deep Belief Network (DBN) to automatically learn semantic features from token vectors extracted from programs' Abstract Syntax Trees (ASTs).\n Our evaluation on ten open source projects shows that our automatically learned semantic features significantly improve both within-project defect prediction (WPDP) and cross-project defect prediction (CPDP) compared to traditional features. Our semantic features improve WPDP on average by 14.7% in precision, 11.5% in recall, and 14.2% in F1. For CPDP, our semantic features based approach outperforms the state-of-the-art technique TCA+ with traditional features by 8.9% in F1.", "title": "" } ]
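The positive passages for this query report that a connectivity-based unsupervised classifier built on spectral clustering can rival supervised defect predictors. As a rough sketch of that idea only: the code below splits modules into two clusters and flags the cluster with the larger average metric values as defect-prone. The labeling heuristic, the RBF affinity, and the toy data are assumptions for illustration, not details taken from the cited study.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.preprocessing import StandardScaler

def spectral_defect_labels(metrics, seed=0):
    """Unsupervised two-cluster split of software modules described by code
    metrics (rows are modules, columns are metrics such as LOC or complexity).
    The cluster whose members have larger average metric values is flagged
    as defect-prone; that rule is a common heuristic assumed here."""
    X = StandardScaler().fit_transform(metrics)
    clusters = SpectralClustering(n_clusters=2, affinity="rbf",
                                  random_state=seed).fit_predict(X)
    mean0 = X[clusters == 0].mean()
    mean1 = X[clusters == 1].mean()
    risky = 0 if mean0 > mean1 else 1
    return (clusters == risky).astype(int)   # 1 = predicted defect-prone

# toy usage with random "metrics" for 100 modules and 5 metrics
rng = np.random.default_rng(2)
labels = spectral_defect_labels(rng.random((100, 5)))
print(labels.sum(), "modules flagged")
```

Because no defect labels are needed, this kind of classifier sidesteps the cross-project heterogeneity issue the passage mentions; the trade-off is that its accuracy depends entirely on how well the chosen metrics separate risky from clean modules.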
[ { "docid": "a6459555eb54297f623800bcdf10dcc6", "text": "Phishing causes billions of dollars in damage every year and poses a serious threat to the Internet economy. Email is still the most commonly used medium to launch phishing attacks [1]. In this paper, we present a comprehensive natural language based scheme to detect phishing emails using features that are invariant and fundamentally characterize phishing. Our scheme utilizes all the information present in an email, namely, the header, the links and the text in the body. Although it is obvious that a phishing email is designed to elicit an action from the intended victim, none of the existing detection schemes use this fact to identify phishing emails. Our detection protocol is designed specifically to distinguish between “actionable” and “informational” emails. To this end, we incorporate natural language techniques in phishing detection. We also utilize contextual information, when available, to detect phishing: we study the problem of phishing detection within the contextual confines of the user’s email box and demonstrate that context plays an important role in detection. To the best of our knowledge, this is the first scheme that utilizes natural language techniques and contextual information to detect phishing. We show that our scheme outperforms existing phishing detection schemes. Finally, our protocol detects phishing at the email level rather than detecting masqueraded websites. This is crucial to prevent the victim from clicking any harmful links in the email. Our implementation called PhishNet-NLP, operates between a user’s mail transfer agent (MTA) and mail user agent (MUA) and processes each arriving email for phishing attacks even before reaching the", "title": "" }, { "docid": "3b2376110b0e6949379697b7ba6730b5", "text": "............................................................................................................................... i Acknowledgments............................................................................................................... ii Table of", "title": "" }, { "docid": "c9748c67c2ab17cfead44fe3b486883d", "text": "Entropy coding is an integral part of most data compression systems. Huffman coding (HC) and arithmetic coding (AC) are two of the most widely used coding methods. HC can process a large symbol alphabet at each step allowing for fast encoding and decoding. However, HC typically provides suboptimal data rates due to its inherent approximation of symbol probabilities to powers of 1 over 2. In contrast, AC uses nearly accurate symbol probabilities, hence generally providing better compression ratios. However, AC relies on relatively slow arithmetic operations making the implementation computationally demanding. In this paper we discuss asymmetric numeral systems (ANS) as a new approach to entropy coding. While maintaining theoretical connections with AC, the proposed ANS-based coding can be implemented with much less computational complexity. While AC operates on a state defined by two numbers specifying a range, an ANS-based coder operates on a state defined by a single natural number such that the x ∈ ℕ state contains ≈ log2(x) bits of information. This property allows to have the entire behavior for a large alphabet summarized in the form of a relatively small table (e.g. a few kilobytes for a 256 size alphabet). 
The proposed approach can be interpreted as an equivalent to adding fractional bits to a Huffman coder to combine the speed of HC and the accuracy offered by AC. Additionally, ANS can simultaneously encrypt a message encoded this way. Experimental results demonstrate effectiveness of the proposed entropy coder.", "title": "" }, { "docid": "a986826041730d953dfbf9fbc1b115a6", "text": "This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE algorithms, are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed. Specific examples of such algorithms are presented, some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right. Also given are results that show how such algorithms can be naturally integrated with backpropagation. We close with a brief discussion of a number of additional issues surrounding the use of such algorithms, including what is known about their limiting behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms.", "title": "" }, { "docid": "f83a16d393c78d6ba0e65a4659446e7e", "text": "Temporal action localization is an important yet challenging problem. Given a long, untrimmed video consisting of multiple action instances and complex background contents, we need not only to recognize their action categories, but also to localize the start time and end time of each instance. Many state-of-the-art systems use segment-level classifiers to select and rank proposal segments of pre-determined boundaries. However, a desirable model should move beyond segment-level and make dense predictions at a fine granularity in time to determine precise temporal boundaries. To this end, we design a novel Convolutional-De-Convolutional (CDC) network that places CDC filters on top of 3D ConvNets, which have been shown to be effective for abstracting action semantics but reduce the temporal length of the input data. The proposed CDC filter performs the required temporal upsampling and spatial downsampling operations simultaneously to predict actions at the frame-level granularity. It is unique in jointly modeling action semantics in space-time and fine-grained temporal dynamics. We train the CDC network in an end-to-end manner efficiently. Our model not only achieves superior performance in detecting actions in every frame, but also significantly boosts the precision of localizing temporal boundaries. Finally, the CDC network demonstrates a very high efficiency with the ability to process 500 frames per second on a single GPU server. Source code and trained models are available online at https://bitbucket.org/columbiadvmm/cdc.", "title": "" }, { "docid": "f6dd10d4b400234a28b221d0527e71c0", "text": "Existing approaches to neural machine translation condition each output word on previously generated outputs. We introduce a model that avoids this autoregressive property and produces its outputs in parallel, allowing an order of magnitude lower latency during inference. 
Through knowledge distillation, the use of input token fertilities as a latent variable, and policy gradient fine-tuning, we achieve this at a cost of as little as 2.0 BLEU points relative to the autoregressive Transformer network used as a teacher. We demonstrate substantial cumulative improvements associated with each of the three aspects of our training strategy, and validate our approach on IWSLT 2016 English–German and two WMT language pairs. By sampling fertilities in parallel at inference time, our non-autoregressive model achieves near-state-of-the-art performance of 29.8 BLEU on WMT 2016 English– Romanian.", "title": "" }, { "docid": "68420190120449343006879e23be8789", "text": "Recent findings suggest that consolidation of emotional memories is influenced by menstrual phase in women. In contrast to other phases, in the mid-luteal phase when progesterone levels are elevated, cortisol levels are increased and correlated with emotional memory. This study examined the impact of progesterone on cortisol and memory consolidation of threatening stimuli under stressful conditions. Thirty women were recruited for the high progesterone group (in the mid-luteal phase) and 26 for the low progesterone group (in non-luteal phases of the menstrual cycle). Women were shown a series of 20 neutral or threatening images followed immediately by either a stressor (cold pressor task) or control condition. Participants returned two days later for a surprise free recall test of the images and salivary cortisol responses were monitored. High progesterone levels were associated with higher baseline and stress-evoked cortisol levels, and enhanced memory of negative images when stress was received. A positive correlation was found between stress-induced cortisol levels and memory recall of threatening images. These findings suggest that progesterone mediates cortisol responses to stress and subsequently predicts memory recall for emotionally arousing stimuli.", "title": "" }, { "docid": "ec52b4c078c14a0d564577438846f178", "text": "Millions of students across the United States cannot benefit fully from a traditional educational program because they have a disability that impairs their ability to participate in a typical classroom environment. For these students, computer-based technologies can play an especially important role. Not only can computer technology facilitate a broader range of educational activities to meet a variety of needs for students with mild learning disorders, but adaptive technology now exists than can enable even those students with severe disabilities to become active learners in the classroom alongside their peers who do not have disabilities. This article provides an overview of the role computer technology can play in promoting the education of children with special needs within the regular classroom. For example, use of computer technology for word processing, communication, research, and multimedia projects can help the three million students with specific learning and emotional disorders keep up with their nondisabled peers. Computer technology has also enhanced the development of sophisticated devices that can assist the two million students with more severe disabilities in overcoming a wide range of limitations that hinder classroom participation--from speech and hearing impairments to blindness and severe physical disabilities. 
However, many teachers are not adequately trained on how to use technology effectively in their classrooms, and the cost of the technology is a serious consideration for all schools. Thus, although computer technology has the potential to act as an equalizer by freeing many students from their disabilities, the barriers of inadequate training and cost must first be overcome before more widespread use can become a reality.", "title": "" }, { "docid": "ef787cfc1b00c9d05ec9293ff802f172", "text": "High Definition (HD) maps play an important role in modern traffic scenes. However, the development of HD maps coverage grows slowly because of the cost limitation. To efficiently model HD maps, we proposed a convolutional neural network with a novel prediction layer and a zoom module, called LineNet. It is designed for state-of-the-art lane detection in an unordered crowdsourced image dataset. And we introduced TTLane, a dataset for efficient lane detection in urban road modeling applications. Combining LineNet and TTLane, we proposed a pipeline to model HD maps with crowdsourced data for the first time. And the maps can be constructed precisely even with inaccurate crowdsourced data.", "title": "" }, { "docid": "2f2291baa6c8a74744a16f27df7231d2", "text": "Malicious programs, such as viruses and worms, are frequently related to previous programs through evolutionary relationships. Discovering those relationships and constructing a phylogeny model is expected to be helpful for analyzing new malware and for establishing a principled naming scheme. Matching permutations of code may help build better models in cases where malware evolution does not keep things in the same order. We describe methods for constructing phylogeny models that uses features called n-perms to match possibly permuted codes. An experiment was performed to compare the relative effectiveness of vector similarity measures using n-perms and n-grams when comparing permuted variants of programs. The similarity measures using n-perms maintained a greater separation between the similarity scores of permuted families of specimens versus unrelated specimens. A subsequent study using a tree generated through n-perms suggests that phylogeny models based on n-perms may help forensic analysts investigate new specimens, and assist in reconciling malware naming inconsistencies.", "title": "" }, { "docid": "63b2bc943743d5b8ef9220fd672df84f", "text": "In multiagent systems, we often have a set of agents each of which have a preference ordering over a set of items and one would like to know these preference orderings for various tasks, for example, data analysis, preference aggregation, voting etc. However, we often have a large number of items which makes it impractical to ask the agents for their complete preference ordering. In such scenarios, we usually elicit these agents' preferences by asking (a hopefully small number of) comparison queries — asking an agent to compare two items. Prior works on preference elicitation focus on unrestricted domain and the domain of single peaked preferences and show that the preferences in single peaked domain can be elicited by much less number of queries compared to unrestricted domain. We extend this line of research and study preference elicitation for single peaked preferences on trees which is a strict superset of the domain of single peaked preferences. We show that the query complexity crucially depends on the number of leaves, the path cover number, and the distance from path of the underlying single peaked tree, whereas the other natural parameters like maximum degree, diameter, pathwidth do not play any direct role in determining query complexity. We then investigate the query complexity for finding a weak Condorcet winner for preferences single peaked on a tree and show that this task has much less query complexity than preference elicitation. Here again we observe that the number of leaves in the underlying single peaked tree and the path cover number of the tree influence the query complexity of the problem.", "title": "" }, { "docid": "86d58f4196ceb48e29cb143e6a157c22", "text": "In this paper, we challenge a form of paragraph-to-question generation task. We propose a question generation system which can generate a set of comprehensive questions from a body of text. Besides the tree kernel functions to assess the grammatically of the generated questions, our goal is to rank them by using community-based question answering systems to calculate the importance of the generated questions. The main assumption behind our work is that each body of text is related to a topic of interest and it has a comprehensive information about the topic.", "title": "" }, { "docid": "75b0a7b0fa0320a3666fb147471dd45f", "text": "Maximum power densities by air-driven microbial fuel cells (MFCs) are considerably influenced by cathode performance. We show here that application of successive polytetrafluoroethylene (PTFE) layers (DLs), on a carbon/PTFE base layer, to the air-side of the cathode in a single chamber MFC significantly improved coulombic efficiencies (CEs), maximum power densities, and reduced water loss (through the cathode). Electrochemical tests using carbon cloth electrodes coated with different numbers of DLs indicated an optimum increase in the cathode potential of 117 mV with four-DLs, compared to a <10 mV increase due to the carbon base layer alone.
In MFC tests, four-DLs was also found to be the optimum number of coatings, resulting in a 171% increase in the CE (from 19.1% to 32%), a 42% increase in the maximum power density (from 538 to 766 mW m^-2), and measurable water loss was prevented. The increase in CE is believed to result from the increased power output and the increased operation time (due to a reduction in aerobic degradation of substrate sustained by oxygen diffusion through the cathode).", "title": "" }, { "docid": "615e43e2dc7c12c38c87a4a6649407c0", "text": "BACKGROUND\nThe management of chronic pain is a complex challenge worldwide. Cannabis-based medicines (CBMs) have proven to be efficient in reducing chronic pain, although the topic remains highly controversial in this field.\n\n\nOBJECTIVES\nThis study's aim is to conduct a conclusive review and meta-analysis, which incorporates all randomized controlled trials (RCTs) in order to update clinicians' and researchers' knowledge regarding the efficacy and adverse events (AEs) of CBMs for chronic and postoperative pain treatment.\n\n\nSTUDY DESIGN\nA systematic review and meta-analysis.\n\n\nMETHODS\nAn electronic search was conducted using Medline/Pubmed and Google Scholar with the use of Medical Subject Heading (MeSH) terms on all literature published up to July 2015. A follow-up manual search was conducted and included a complete cross-check of the relevant studies. The included studies were RCTs which compared the analgesic effects of CBMs to placebo. Hedges's g scores were calculated for each of the studies. A study quality assessment was performed utilizing the Jadad scale. A meta-analysis was performed utilizing random-effects models and heterogeneity between studies was statistically computed using I² statistic and tau² test.\n\n\nRESULTS\nThe results of 43 RCTs (a total of 2,437 patients) were included in this review, of which 24 RCTs (a total of 1,334 patients) were eligible for meta-analysis. This analysis showed limited evidence showing more pain reduction in chronic pain -0.61 (-0.78 to -0.43, P < 0.0001), especially by inhalation -0.93 (-1.51 to -0.35, P = 0.001) compared to placebo. Moreover, even though this review consisted of some RCTs that showed a clinically significant improvement with a decrease of pain scores of 2 points or more, 30% or 50% or more, the majority of the studies did not show an effect. Consequently, although the primary analysis showed that the results were favorable to CBMs over placebo, the clinical significance of these findings is uncertain. The most prominent AEs were related to the central nervous and the gastrointestinal (GI) systems.\n\n\nLIMITATIONS\nPublication limitation could have been present due to the inclusion of English-only published studies. Additionally, the included studies were extremely heterogeneous. Only 7 studies reported on the patients' history of prior consumption of CBMs. Furthermore, since cannabinoids are surrounded by considerable controversy in the media and society, cannabinoids have marked effects, so that inadequate blinding of the placebo could constitute an important source of limitation in these types of studies.\n\n\nCONCLUSIONS\nThe current systematic review suggests that CBMs might be effective for chronic pain treatment, based on limited evidence, primarily for neuropathic pain (NP) patients. 
Additionally, GI AEs occurred more frequently when CBMs were administered via oral/oromucosal routes than by inhalation.Key words: Cannabis, CBMs, chronic pain, postoperative pain, review, meta-analysis.", "title": "" }, { "docid": "d1aa525575e33c587d86e89566c21a49", "text": "This paper investigates the problem of fault detection for nonlinear discrete-time networked systems under an event-triggered scheme. A polynomial fuzzy fault detection filter is designed to generate a residual signal and detect faults in the system. A novel polynomial event-triggered scheme is proposed to determine the transmission of the signal. A fault detection filter is designed to guarantee that the residual system is asymptotically stable and satisfies the desired performance. Polynomial approximated membership functions obtained by Taylor series are employed for filtering analysis. Furthermore, sufficient conditions are represented in terms of sum of squares (SOSs) and can be solved by SOS tools in MATLAB environment. A numerical example is provided to demonstrate the effectiveness of the proposed results.", "title": "" }, { "docid": "7f47253095756d9640e8286a08ce3b74", "text": "A speaker’s intentions can be represented by domain actions (domainindependent speech act and domain-dependent concept sequence pairs). Therefore, it is essential that domain actions be determined when implementing dialogue systems because a dialogue system should determine users’ intentions from their utterances and should create counterpart intentions to the users’ intentions. In this paper, a neural network model is proposed for classifying a user’s domain actions and planning a system’s domain actions. An integrated neural network model is proposed for simultaneously determining user and system domain actions using the same framework. The proposed model performed better than previous non-integrated models in an experiment using a goal-oriented dialogue corpus. This result shows that the proposed integration method contributes to improving domain action determination performance. Keywords—Domain Action, Speech Act, Concept Sequence, Neural Network", "title": "" }, { "docid": "af3e8e26ec6f56a8cd40e731894f5993", "text": "Probiotic bacteria are sold mainly in fermented foods, and dairy products play a predominant role as carriers of probiotics. These foods are well suited to promoting the positive health image of probiotics for several reasons: 1) fermented foods, and dairy products in particular, already have a positive health image; 2) consumers are familiar with the fact that fermented foods contain living microorganisms (bacteria); and 3) probiotics used as starter organisms combine the positive images of fermentation and probiotic cultures. When probiotics are added to fermented foods, several factors must be considered that may influence the ability of the probiotics to survive in the product and become active when entering the consumer's gastrointestinal tract. These factors include 1) the physiologic state of the probiotic organisms added (whether the cells are from the logarithmic or the stationary growth phase), 2) the physical conditions of product storage (eg, temperature), 3) the chemical composition of the product to which the probiotics are added (eg, acidity, available carbohydrate content, nitrogen sources, mineral content, water activity, and oxygen content), and 4) possible interactions of the probiotics with the starter cultures (eg, bacteriocin production, antagonism, and synergism). 
The interactions of probiotics with either the food matrix or the starter culture may be even more intensive when probiotics are used as a component of the starter culture. Some of these aspects are discussed in this article, with an emphasis on dairy products such as milk, yogurt, and cheese.", "title": "" }, { "docid": "8eb0f822b4e8288a6b78abf0bf3aecbb", "text": "Cloud computing enables access to the widespread services and resources in cloud datacenters for mitigating resource limitations in low-potential client devices. Computational cloud is an attractive platform for computational offloading due to the attributes of scalability and availability of resources. Therefore, mobile cloud computing (MCC) leverages the application processing services of computational clouds for enabling computational-intensive and ubiquitous mobile applications on smart mobile devices (SMDs). Computational offloading frameworks focus on offloading intensive mobile applications at different granularity levels which involve resource-intensive mechanism of application profiling and partitioning at runtime. As a result, the energy consumption cost (ECC) and turnaround time of the application is increased. This paper proposes an active service migration (ASM) framework for computational offloading to cloud datacenters, which employs lightweight procedure for the deployment of runtime distributed platform. The proposed framework employs coarse granularity level and simple developmental and deployment procedures for computational offloading in MCC. ASM is evaluated by benchmarking prototype application on the Android devices in the real MCC environment. It is found that the turnaround time of the application reduces up to 45 % and ECC of the application reduces up to 33 % in ASM-based computational offloading as compared to traditional offloading techniques which shows the lightweight nature of the proposed framework for computational offloading.", "title": "" }, { "docid": "e731c10f822aa74b37263bee92a73be2", "text": "Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better. This paper reviews prior work on MTL, presents new evidence that MTL in backprop nets discovers task relatedness without the need of supervisory signals, and presents new results for MTL with k-nearest neighbor and kernel regression. In this paper we demonstrate multitask learning in three domains. We explain how multitask learning works, and show that there are many opportunities for multitask learning in real domains. We present an algorithm and results for multitask learning with case-based methods like k-nearest neighbor and kernel regression, and sketch an algorithm for multitask learning in decision trees. Because multitask learning works, can be applied to many different kinds of domains, and can be used with different learning algorithms, we conjecture there will be many opportunities for its use on real-world problems.", "title": "" }, { "docid": "815f6ee1be0244b3815903d97742bf5f", "text": "To evaluate the short- and long-term results after a modified Chevrel technique for midline incisional hernia repair, regarding surgical technique, hospital stay, wound complications, recurrence rate, and postoperative quality of life. 
These results will be compared to the literature derived reference values regarding the original and modified Chevrel techniques. In this large retrospective, single surgeon, single centre cohort all modified Chevrel hernia repairs between 2000 and 2012 were identified. Results were obtained by reviewing patients’ medical charts. Postoperative quality of life was measured using the Carolina Comfort Scale. A multi-database literature search was conducted to compare the results of our series to the literature based reference values. One hundred and fifty-five patients (84 male, 71 female) were included. Eighty patients (52%) had a large incisional hernia (width ≥ 10 cm) according the definition of the European Hernia Society. Fourteen patients (9%) underwent a concomitant procedure. Median length-of-stay was 5 days. Within 30 days postoperative 36 patients (23.2%) had 39 postoperative complications of which 30 were mild (CDC I–II), and nine severe (CDC III–IV). Thirty-one surgical site occurrences were observed in thirty patients (19.4%) of which the majority were seroma (16 patients 10.3%). There was no hernia-related mortality during follow-up. Recurrence rate was 1.8% after a median follow-up of 52 months (12–128 months). Postoperative quality of life was rated excellent. The modified Chevrel technique for midline ventral hernias results in a moderate complication rate, low recurrence rate and high rated postoperative quality of life.", "title": "" } ]
scidocsrr
b5058bb2c8ad7534f010c04fa0032c83
SurroundSense: mobile phone localization via ambience fingerprinting
[ { "docid": "ed9e22167d3e9e695f67e208b891b698", "text": "ÐIn k-means clustering, we are given a set of n data points in d-dimensional space R and an integer k and the problem is to determine a set of k points in R, called centers, so as to minimize the mean squared distance from each data point to its nearest center. A popular heuristic for k-means clustering is Lloyd's algorithm. In this paper, we present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which we call the filtering algorithm. This algorithm is easy to implement, requiring a kd-tree as the only major data structure. We establish the practical efficiency of the filtering algorithm in two ways. First, we present a data-sensitive analysis of the algorithm's running time, which shows that the algorithm runs faster as the separation between clusters increases. Second, we present a number of empirical studies both on synthetically generated data and on real data sets from applications in color quantization, data compression, and image segmentation. Index TermsÐPattern recognition, machine learning, data mining, k-means clustering, nearest-neighbor searching, k-d tree, computational geometry, knowledge discovery.", "title": "" }, { "docid": "8718d91f37d12b8ff7658723a937ea84", "text": "We consider the problem of monitoring road and traffic conditions in a city. Prior work in this area has required the deployment of dedicated sensors on vehicles and/or on the roadside, or the tracking of mobile phones by service providers. Furthermore, prior work has largely focused on the developed world, with its relatively simple traffic flow patterns. In fact, traffic flow in cities of the developing regions, which comprise much of the world, tends to be much more complex owing to varied road conditions (e.g., potholed roads), chaotic traffic (e.g., a lot of braking and honking), and a heterogeneous mix of vehicles (2-wheelers, 3-wheelers, cars, buses, etc.).\n To monitor road and traffic conditions in such a setting, we present Nericell, a system that performs rich sensing by piggybacking on smartphones that users carry with them in normal course. In this paper, we focus specifically on the sensing component, which uses the accelerometer, microphone, GSM radio, and/or GPS sensors in these phones to detect potholes, bumps, braking, and honking. Nericell addresses several challenges including virtually reorienting the accelerometer on a phone that is at an arbitrary orientation, and performing honk detection and localization in an energy efficient manner. We also touch upon the idea of triggered sensing, where dissimilar sensors are used in tandem to conserve energy. We evaluate the effectiveness of the sensing functions in Nericell based on experiments conducted on the roads of Bangalore, with promising results.", "title": "" } ]
[ { "docid": "a5ed1ebf973e3ed7ea106e55795e3249", "text": "The variable reluctance (VR) resolver is generally used instead of an optical encoder as a position sensor on motors for hybrid electric vehicles or electric vehicles owing to its reliability, low cost, and ease of installation. The commonly used conventional winding method for the VR resolver has disadvantages, such as complicated winding and unsuitability for mass production. This paper proposes an improved winding method that leads to simpler winding and better suitability for mass production than the conventional method. In this paper, through the design and finite element analysis for two types of output winding methods, the advantages and disadvantages of each method are presented, and the validity of the proposed winding method is verified. In addition, experiments with the VR resolver using the proposed winding method have been performed to verify its performance.", "title": "" }, { "docid": "cbde86d9b73371332a924392ae1f10d0", "text": "The difficulty to solve multiple objective combinatorial optimization problems with traditional techniques has urged researchers to look for alternative, better performing approaches for them. Recently, several algorithms have been proposed which are based on the Ant Colony Optimization metaheuristic. In this contribution, the existing algorithms of this kind are reviewed and experimentally tested in several instances of the bi-objective traveling salesman problem, comparing their performance with that of two well-known multi-objective genetic algorithms.", "title": "" }, { "docid": "446af0ad077943a77ac4a38fd84df900", "text": "We investigate the manufacturability of 20-nm double-gate and FinFET devices in integrated circuits by projecting process tolerances. Two important factors affecting the sensitivity of device electrical parameters to physical variations were quantitatively considered. The quantum effect was computed using the density gradient method and the sensitivity of threshold voltage to random dopant fluctuation was studied by Monte Carlo simulation. Our results show the 3 value ofVT variation caused by discrete impurity fluctuation can be greater than 100%. Thus, engineering the work function of gate materials and maintaining a nearly intrinsic channel is more desirable. Based on a design with an intrinsic channel and ideal gate work function, we analyzed the sensitivity of device electrical parameters to several important physical fluctuations such as the variations in gate length, body thickness, and gate dielectric thickness. We found that quantum effects have great impact on the performance of devices. As a result, the device electrical behavior is sensitive to small variations of body thickness. The effect dominates over the effects produced by other physical fluctuations. To achieve a relative variation of electrical parameters comparable to present practice in industry, we face a challenge of fin width control (less than 1 nm 3 value of variation) for the 20-nm FinFET devices. The constraint of the gate length variation is about 10 15%. We estimate a tolerance of 1 2 A 3 value of oxide thickness variation and up to 30% front-back oxide thickness mismatch.", "title": "" }, { "docid": "de0d2808f949723f1c0ee8e87052f889", "text": "The notion of Cloud computing has not only reshaped the field of distributed systems but also fundamentally changed how businesses utilize computing today. 
While Cloud computing provides many advanced features, it still has some shortcomings such as the relatively high operating cost for both public and private Clouds. The area of Green computing is also becoming increasingly important in a world with limited energy resources and an ever-rising demand for more computational power. In this paper a new framework is presented that provides efficient green enhancements within a scalable Cloud computing architecture. Using power-aware scheduling techniques, variable resource management, live migration, and a minimal virtual machine design, overall system efficiency will be vastly improved in a data center based Cloud with minimal performance overhead.", "title": "" }, { "docid": "e0d6212e77cbd54b54db5d38eca29814", "text": "Summarization aims to represent source documents by a shortened passage. Existing methods focus on the extraction of key information, but often neglect coherence. Hence the generated summaries suffer from a lack of readability. To address this problem, we have developed a graph-based method by exploring the links between text to produce coherent summaries. Our approach involves finding a sequence of sentences that best represent the key information in a coherent way. In contrast to the previous methods that focus only on salience, the proposed method addresses both coherence and informativeness based on textual linkages. We conduct experiments on the DUC2004 summarization task data set. A performance comparison reveals that the summaries generated by the proposed system achieve comparable results in terms of the ROUGE metric, and show improvements in readability by human evaluation.", "title": "" }, { "docid": "d9b19dd523fd28712df61384252d331c", "text": "Purpose – The purpose of this paper is to examine the ways in which governments build social media and information and communication technologies (ICTs) into e-government transparency initiatives, to promote collaboration with members of the public and the ways in members of the public are able to employ the same social media to monitor government activities. Design/methodology/approach – This study used an iterative strategy that involved conducting a literature review, content analysis, and web site analysis, offering multiple perspectives on government transparency efforts, the role of ICTs and social media in these efforts, and the ability of e-government initiatives to foster collaborative transparency through embedded ICTs and social media. Findings – The paper identifies key initiatives, potential impacts, and future challenges for collaborative e-government as a means of transparency. Originality/value – The paper is one of the first to examine the interrelationships between ICTs, social media, and collaborative e-government to facilitate transparency.", "title": "" }, { "docid": "d7c27413eb3f379618d1aafd85a43d3f", "text": "This paper presents a tool Altair that automatically generates API function cross-references, which emphasizes reliable structural measures and does not depend on specific client code. 
Altair ranks related API functions for a given query according to pair-wise overlap, i.e., how they share state, and clusters tightly related ones into meaningful modules.\n Experiments against several popular C software packages show that Altair recommends related API functions for a given query with remarkably more precise and complete results than previous tools, that it can extract modules from moderate-sized software (e.g., Apache with 1000+ functions) at high precision and recall rates (e.g., both exceeding 70% for two modules in Apache), and that the computation can finish within a few seconds.", "title": "" }, { "docid": "44b7ed6c8297b6f269c8b872b0fd6266", "text": "vii", "title": "" }, { "docid": "ee18a820614aac64d26474796464b518", "text": "Recommender systems have already proved to be valuable for coping with the information overload problem in several application domains. They provide people with suggestions for items which are likely to be of interest for them; hence, a primary function of recommender systems is to help people make good choices and decisions. However, most previous research has focused on recommendation techniques and algorithms, and less attention has been devoted to the decision making processes adopted by the users and possibly supported by the system. There is still a gap between the importance that the community gives to the assessment of recommendation algorithms and the current range of ongoing research activities concerning human decision making. Different decision-psychological phenomena can influence the decision making of users of recommender systems, and research along these lines is becoming increasingly important and popular. This special issue highlights how the coupling of recommendation algorithms with the understanding of human choice and decision making theory has the potential to benefit research and practice on recommender systems and to enable users to achieve a good balance between decision accuracy and decision effort.", "title": "" }, { "docid": "dd1e7bb3ba33c5ea711c0d066db53fa9", "text": "This paper presents the development and test of a flexible control strategy for an 11-kW wind turbine with a back-to-back power converter capable of working in both stand-alone and grid-connection mode. The stand-alone control is featured with a complex output voltage controller capable of handling nonlinear load and excess or deficit of generated power. Grid-connection mode with current control is also enabled for the case of isolated local grid involving other dispersed power generators such as other wind turbines or diesel generators. A novel automatic mode switch method based on a phase-locked loop controller is developed in order to detect the grid failure or recovery and switch the operation mode accordingly. A flexible digital signal processor (DSP) system that allows user-friendly code development and online tuning is used to implement and test the different control strategies. The back-to-back power conversion configuration is chosen where the generator converter uses a built-in standard flux vector control to control the speed of the turbine shaft while the grid-side converter uses a standard pulse-width modulation active rectifier control strategy implemented in a DSP controller. The design of the longitudinal conversion loss filter and of the involved PI-controllers are described in detail. 
Test results show the proposed methods works properly.", "title": "" }, { "docid": "79287d0ca833605430fefe4b9ab1fd92", "text": "Passwords are frequently used in data encryption and user authentication. Since people incline to choose meaningful words or numbers as their passwords, lots of passwords are easy to guess. This paper introduces a password guessing method based on Long Short-Term Memory recurrent neural networks. After training our LSTM neural network with 30 million passwords from leaked Rockyou dataset, the generated 3.35 billion passwords could cover 81.52% of the remaining Rockyou dataset. Compared with PCFG and Markov methods, this method shows higher coverage rate.", "title": "" }, { "docid": "27ffdb0d427d2e281ffe84e219e6ed72", "text": "UNLABELLED\nHitherto, noncarious cervical lesions (NCCLs) of teeth have been generally ascribed to either toothbrush-dentifrice abrasion or acid \"erosion.\" The last two decades have provided a plethora of new studies concerning such lesions. The most significant studies are reviewed and integrated into a practical approach to the understanding and designation of these lesions. A paradigm shift is suggested regarding use of the term \"biocorrosion\" to supplant \"erosion\" as it continues to be misused in the United States and many other countries of the world. Biocorrosion embraces the chemical, biochemical, and electrochemical degradation of tooth substance caused by endogenous and exogenous acids, proteolytic agents, as well as the piezoelectric effects only on dentin. Abfraction, representing the microstructural loss of tooth substance in areas of stress concentration, should not be used to designate all NCCLs because these lesions are commonly multifactorial in origin. Appropriate designation of a particular NCCL depends upon the interplay of the specific combination of three major mechanisms: stress, friction, and biocorrosion, unique to that individual case. Modifying factors, such as saliva, tongue action, and tooth form, composition, microstructure, mobility, and positional prominence are elucidated.\n\n\nCLINICAL SIGNIFICANCE\nBy performing a comprehensive medical and dental history, using precise terms and concepts, and utilizing the Revised Schema of Pathodynamic Mechanisms, the dentist may successfully identify and treat the etiology of root surface lesions. Preventive measures may be instituted if the causative factors are detected and their modifying factors are considered.", "title": "" }, { "docid": "598dd39ec35921242b94f17e24b30389", "text": "In this paper, we present a study on the characterization and the classification of textures. This study is performed using a set of values obtained by the computation of indexes. To obtain these indexes, we extract a set of data with two techniques: the computation of matrices which are statistical representations of the texture and the computation of \"measures\". These matrices and measures are subsequently used as parameters of a function bringing real or discrete values which give information about texture features. A model of texture characterization is built based on this numerical information, for example to classify textures. An application is proposed to classify cells nuclei in order to diagnose patients affected by the Progeria disease.", "title": "" }, { "docid": "159e040b0e74ad1b6124907c28e53daf", "text": "People (pedestrians, drivers, passengers in public transport) use different services on small mobile gadgets on a daily basis. 
So far, mobile applications don't react to context changes. Running services should adapt to the changing environment and new services should be installed and deployed automatically. We propose a classification of context elements that influence the behavior of the mobile services, focusing on the challenges of the transportation domain. Malware Detection on Mobile Devices Asaf Shabtai*, Ben-Gurion University, Israel Abstract: We present various approaches for mitigating malware on mobile devices which we have implemented and evaluated on Google Android. Our work is divided into the following three segments: a host-based intrusion detection framework; an implementation of SELinux in Android; and static analysis of Android application files. Dynamic Approximative Data Caching in Wireless Sensor Networks Nils Hoeller*, IFIS, University of Luebeck Abstract: Communication in Wireless Sensor Networks generally is the most energy consuming task. Retrieving query results from deep within the sensor network therefore consumes a lot of energy and hence shortens the network's lifetime. In this work optimizations for processing queries by using adaptive caching structures are discussed. Results can be retrieved from caches that are placed nearer to the query source. As a result the communication demand is reduced and hence energy is saved by using the cached results. To verify cache coherence in networks with non-reliable communication channels, an approximate update policy is presented. A degree of result quality can be defined for a query to find the adequate cache adaptively. Gossip-based Data Fusion Framework for Radio Resource Map Jin Yang*, Ilmenau University of Technology Abstract: In disaster scenarios, sensor networks are used to detect changes and estimate resource availability to further support the system recovery and rescue process. In this PhD thesis, sensor networks are used to detect available radio resources in order to form a global view of the radio resource map, based on locally sensed and measured data. Data fusion and harvesting techniques are employed for the generation and maintenance of this “radio resource map.” In order to guarantee the flexibility and fault tolerance goals of disaster scenarios, a gossip protocol is used to exchange information. The radio propagation field knowledge is closely coupled to harvesting and fusion protocols in order to achieve efficient fusing of radio measurement data. For the evaluation, simulations will be used to measure the efficiency and robustness in relation to time critical applications and various deployment densities. Actual radio data measurements within the Ilmenau area are being collected for further analysis of the map quality and in order to verify simulation results. Dynamic Social Grouping Based Routing in a Mobile Ad-Hoc Network Roy Cabaniss*, Missouri S&T Abstract: Trotta, University of Missouri, Kansas City, Srinivasa Vulli, Missouri University S&T The patterns of movement used by Mobile Ad-Hoc networks are application specific, in the sense that networks use nodes which travel in different paths. When these nodes are used in experiments involving social patterns, such as wildlife tracking, algorithms which detect and use these patterns can be used to improve routing efficiency. The intent of this paper is to introduce a routing algorithm which forms a series of social groups which accurately indicate a node’s regular contact patterns while dynamically shifting to represent changes to the social environment. With the social groups formed, a probabilistic routing schema is used to effectively identify which social groups have consistent contact with the base station, and route accordingly. The algorithm can be implemented dynamically, in the sense that the nodes initially have no awareness of their environment, and works to reduce overhead and message traffic while maintaining high delivery ratio. MobileSOA framework for Context-Aware Mobile Applications Aaratee Shrestha*, University of Leipzig Abstract: Mobile application development is more challenging when context-awareness is taken into account. This research introduces the benefit of implementing a Mobile Service Oriented Architecture (SOA). A robust mobile SOA framework is designed for building and operating lightweight and flexible Context-Aware Mobile Application (CAMA). We develop a lightweight and flexible CAMA to show dynamic integration of the systems, where all operations run smoothly in response to the rapidly changing environment using local and remote services. Keywords-service oriented architecture (SOA); mobile service; context-awareness; contextaware mobile application (CAMA). Performance Analysis of Secure Hierarchical Data Aggregation in Wireless Sensor Networks Vimal Kumar*, Missouri S&T Abstract: Data aggregation is a technique used to conserve battery power in wireless sensor networks (WSN). While providing security in such a scenario it is also important that we minimize the number of security operations as they are computationally expensive, without compromising on the security. In this paper we evaluate the performance of such an end to end security algorithm. We provide our results from the implementation of the algorithm on mica2 motes and conclude how it is better than traditional hop by hop security. A Communication Efficient Framework for Finding Outliers in Wireless Sensor Networks Dylan McDonald*, MS&T Abstract: Outlier detection is a well studied problem in various fields. The unique challenges of wireless sensor networks make this problem especially challenging. Sensors can detect outliers for a plethora of reasons and these reasons need to be inferred in real time. Here, we present a new communication technique to find outliers in a wireless sensor network. Communication is minimized through controlling sensor when sensors are allowed to communicate. At the same time, minimal assumptions are made about the nature of the data set as to 
Heating and cooling the polymer threads causes contraction and expansion, which can be utilized for actuation. In this paper, we describe the working principle of supercoiled polymer (SCP) actuation and explore the controllability and properties of these threads. We show that under appropriate environmental conditions, the threads are suitable as a building block for a controllable artificial muscle. We leverage off-the-shelf silver-coated threads to enable rapid electrical heating while the low thermal mass allows for rapid cooling. We utilize both thermal and thermomechanical models for feed-forward and feedback control. The resulting SCP actuator regulates to desired force levels in as little as 28 ms. Together with its inherent stiffness and damping, this is sufficient for a position controller to execute large step movements in under 100 ms. This controllability, high performance, the mechanical properties, and the extremely low material cost are indicative of a viable artificial muscle.", "title": "" }, { "docid": "2cb0c74e57dea6fead692d35f8a8fac6", "text": "Matching local image descriptors is a key step in many computer vision applications. For more than a decade, hand-crafted descriptors such as SIFT have been used for this task. Recently, multiple new descriptors learned from data have been proposed and shown to improve on SIFT in terms of discriminative power. This paper is dedicated to an extensive experimental evaluation of learned local features to establish a single evaluation protocol that ensures comparable results. In terms of matching performance, we evaluate the different descriptors regarding standard criteria. However, considering matching performance in isolation only provides an incomplete measure of a descriptors quality. For example, finding additional correct matches between similar images does not necessarily lead to a better performance when trying to match images under extreme viewpoint or illumination changes. Besides pure descriptor matching, we thus also evaluate the different descriptors in the context of image-based reconstruction. This enables us to study the descriptor performance on a set of more practical criteria including image retrieval, the ability to register images under strong viewpoint and illumination changes, and the accuracy and completeness of the reconstructed cameras and scenes. To facilitate future research, the full evaluation pipeline is made publicly available.", "title": "" }, { "docid": "7e4c00d8f17166cbfb3bdac8d5e5ad09", "text": "Twitter is now used to distribute substantive content such as breaking news, increasing the importance of assessing the credibility of tweets. As users increasingly access tweets through search, they have less information on which to base credibility judgments as compared to consuming content from direct social network connections. We present survey results regarding users' perceptions of tweet credibility. We find a disparity between features users consider relevant to credibility assessment and those currently revealed by search engines. We then conducted two experiments in which we systematically manipulated several features of tweets to assess their impact on credibility ratings. We show that users are poor judges of truthfulness based on content alone, and instead are influenced by heuristics such as user name when making credibility assessments. 
Based on these findings, we discuss strategies tweet authors can use to enhance their credibility with readers (and strategies astute readers should be aware of!). We propose design improvements for displaying social search results so as to better convey credibility.", "title": "" }, { "docid": "e39ad8ee1d913cba1707b6aafafceefb", "text": "Thoracic Outlet Syndrome (TOS) is the constellation of symptoms caused by compression of neurovascular structures at the superior aperture of the thorax, properly the thoracic inlet! The diagnosis and treatment are contentious and some even question its existence. Symptoms are often confused with distal compression neuropathies or cervical", "title": "" }, { "docid": "f56c5a623b29b88f42bf5d6913b2823e", "text": "We describe a novel interface for composition of polygonal meshes based around two artist-oriented tools: Geometry Drag-and-Drop and Mesh Clone Brush. Our drag-and-drop interface allows a complex surface part to be selected and interactively dragged to a new location. We automatically fill the hole left behind and smoothly deform the part to conform to the target surface. The artist may increase the boundary rigidity of this deformation, in which case a fair transition surface is automatically computed. Our clone brush allows for transfer of surface details with precise spatial control. These tools support an interaction style that has not previously been demonstrated for 3D surfaces, allowing detailed 3D models to be quickly assembled from arbitrary input meshes. We evaluated this interface by distributing a basic tool to computer graphics hobbyists and professionals, and based on their feedback, describe potential workflows which could utilize our techniques.", "title": "" }, { "docid": "863ec0a6a06ce9b3cc46c85b09a7af69", "text": "Apollonian circle packings arise by repeatedly filling the interstices between mutually tangent circles with further tangent circles. It is possible for every circle in such a packing to have integer radius of curvature, and we call such a packing an integral Apollonian circle packing. This paper studies number-theoretic properties of the set of integer curvatures appearing in such packings. Each Descartes quadruple of four tangent circles in the packing gives an integer solution to the Descartes equation, which relates the radii of curvature of four mutually tangent circles: x^2 + y^2 + z^2 + w^2 = (1/2)(x + y + z + w)^2. Each integral Apollonian circle packing is classified by a certain root quadruple of integers that satisfies the Descartes equation, and that corresponds to a particular quadruple of circles appearing in the packing. We express the number of root quadruples with fixed minimal element −n as a class number, and give an exact formula for it. We study which integers occur in a given integer packing, and determine congruence restrictions which sometimes apply. We present evidence suggesting that the set of integer radii of curvatures that appear in an integral Apollonian circle packing has positive density, and in fact represents all sufficiently large integers not excluded by congruence conditions. Finally, we discuss asymptotic properties of the set of curvatures obtained as the packing is recursively constructed from a root quadruple.", "title": "" } ]
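The Descartes equation quoted in the circle-packing passage above can be solved in closed form for the fourth curvature: w = x + y + z ± 2·sqrt(xy + yz + zx). A small Python sketch of that computation follows; the root quadruple (−1, 2, 2, 3) used in it is a standard example from the integral-packing literature rather than a value taken from this passage, and the function name is an arbitrary choice.

```python
from math import sqrt

def fourth_curvatures(a, b, c):
    # Descartes circle theorem: x^2 + y^2 + z^2 + w^2 = (1/2)(x + y + z + w)^2.
    # Solving the quadratic for the fourth curvature w gives
    # w = a + b + c +/- 2*sqrt(a*b + b*c + c*a).
    s = a * b + b * c + c * a
    root = 2 * sqrt(s)          # s >= 0 holds for mutually tangent circles
    return a + b + c + root, a + b + c - root

# Well-known root quadruple (-1, 2, 2, 3): the enclosing circle has curvature -1.
print(fourth_curvatures(-1, 2, 2))      # -> (3.0, 3.0)

# Vieta-style reflection: replacing one curvature w by 2*(x + y + z) - w yields
# another curvature of the same packing, e.g. 2*(2 + 2 + 3) - (-1) = 15.
x, y, z, w = -1, 2, 2, 3
print(2 * (y + z + w) - x)              # -> 15
```

The second print statement shows why integrality propagates: each new circle's curvature is an integer combination of the four it touches, which is the mechanism behind the integer packings the passage studies.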
scidocsrr
79cb18a6c5404c79bc16669f06645b43
Machine Learning algorithms: a study on noise sensitivity
[ { "docid": "cc1ae8daa1c1c4ee2b3b4a65ef48b6f5", "text": "The use of entropy as a distance measure has several benefits. Amongst other things it provides a consistent approach to handling of symbolic attributes, real valued attributes and missing values. The approach of taking all possible transformation paths is discussed. We describe K*, an instance-based learner which uses such a measure, and results are presented which compare favourably with several machine learning algorithms.", "title": "" } ]
[ { "docid": "1e7f603d0452cc81f2d460645a656ea2", "text": "5G, the next generation of wireless communications, is focusing on modern antenna technologies like massive MIMO, phased arrays and mm-wave band to obtain data rates up to 10 Gbps. In this paper, we have proposed a new 64 element, 8×8 phased series fed patch antenna array, for 28 GHz, mm-wave band 5G mobile base station antennas. The phased array steers its beam along the horizontal axis to provide the coverage to the users in different directions. The 8×8 array contains eight 8-element series fed arrays. The series fed array is designed in such a way that it shows a mixed standing wave and travelling wave behaviour. The initial seven elements of the series fed array show a standing wave behaviour, while the eighth element shows a travelling wave behaviour. It is due to the provision of the matched inset feeding, to effectively radiate the maximum power arriving at this element. The array is operating from 27.9 GHz to 28.4 GHz with a 500 MHz impedance bandwidth. The gain of the array is 24.25 dBi while the suppression of the side lobes is −14 dB. The half power beamwidth of the array is 11.5°. The beam steering is performed which shows a stable side lobe level. The array shows a stable array gain throughout the impedance bandwidth without any beam tilting due to the variations in the frequency. The simulations are performed using both CST Microwave Studio Suite 2015 and Ansys Electromagnetics Suite 17.0 simulation packages.", "title": "" }, { "docid": "34d0b8d4b1c25b4be30ad0c15435f407", "text": "Cranioplasty using alternate alloplastic bone substitutes instead of autologous bone grafting is inevitable in the clinical field. The authors present their experiences with cranial reshaping using methyl methacrylate (MMA) and describe technical tips that are keys to a successful procedure. A retrospective chart review of patients who underwent cranioplasty with MMA between April 2007 and July 2010 was performed. For 20 patients, MMA was used for cranioplasty after craniofacial trauma (n = 16), tumor resection (n = 2), and a vascular procedure (n = 2). The patients were divided into two groups. In group 1, MMA was used in full-thickness inlay fashion (n = 3), and in group 2, MMA was applied in partial-thickness onlay fashion (n = 17). The locations of reconstruction included the frontotemporal region (n = 5), the frontoparietotemporal region (n = 5), the frontal region (n = 9), and the vertex region (n = 1). The size of cranioplasty varied from 30 to 144 cm2. The amount of MMA used ranged from 20 to 70 g. This biomaterial was applied without difficulty, and no intraoperative complications were linked to the applied material. The patients were followed for 6 months to 4 years (mean, 2 years) after MMA implantation. None of the patients showed any evidence of implant infection, exposure, or extrusion. Moreover, the construct appeared to be structurally stable over time in all the patients. Methyl methacrylate is a useful adjunct for treating deficiencies of the cranial skeleton. It provides rapid and reliable correction of bony defects and contour deformities. Although MMA is alloplastic, appropriate surgical procedures can avoid problems such as infection and extrusion. An acceptable overlying soft tissue envelope should be maintained together with minimal contamination of the operative site. This journal requires that authors assign a level of evidence to each article. 
For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .", "title": "" }, { "docid": "3b755e81c28ccb3dcd95d3cbdb84c736", "text": "The goal of rootkit is often to hide malicious software running on a compromised machine. While there has been significant amount of research done on different rootkits, we describe a new type of rootkit that is kernel-independent – i.e., no aspect of the kernel is modified and no code is added to the kernel address space to install the rootkit. In this work, we present PIkit – Processor-Interconnect rootkit that exploits the vulnerable hardware features within multi-socket servers that are commonly used in datacenters and high-performance computing. In particular, PIkit exploits the DRAM address mapping table structure that determines the destination node of a memory request packet in the processorinterconnect. By modifying this mapping table appropriately, PIkit enables access to victim’s memory address region without proper permission. Once PIkit is installed, only user-level code or payload is needed to carry out malicious activities. The malicious payload mostly consists of memory read and/or write instructions that appear like “normal” user-space memory accesses and it becomes very difficult to detect such malicious payload. We describe the design and implementation of PIkit on both an AMD and an Intel x86 multi-socket servers that are commonly used. We discuss different malicious activities possible with PIkit and limitations of PIkit, as well as possible software and hardware solutions to PIkit.", "title": "" }, { "docid": "a3227034d28c2f2a0f858e1a233ecbc4", "text": "With the persistent shift towards multi-sourcing, the complexity of service delivery is continuously increasing. This presents new challenges for clients who now have to integrate interdependent services from multiple providers. As other functions, service integration is subject to make-or-buy decisions: clients can either build the required capabilities themselves or delegate service integration to external functions. To define detailed organizational models, one requires understanding of specific tasks and how to allocate them. Based on a qualitative and quantitative expert study, we analyze generic organizational models, and identify key service integration tasks. The allocation of these tasks to clients or their providers generates a set of granular organizational structures. We analyze drivers for delegating these tasks, and develop typical allocations in practice. Our work contributes to expanding the theoretical foundations of service integration. Moreover, our findings will assist clients to design their service integration organization, and to build more effective multi-sourcing solutions.", "title": "" }, { "docid": "dbdc0a429784aa085c571b7c01e3399f", "text": "A large number of deaths are caused by Traffic accidents worldwide. The global crisis of road safety can be seen by observing the significant number of deaths and injuries that are caused by road traffic accidents. In many situations the family members or emergency services are not informed in time. This results in delayed emergency service response time, which can lead to an individual’s death or cause severe injury. The purpose of this work is to reduce the response time of emergency services in situations like traffic accidents or other emergencies such as fire, theft/robberies and medical emergencies. 
Utilizing the onboard sensors of a smartphone to detect vehicular accidents, report them to the nearest available emergency responder, and provide real-time location tracking for responders and emergency victims will drastically increase the chances of survival for emergency victims, and will also help save emergency services time and resources. Keywords—Traffic accidents; accident detection; on-board sensor; accelerometer; android smartphones; real-time tracking; emergency services; emergency responder; emergency victim; SOSafe; SOSafe Go; firebase", "title": "" }, { "docid": "e2a1c8dfae27d56faf2fee494ffbae28", "text": "Quantitative structure-activity relationship (QSAR) modeling pertains to the construction of predictive models of biological activities as a function of structural and molecular information of a compound library. The concept of QSAR has typically been used for drug discovery and development and has gained wide applicability for correlating molecular information with not only biological activities but also with other physicochemical properties, which has therefore been termed quantitative structure-property relationship (QSPR). Typical molecular parameters that are used to account for electronic properties, hydrophobicity, steric effects, and topology can be determined empirically through experimentation or theoretically via computational chemistry. A given compilation of data sets is then subjected to data pre-processing and data modeling through the use of statistical and/or machine learning techniques. This review aims to cover the essential concepts and techniques that are relevant for performing QSAR/QSPR studies through the use of selected examples from our previous work.", "title": "" }, { "docid": "5f89dba01f03d4e7fbb2baa8877e0dff", "text": "The basic aim of a biometric identification system is to discriminate automatically between subjects in a reliable and dependable way, according to a specific-target application. Multimodal biometric identification systems aim to fuse two or more physical or behavioral traits to provide optimal False Acceptance Rate (FAR) and False Rejection Rate (FRR), thus improving system accuracy and dependability. In this paper, an innovative multimodal biometric identification system based on iris and fingerprint traits is proposed. The paper is a state-of-the-art advancement of multibiometrics, offering an innovative perspective on features fusion. In greater detail, a frequency-based approach results in a homogeneous biometric vector, integrating iris and fingerprint data. Successively, a Hamming-distance-based matching algorithm deals with the unified homogeneous biometric vector. The proposed multimodal system achieves interesting results with several commonly used databases. For example, we have obtained an interesting working point with FAR = 0% and FRR = 5.71% using the entire fingerprint verification competition (FVC) 2002 DB2B database and a randomly extracted same-size subset of the BATH database. At the same time, considering the BATH database and the FVC2002 DB2A database, we have obtained a further interesting working point with FAR = 0% and FRR = 7.28% ÷ 9.7%.", "title": "" }, { "docid": "dc94e340ceb76a0c9fda47bac4be9920", "text": "Mobile health (mHealth) apps are an ideal tool for monitoring and tracking long-term health conditions; they are becoming incredibly popular despite posing risks to personal data privacy and security.
In this paper, we propose a testing method for Android mHealth apps which is designed using a threat analysis, considering possible attack scenarios and vulnerabilities specific to the domain. To demonstrate the method, we have applied it to apps for managing hypertension and diabetes, discovering a number of serious vulnerabilities in the most popular applications. Here we summarise the results of that case study, and discuss the experience of using a testing method dedicated to the domain, rather than out-of-the-box Android security testing methods. We hope that details presented here will help design further, more automated, mHealth security testing tools and methods.", "title": "" }, { "docid": "33ef3a8f8f218ef38dce647bf232a3a7", "text": "Network traffic monitoring and analysis-related research has struggled to scale for massive amounts of data in real time. Some of the vertical scaling solutions provide good implementation of signature based detection. Unfortunately these approaches treat network flows across different subnets and cannot apply anomaly-based classification if attacks originate from multiple machines at a lower speed, like the scenario of Peer-to-Peer Botnets. In this paper the authors build up on the progress of open source tools like Hadoop, Hive and Mahout to provide a scalable implementation of quasi-real-time intrusion detection system. The implementation is used to detect Peer-to-Peer Botnet attacks using machine learning approach. The contributions of this paper are as follows: (1) Building a distributed framework using Hive for sniffing and processing network traces enabling extraction of dynamic network features; (2) Using the parallel processing power of Mahout to build Random Forest based Decision Tree model which is applied to the problem of Peer-to-Peer Botnet detection in quasi-real-time. The implementation setup and performance metrics are presented as initial observations and future extensions are proposed. 2014 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "76ecd4ba20333333af4d09b894ff29fc", "text": "This study is an application of social identity theory to feminist consciousness and activism. For women, strong gender identifications may enhance support for equality struggles, whereas for men, they may contribute to backlashes against feminism. University students (N � 276), primarily Euroamerican, completed a measure of gender self-esteem (GSE, that part of one’s selfconcept derived from one’s gender), and two measures of feminism. High GSE in women and low GSE in men were related to support for feminism. Consistent with past research, women were more supportive of feminism than men, and in both genders, support for feminist ideas was greater than self-identification as a feminist.", "title": "" }, { "docid": "046c9eaa6fc9a6516982477e1a02f6d0", "text": "Imperfections in healthcare revenue cycle management systems cause discrepancies between submitted claims and received payments. This paper presents a method for deriving attributional rules that can be used to support the preparation and screening of claims prior to their submission to payers. The method starts with unsupervised analysis of past payments to determine normal levels of payments for services. Then, supervised machine learning is used to derive sets of attributional rules for predicting potential discrepancies in claims. New claims can be then classified using the created models. 
The method was tested on a subset of Obstetrics claims for payment submitted by one hospital to Medicaid. One year of data was used to create models, which were tested using the following year's data. Results indicate that rule-based models are able to detect abnormal claims prior to their submission.", "title": "" }, { "docid": "7148408c07e6caee0b8f7cb1ff95443b", "text": "Kefir is a fermented milk drink produced by the actions of bacteria and yeasts contained in kefir grains, and is reported to have a unique taste and unique properties. During fermentation, peptides and exopolysaccharides are formed that have been shown to have bioactive properties. Moreover, in vitro and animal trials have shown kefir and its constituents to have anticarcinogenic, antimutagenic, antiviral and antifungal properties. Although kefir has been produced and consumed in Eastern Europe for a long period of time, few clinical trials are found in the scientific literature to support the health claims attributed to kefir. The large number of microorganisms in kefir, the variety of possible bioactive compounds that could be formed during fermentation, and the long list of reputed benefits of eating kefir make this fermented dairy product a complex", "title": "" }, { "docid": "c67fd84601a528ea951fcf9952f46316", "text": "Electric vehicles make use of permanent-magnet (PM) synchronous traction motors for their high torque density and efficiency. A comparison between interior PM and surface-mounted PM (SPM) motors is carried out, in terms of performance at given inverter ratings. The results of the analysis, based on a simplified analytical model and confirmed by finite element (FE) analysis, show that the two motors have similar rated power but that the SPM motor has barely no overload capability, independently of the available inverter current. Moreover, the loss behavior of the two motors is rather different in the various operating ranges with the SPM one better at low speed due to short end connections but penalized at high speed by the need of a significant deexcitation current. The analysis is validated through FE simulation of two actual motor designs.", "title": "" }, { "docid": "0ec1a33be6e06b4dbff7c906ccf970f0", "text": "Free/Open Source Software (F/OSS) projects are people-oriented and knowledge intensive software development environments. Many researchers focused on mailing lists to study coding activities of software developers. How expert software developers interact with each other and with non-developers in the use of community products have received little attention. This paper discusses the altruistic sharing of knowledge between knowledge providers and knowledge seekers in the Developer and User mailing lists of the Debian project. We analyze the posting and replying activities of the participants by counting the number of email messages they posted to the lists and the number of replies they made to questions others posted. We found out that participants interact and share their knowledge a lot, their positing activity is fairly highly correlated with their replying activity, the characteristics of posting and replying activities are different for different kinds of lists, and the knowledge sharing activity of self-organizing Free/Open Source communities could best be explained in terms of what we called ‘‘Fractal Cubic Distribution’’ rather than the power-law distribution mostly reported in the literature. 
The paper also proposes what could be researched in knowledge sharing activities in F/OSS projects mailing list and for what purpose. The research findings add to our understanding of knowledge sharing activities in F/OSS projects. 2007 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "92374e1c04f36046669aacb2982324de", "text": "Nine totally sleep deprived (TSD) and nine control subjects were evaluated with a complete battery for attention and memory performance. Frontal and temporal EEGs (5 min, eyes closed) were also recorded before and after the night. TSD subjects exhibited three performance deficits: learning the Pursuit Rotor Task, implicit recall of paired words, and distractibility on the Brown-Peterson Test. Relative to evening recordings, control subjects showed decreased morning absolute powers in all electrodes for all frequencies except for Frontal delta; TSD subjects showed increased Frontal and Temporal theta and Frontal beta. These results show that motor procedural, implicit memory, and working memory are sensitive to one night of TSD, and that Frontal and Temporal theta spectral power seem to discriminate between a night with sleep from a night without.", "title": "" }, { "docid": "b49e61ecb2afbaa8c3b469238181ec26", "text": "Stylistic variations of language, such as formality, carry speakers’ intention beyond literal meaning and should be conveyed adequately in translation. We propose to use lexical formality models to control the formality level of machine translation output. We demonstrate the effectiveness of our approach in empirical evaluations, as measured by automatic metrics and human assessments.", "title": "" }, { "docid": "708a69b49a1a7f32ee5cc501c033d2ce", "text": "Differential auction–barter (DAB) model augments the well-known double auction (DA) model with barter bids so that besides the usual purchase and sale activities, bidders can also carry out direct bartering of items. The DAB model also provides a mechanism for making or receiving a differential money payment as part of the direct bartering of items, hence, allowing bartering of different valued items. In this paper, we propose an extension to the DAB model, called the multi-unit differential auction–barter (MUDAB) model for e-marketplaces in which multiple instances of commodities are exchanged. Furthermore, a more powerful and flexible bidding language is designed which allows bidders to express their complex preferences of purchase, sell and exchange requests, and hence increases the allocative efficiency of the market compared to the DAB. The winner determination problem of the MUDAB model is formally defined, and a fast polynomial-time network flow based algorithm is proposed for solving the problem. The fast performance of the algorithm is also demonstrated on various test cases containing up to one million bids. Thus, the proposed model can be used in large-scale online auctions without worrying about the running times of the solver. 2010 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "02781a25d8fb7ed69480f944d63b56ae", "text": "Technology-supported learning systems have proved to be helpful in many learning situations. These systems require an appropriate representation of the knowledge to be learned, the Domain Module. The authoring of the Domain Module is cost and labor intensive, but its development cost might be lightened by profiting from semiautomatic Domain Module authoring techniques and promoting knowledge reuse. 
DOM-Sortze is a system that uses natural language processing techniques, heuristic reasoning, and ontologies for the semiautomatic construction of the Domain Module from electronic textbooks. To determine how it might help in the Domain Module authoring process, it has been tested with an electronic textbook, and the gathered knowledge has been compared with the Domain Module that instructional designers developed manually. This paper presents DOM-Sortze and describes the experiment carried out.", "title": "" }, { "docid": "65cc9459269fb23dd97ec25ffad4f041", "text": "Most of the existing literature on CRM value chain creation has focused on the effect of customer satisfaction and customer loyalty on customer profitability. In contrast, little has been studied about the CRM value creation chain at individual customer level and the role of self-construal (i.e., independent self-construal and interdependent self-construal) in such a chain. This research aims to construct the chain from customer value to organization value (i.e., customer satisfaction → customer loyalty → patronage behavior) and investigate the moderating effect of self-construal. To test the hypotheses suggested by our conceptual framework, we collected 846 data points from China in the context of mobile data services. The results show that customer’s self-construal can moderate the relationship chain from customer satisfaction to customer loyalty to relationship maintenance and development. This implies firms should tailor their customer strategies based on different self-construal features. 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "678a4872dfe753bac26bff2b29ac26b0", "text": "Cyber-physical systems (CPS), such as automotive systems, are starting to include sophisticated machine learning (ML) components. Their correctness, therefore, depends on properties of the inner ML modules. While learning algorithms aim to generalize from examples, they are only as good as the examples provided, and recent efforts have shown that they can produce inconsistent output under small adversarial perturbations. This raises the question: can the output from learning components lead to a failure of the entire CPS? In this work, we address this question by formulating it as a problem of falsifying signal temporal logic (STL) specifications for CPS with ML components. We propose a compositional falsification framework where a temporal logic falsifier and a machine learning analyzer cooperate with the aim of finding falsifying executions of the considered model. The efficacy of the proposed technique is shown on an automatic emergency braking system model with a perception component based on deep neural networks.", "title": "" } ]
scidocsrr
ffc920437de019647b81d41ec4a699b4
Whole Brain Segmentation: Automated Labeling of Neuroanatomical Structures in the Human Brain
[ { "docid": "d529b4f1992f438bb3ce4373090f8540", "text": "One conventional tool for interpolating surfaces over scattered data, the thin-plate spline, has an elegant algebra expressing the dependence of the physical bending energy of a thin metal plate on point constraints. For interpolation of a surface over a fixed set of nodes in the plane, the bending energy is a quadratic form in the heights assigned to the surface. The spline is the superposition of eigenvectors of the bending energy matrix, of successively larger physical scales, over a tilted flat plane having no bending energy at all. When these splines are paired, one representing the x-coordinate of another form and the other the y-coordinate, they aid greatly in the modeling of biological shape change as deformation. In this context, the pair becomes an interpolation map from RZ to R' relating two sets of landmark points. The spline maps decompose, in the same way as the spline surfaces, into a linear part (an affine transformation) together with the superposition of principal warps, which are geometrically independent, affine-free deformations of progressively smaller geometrical scales. The warps decompose an empirical deformation into orthogonal features more or less as a conventional orthogonal functional analysis decomposes the single scene. This paper demonstrates the decomposition of deformations by principal warps, extends the method to deal with curving edges between landmarks, relates this formalism to other applications of splines current in computer vision, and indicates how they might aid in the extraction of features for analysis, comparison, and diagnosis of biological and medical images.", "title": "" }, { "docid": "772fc1cf2dd2837227facd31f897dba3", "text": "Eighty-three brains obtained at autopsy from nondemented and demented individuals were examined for extracellular amyloid deposits and intraneuronal neurofibrillary changes. The distribution pattern and packing density of amyloid deposits turned out to be of limited significance for differentiation of neuropathological stages. Neurofibrillary changes occurred in the form of neuritic plaques, neurofibrillary tangles and neuropil threads. The distribution of neuritic plaques varied widely not only within architectonic units but also from one individual to another. Neurofibrillary tangles and neuropil threads, in contrast, exhibited a characteristic distribution pattern permitting the differentiation of six stages. The first two stages were characterized by an either mild or severe alteration of the transentorhinal layer Pre-α (transentorhinal stages I–II). The two forms of limbic stages (stages III–IV) were marked by a conspicuous affection of layer Pre-α in both transentorhinal region and proper entorhinal cortex. In addition, there was mild involvement of the first Ammon's horn sector. The hallmark of the two isocortical stages (stages V–VI) was the destruction of virtually all isocortical association areas. The investigation showed that recognition of the six stages required qualitative evaluation of only a few key preparations.", "title": "" } ]
[ { "docid": "8147143579de86a5eeb668037c2b8c5d", "text": "In this paper we propose that the conventional dichotomy between exemplar-based and prototype-based models of concept learning is helpfully viewed as an instance of what is known in the statistical learning literature as the bias/variance tradeoff. The bias/variance tradeoff can be thought of as a sliding scale that modulates how closely any learning procedure adheres to its training data. At one end of the scale (high variance), models can entertain very complex hypotheses, allowing them to fit a wide variety of data very closely--but as a result can generalize poorly, a phenomenon called overfitting. At the other end of the scale (high bias), models make relatively simple and inflexible assumptions, and as a result may fit the data poorly, called underfitting. Exemplar and prototype models of category formation are at opposite ends of this scale: prototype models are highly biased, in that they assume a simple, standard conceptual form (the prototype), while exemplar models have very little bias but high variance, allowing them to fit virtually any combination of training data. We investigated human learners' position on this spectrum by confronting them with category structures at variable levels of intrinsic complexity, ranging from simple prototype-like categories to much more complex multimodal ones. The results show that human learners adopt an intermediate point on the bias/variance continuum, inconsistent with either of the poles occupied by most conventional approaches. We present a simple model that adjusts (regularizes) the complexity of its hypotheses in order to suit the training data, which fits the experimental data better than representative exemplar and prototype models.", "title": "" }, { "docid": "409b257d38faef216a1056fd7c548587", "text": "Reservoir computing systems utilize dynamic reservoirs having short-term memory to project features from the temporal inputs into a high-dimensional feature space. A readout function layer can then effectively analyze the projected features for tasks, such as classification and time-series analysis. The system can efficiently compute complex and temporal data with low-training cost, since only the readout function needs to be trained. Here we experimentally implement a reservoir computing system using a dynamic memristor array. We show that the internal ionic dynamic processes of memristors allow the memristor-based reservoir to directly process information in the temporal domain, and demonstrate that even a small hardware system with only 88 memristors can already be used for tasks, such as handwritten digit recognition. The system is also used to experimentally solve a second-order nonlinear task, and can successfully predict the expected output without knowing the form of the original dynamic transfer function. Reservoir computing facilitates the projection of temporal input signals onto a high-dimensional feature space via a dynamic system, known as the reservoir. Du et al. realise this concept using metal-oxide-based memristors with short-term memory to perform digit recognition tasks and solve non-linear problems.", "title": "" }, { "docid": "42b6c55e48f58e3e894de84519cb6feb", "text": "What social value do Likes on Facebook hold? This research examines people’s attitudes and behaviors related to receiving one-click feedback in social media. 
Likes and other kinds of lightweight affirmation serve as social cues of acceptance and maintain interpersonal relationships, but may mean different things to different people. Through surveys and de-identified, aggregated behavioral Facebook data, we find that in general, people care more about who Likes their posts than how many Likes they receive, desiring feedback most from close friends, romantic partners, and family members other than their parents. While most people do not feel strongly that receiving “enough” Likes is important, roughly two-thirds of posters regularly receive more than “enough.” We also note a “Like paradox,” a phenomenon in which people’s friends receive more Likes because their friends have more friends to provide those Likes. Individuals with lower levels of self-esteem and higher levels of self-monitoring are more likely to think that Likes are important and to feel bad if they do not receive “enough” Likes. The results inform product design and our understanding of how lightweight interactions shape our experiences online.", "title": "" }, { "docid": "c0ba7119eaf77c6815f43ff329457e5e", "text": "In Utility Computing business model, the owners of the computing resources negotiate with their potential clients to sell computing power. The terms of the Quality of Service (QoS) and the economic conditions are established in a Service-Level Agreement (SLA). There are many scenarios in which the agreed QoS cannot be provided because of errors in the service provisioning or failures in the system. Since providers have usually different types of clients, according to their relationship with the provider or by the fee that they pay, it is important to minimize the impact of the SLA violations in preferential clients. This paper proposes a set of policies to provide better QoS to preferential clients in such situations. The criterion to classify clients is established according to the relationship between client and provider (external user, internal or another privileged relationship) and the QoS that the client purchases (cheap contracts or extra QoS by paying an extra fee). Most of the policies use key features of virtualization: Selective Violation of the SLAs, Dynamic Scaling of the Allocated Resources, and Runtime Migration of Tasks. The validity of the policies is demonstrated through exhaustive experiments.", "title": "" }, { "docid": "6cacb8cdc5a1cc17c701d4ffd71bdab1", "text": "Phishing costs Internet users billions of dollars a year. Using various data sets collected in real-time, this paper analyzes various aspects of phisher modi operandi. We examine the anatomy of phishing URLs and domains, registration of phishing domains and time to activation, and the machines used to host the phishing sites. Our findings can be used as heuristics in filtering phishing-related emails and in identifying suspicious domain registrations.", "title": "" }, { "docid": "b6e62590995a41adb1128703060e0e2d", "text": "Consumer-grade digital fabrication such as 3D printing is on the rise, and we believe it can be leveraged to great benefit in the arena of special education. Although 3D printing is beginning to infiltrate mainstream education, little to no research has explored 3D printing in the context of students with special support needs. We present a formative study exploring the use of 3D printing at three locations serving populations with varying ability, including individuals with cognitive, motor, and visual impairments. 
We found that 3D design and printing performs three functions in special education: developing 3D design and printing skills encourages STEM engagement; 3D printing can support the creation of educational aids for providing accessible curriculum content; and 3D printing can be used to create custom adaptive devices. In addition to providing opportunities to students, faculty, and caregivers in their efforts to integrate 3D printing in special education settings, our investigation also revealed several concerns and challenges. We present our investigation at three diverse sites as a case study of 3D printing in the realm of special education, discuss obstacles to efficient 3D printing in this context, and offer suggestions for designers and technologists.", "title": "" }, { "docid": "63262d2a9abdca1d39e31d9937bb41cf", "text": "A structural model is presented for synthesizing binaural sound from a monaural source. The model produces well-controlled vertical as well as horizontal effects. The model is based on a simplified time-domain description of the physics of wave propagation and diffraction. The components of the model have a one-to-one correspondence with the physical sources of sound diffraction, delay, and reflection. The simplicity of the model permits efficient implementation in DSP hardware, and thus facilitates real-time operation. Additionally, the parameters in the model can be adjusted to fit a particular individual’s characteristics, thereby producing individualized head-related transfer functions. Experimental tests verify the perceptual effectiveness of the approach.", "title": "" }, { "docid": "5487dd1976a164447c821303b53ebdf8", "text": "Rapid and pervasive digitization of innovation processes and outcomes has upended extant theories on innovation management by calling into question fundamental assumptions about the definitional boundaries for innovation, agency for innovation, and the relationship between innovation processes and outcomes. There is a critical need for novel theorizing on digital innovation management that does not rely on such assumptions and draws on the rich and rapidly emerging research on digital technologies. We offer suggestions for such theorizing in the form of four new theorizing logics, or elements, that are likely to be valuable in constructing more accurate explanations of innovation processes and outcomes in an increasingly digital world. These logics can open new avenues for researchers to contribute to this important area. Our suggestions in this paper, coupled with the six research notes included in the special issue on digital innovation management, seek to offer a broader foundation for reinventing innovation management research in a digital world.", "title": "" }, { "docid": "a959b14468625cb7692de99a986937c4", "text": "In this paper, we describe a novel method for searching and comparing 3D objects. The method encodes the geometric and topological information in the form of a skeletal graph and uses graph matching techniques to match the skeletons and to compare them. The skeletal graphs can be manually annotated to refine or restructure the search. This helps in choosing between a topological similarity and a geometric (shape) similarity. A feature of skeletal matching is the ability to perform part-matching, and its inherent intuitiveness, which helps in defining the search and in visualizing the results. 
Also, the matching results, which are presented on a per-node basis, can be used for driving a number of registration algorithms, most of which require a good initial guess to perform registration. In this paper, we also describe a visualization tool to aid in the selection and specification of the matched objects.", "title": "" }, { "docid": "afd1bc554857a1857ac4be5ee37cc591", "text": "We report on an investigation into people’s behaviors on information search tasks, specifically the relation between eye movement patterns and task characteristics. We conducted two independent user studies (n = 32 and n = 40), one with journalism tasks and the other with genomics tasks. The tasks were constructed to represent information needs of these two different user groups and to vary in several dimensions according to a task classification scheme. For each participant we classified eye gaze data to construct models of their reading patterns. The reading models were analyzed with respect to the effect of task types and Web page types on reading eye movement patterns. We report on relationships between tasks and individual reading behaviors at the task and page level. Specifically we show that transitions between scanning and reading behavior in eye movement patterns and the amount of text processed may be an implicit indicator of the current task type facets. This may be useful in building user and task models that can be useful in personalization of information systems and so address design demands driven by increasingly complex user actions with information systems. One of the contributions of this research is a new methodology to model information search behavior and investigate information acquisition and cognitive processing in interactive information tasks. 2011 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "0046aca3e98d75f9d3c414a6de42e017", "text": "Fast Downward is a classical planning system based on heuristic search. It can deal with general deterministic planning problems encoded in the propositional fragment of PDDL2.2, including advanced features like ADL conditions and effects and derived predicates (axioms). Like other well-known planners such as HSP and FF, Fast Downward is a progression planner, searching the space of world states of a planning task in the forward direction. However, unlike other PDDL planning systems, Fast Downward does not use the propositional PDDL representation of a planning task directly. Instead, the input is first translated into an alternative representation called multi-valued planning tasks, which makes many of the implicit constraints of a propositional planning task explicit. Exploiting this alternative representation, Fast Downward uses hierarchical decompositions of planning tasks for computing its heuristic function, called the causal graph heuristic, which is very different from traditional HSP-like heuristics based on ignoring negative interactions of operators. In this article, we give a full account of Fast Downward’s approach to solving multi-valued planning tasks.
We extend our earlier discussion of the causal graph heuristic to tasks involving axioms and conditional effects and present some novel techniques for search control that are used within Fast Downward’s best-first search algorithm: preferred operators transfer the idea of helpful actions from local search to global best-first search, deferred evaluation of heuristic functions mitigates the negative effect of large branching factors on search performance, and multi-heuristic best-first search combines several heuristic evaluation functions within a single search algorithm in an orthogonal way. We also describe efficient data structures for fast state expansion (successor generators and axiom evaluators) and present a new non-heuristic search algorithm called focused iterative-broadening search, which utilizes the information encoded in causal graphs in a novel way. Fast Downward has proven remarkably successful: It won the “classical” (i.e., propositional, non-optimising) track of the 4th International Planning Competition at ICAPS 2004, following in the footsteps of planners such as FF and LPG. Our experiments show that it also performs very well on the benchmarks of the earlier planning competitions and provide some insights about the usefulness of the new search enhancements.", "title": "" }, { "docid": "7647993815a13899e60fdc17f91e270d", "text": "of Dissertation presented to COPPE/UFRJ as a partial fulfillment of the requirements for the degree of Master of Science (M.Sc.) WHEN AUTOENCODERS MEET RECOMMENDER SYSTEMS: COFILS APPROACH Julio César Barbieri Gonzalez de Almeida", "title": "" }, { "docid": "71a76b562681450b23c512d4710c9f00", "text": "The paper reviews and extends an emerging body of theoretical results on deep learning including the conditions under which it can be exponentially better than shallow learning. A class of deep convolutional networks represent an important special case of these conditions, though weight sharing is not the main reason for their exponential advantage. Implications of a few key theorems are discussed, together with new results, open problems and conjectures.", "title": "" }, { "docid": "c70383b0a3adb6e697932ef4b02877ac", "text": "Betweenness centrality (BC) is a crucial graph problem that measures the significance of a vertex by the number of shortest paths leading through it. We propose Maximal Frontier Betweenness Centrality (MFBC): a succinct BC algorithm based on novel sparse matrix multiplication routines that performs a factor of p^(1/3) less communication on p processors than the best known alternatives, for graphs with n vertices and average degree k = n/p^(2/3). We formulate, implement, and prove the correctness of MFBC for weighted graphs by leveraging monoids instead of semirings, which enables a surprisingly succinct formulation. MFBC scales well for both extremely sparse and relatively dense graphs. It automatically searches a space of distributed data decompositions and sparse matrix multiplication algorithms for the most advantageous configuration. The MFBC implementation outperforms the well-known CombBLAS library by up to 8x and shows more robust performance. Our design methodology is readily extensible to other graph problems.", "title": "" }, { "docid": "826e01210bb9ce8171ed72043b4a304d", "text": "Despite their local fluency, long-form text generated from RNNs is often generic, repetitive, and even self-contradictory.
We propose a unified learning framework that collectively addresses all the above issues by composing a committee of discriminators that can guide a base RNN generator towards more globally coherent generations. More concretely, discriminators each specialize in a different principle of communication, such as Grice’s maxims, and are collectively combined with the base RNN generator through a composite decoding objective. Human evaluation demonstrates that text generated by our model is preferred over that of baselines by a large margin, significantly enhancing the overall coherence, style, and information of the generations.", "title": "" }, { "docid": "1886f5d95b1db7c222bc23770835e2b7", "text": "Signature files and inverted files are well-known index structures. In this paper we undertake a direct comparison of the two for searching for partially-specified queries in a large lexicon stored in main memory. Using n-grams to index lexicon terms, a bit-sliced signature file can be compressed to a smaller size than an inverted file if each n-gram sets only one bit in the term signature. With a signature width less than half the number of unique n-grams in the lexicon, the signature file method is about as fast as the inverted file method, and significantly smaller. Greater flexibility in memory usage and faster index generation time make signature files appropriate for searching large lexicons or other collections in an environment where memory is at a premium.", "title": "" }, { "docid": "95514c6f357115ef181b652eedd780fd", "text": "Application Programming Interfaces (APIs) are a tremendous resource—that is, when they are stable. Several studies have shown that this is unfortunately not the case. Of those, a large-scale study of API changes in the Pharo Smalltalk ecosystem documented several findings about API deprecations and their impact on API clients. We conduct a partial replication of this study, considering more than 25,000 clients of five popular Java APIs on GitHub. This work addresses several shortcomings of the previous study, namely: a study of several distinct API clients in a popular, statically-typed language, with more accurate version information. We compare and contrast our findings with the previous study and highlight new ones, particularly on the API client update practices and the startling similarities between reaction behavior in Smalltalk and Java.", "title": "" }, { "docid": "70f1f5de73c3a605b296299505fd4e61", "text": "Dropout is a popular stochastic regularization technique for deep neural networks that works by randomly dropping (i.e. zeroing) units from the network during training. This randomization process allows to implicitly train an ensemble of exponentially many networks sharing the same parametrization, which should be averaged at test time to deliver the final prediction. A typical workaround for this intractable averaging operation consists in scaling the layers undergoing dropout randomization. This simple rule called “standard dropout” is efficient, but might degrade the accuracy of the prediction. In this work we introduce a novel approach, coined “dropout distillation”, that allows us to train a predictor in a way to better approximate the intractable, but preferable, averaging process, while keeping under control its computational efficiency. We are thus able to construct models that are as efficient as standard dropout, or even more efficient, while being more accurate. 
Experiments on standard benchmark datasets demonstrate the validity of our method, yielding consistent improvements over conventional dropout.", "title": "" }, { "docid": "0932bc0e6eafeeb8b64d7b41ca820ac8", "text": "A novel, non-invasive, imaging methodology, based on the photoacoustic effect, is introduced in the context of artwork diagnostics with emphasis on the uncovering of hidden features such as underdrawings or original sketch lines in paintings. Photoacoustic microscopy, a rapidly growing imaging method widely employed in biomedical research, exploits the ultrasonic acoustic waves, generated by light from a pulsed or intensity modulated source interacting with a medium, to map the spatial distribution of absorbing components. Having over three orders of magnitude higher transmission through strongly scattering media, compared to light in the visible and near infrared, the photoacoustic signal offers substantially improved detection sensitivity and achieves excellent optical absorption contrast at high spatial resolution. Photoacoustic images, collected from miniature oil paintings on canvas, illuminated with a nanosecond pulsed Nd:YAG laser at 1064 nm on their reverse side, reveal clearly the presence of pencil sketch lines coated over by several paint layers, exceeding 0.5 mm in thickness. By adjusting the detection bandwidth of the optically induced ultrasonic waves, photoacoustic imaging can be used for looking into a broad variety of artefacts having diverse optical properties and geometrical profiles, such as manuscripts, glass objects, plastic modern art or even stone sculpture.", "title": "" } ]
scidocsrr
cfa8312d2b9c69d3d5ae6b445350708c
The Application of Data Mining to Build Classification Model for Predicting Graduate Employment
[ { "docid": "bfae60b46b97cf2491d6b1136c60f6a6", "text": "Educational data mining concerns with developing methods for discovering knowledge from data that come from educational domain. In this paper we used educational data mining to improve graduate students’ performance, and overcome the problem of low grades of graduate students. In our case study we try to extract useful knowledge from graduate students data collected from the college of Science and Technology – Khanyounis. The data include fifteen years period [1993-2007]. After preprocessing the data, we applied data mining techniques to discover association, classification, clustering and outlier detection rules. In each of these four tasks, we present the extracted knowledge and describe its importance in educational domain.", "title": "" }, { "docid": "2693030e6575cb7faec59aaec6387e2c", "text": "Human Resource (HR) applications can be used to provide fair and consistent decisions, and to improve the effectiveness of decision making processes. Besides that, among the challenge for HR professionals is to manage organization talents, especially to ensure the right person for the right job at the right time. For that reason, in this article, we attempt to describe the potential to implement one of the talent management tasks i.e. identifying existing talent by predicting their performance as one of HR application for talent management. This study suggests the potential HR system architecture for talent forecasting by using past experience knowledge known as Knowledge Discovery in Database (KDD) or Data Mining. This article consists of three main parts; the first part deals with the overview of HR applications, the prediction techniques and application, the general view of Data mining and the basic concept of talent management in HRM. The second part is to understand the use of Data Mining technique in order to solve one of the talent management tasks, and the third part is to propose the potential HR system architecture for talent forecasting. Keywords—HR Application, Knowledge Discovery in Database (KDD), Talent Forecasting.", "title": "" }, { "docid": "e390d922f802267ac4e7bd336080e2ca", "text": "Assessment as a dynamic process produces data that reasonable conclusions are derived by stakeholders for decision making that expectedly impact on students' learning outcomes. The data mining methodology while extracting useful, valid patterns from higher education database environment contribute to proactively ensuring students maximize their academic output. This paper develops a methodology by the derivation of performance prediction indicators to deploying a simple student performance assessment and monitoring system within a teaching and learning environment by mainly focusing on performance monitoring of students' continuous assessment (tests) and examination scores in order to predict their final achievement status upon graduation. Based on various data mining techniques (DMT) and the application of machine learning processes, rules are derived that enable the classification of students in their predicted classes. The deployment of the prototyped solution, integrates measuring, 'recycling' and reporting procedures in the new system to optimize prediction accuracy.", "title": "" } ]
[ { "docid": "11a28e11ba6e7352713b8ee63291cd9c", "text": "This review focuses on discussing the main changes on the upcoming fourth edition of the WHO Classification of Tumors of the Pituitary Gland emphasizing histopathological and molecular genetics aspects of pituitary neuroendocrine (i.e., pituitary adenomas) and some of the non-neuroendocrine tumors involving the pituitary gland. Instead of a formal review, we introduced the highlights of the new WHO classification by answering select questions relevant to practising pathologists. The revised classification of pituitary adenomas, in addition to hormone immunohistochemistry, recognizes the role of other immunohistochemical markers including but not limited to pituitary transcription factors. Recognizing this novel approach, the fourth edition of the WHO classification has abandoned the concept of \"a hormone-producing pituitary adenoma\" and adopted a pituitary adenohypophyseal cell lineage designation of the adenomas with subsequent categorization of histological variants according to hormone content and specific histological and immunohistochemical features. This new classification does not require a routine ultrastructural examination of these tumors. The new definition of the Null cell adenoma requires the demonstration of immunonegativity for pituitary transcription factors and adenohypophyseal hormones Moreover, the term of atypical pituitary adenoma is no longer recommended. In addition to the accurate tumor subtyping, assessment of the tumor proliferative potential by mitotic count and Ki-67 index, and other clinical parameters such as tumor invasion, is strongly recommended in individual cases for consideration of clinically aggressive adenomas. This classification also recognizes some subtypes of pituitary neuroendocrine tumors as \"high-risk pituitary adenomas\" due to the clinical aggressive behavior; these include the sparsely granulated somatotroph adenoma, the lactotroph adenoma in men, the Crooke's cell adenoma, the silent corticotroph adenoma, and the newly introduced plurihormonal Pit-1-positive adenoma (previously known as silent subtype III pituitary adenoma). An additional novel aspect of the new WHO classification was also the definition of the spectrum of thyroid transcription factor-1 expressing pituitary tumors of the posterior lobe as representing a morphological spectrum of a single nosological entity. These tumors include the pituicytoma, the spindle cell oncocytoma, the granular cell tumor of the neurohypophysis, and the sellar ependymoma.", "title": "" }, { "docid": "ade486df9ce338e0760f357db2340e55", "text": "The aim of the present study was to evaluate the effects of a 12-week home-based strength, explosive and plyometric (SEP) training on the cost of running (Cr) in well-trained ultra-marathoners and to assess the main mechanical parameters affecting changes in Cr. Twenty-five male runners (38.2 ± 7.1 years; body mass index: 23.0 ± 1.1 kg·m-2; V˙O2max: 55.4 ± 4.0 mlO2·kg-1·min-1) were divided into an exercise (EG = 13) and control group (CG = 12). Before and after a 12-week SEP training, Cr, spring-mass model parameters at four speeds (8, 10, 12, 14 km·h-1) were calculated and maximal muscle power (MMP) of the lower limbs was measured. 
In EG, Cr decreased significantly (p < .05) at all tested running speeds (-6.4 ± 6.5% at 8 km·h-1; -3.5 ± 5.3% at 10 km·h-1; -4.0 ± 5.5% at 12 km·h-1; -3.2 ± 4.5% at 14 km·h-1), contact time (tc) increased at 8, 10 and 12 km·h-1 by mean +4.4 ± 0.1% and ta decreased by -25.6 ± 0.1% at 8 km·h-1 (p < .05). Further, inverse relationships between changes in Cr and MMP at 10 (p = .013; r = -0.67) and 12 km·h-1 (p < .001; r = -0.86) were shown. Conversely, no differences were detected in the CG in any of the studied parameters. Thus, 12-week SEP training programme lower the Cr in well-trained ultra-marathoners at submaximal speeds. Increased tc and an inverse relationship between changes in Cr and changes in MMP could be in part explain the decreased Cr. Thus, adding at least three sessions per week of SEP exercises in the normal endurance-training programme may decrease the Cr.", "title": "" }, { "docid": "0be3178ff2f412952934a49084ee8edc", "text": "This article introduces the physics of information in the context of molecular biology and genomics. Entropy and information, the two central concepts of Shannon’s theory of information and communication, are often confused with each other but play transparent roles when applied to statistical ensembles (i.e., identically prepared sets) of symbolic sequences. Such an approach can distinguish between entropy and information in genes, predict the secondary structure of ribozymes, and detect the covariation between residues in folded proteins. We also review applications to molecular sequence and structure analysis, and introduce new tools in the characterization of resistance mutations, and in drug design. In a curious twist of history, the dawn of the age of genomics has both seen the rise of the science of bioinformatics as a tool to cope with the enormous amounts of data being generated daily, and the decline of the theory of information as applied to molecular biology. Hailed as a harbinger of a “new movement” (Quastler 1953) along with Cybernetics, the principles of information theory were thought to be applicable to the higher functions of living organisms, and able to analyze such functions as metabolism, growth, and differentiation (Quastler 1953). Today, the metaphors and the jargon of information theory are still widely used (Maynard Smith 1999a, 1999b), as opposed to the mathematical formalism, which is too often considered to be inapplicable to biological information. Clearly, looking back it appears that too much hope was laid upon this theory’s relevance for biology. However, there was well-founded optimism that information theory ought to be able to address the complex issues associated with the storage of information in the genetic code, only to be repeatedly questioned and rebuked (see, e.g., Vincent 1994, Sarkar 1996). In this article, I outline the concepts of entropy and information (as defined by Shannon) in the context of molecular biology. We shall see that not only are these terms well-defined and useful, they also coincide precisely with what we intuitively mean when we speak about information stored in genes, for example. I then present examples of applications of the theory to measuring the information content of biomolecules, the identification of polymorphisms, RNA and protein secondary structure prediction, the prediction and analysis of molecular interactions, and drug design. 1 Entropy and Information Entropy and information are often used in conflicting manners in the literature. 
A precise understanding, both mathematical and intuitive, of the notion of information (and its relationship to entropy) is crucial for applications in molecular biology. Therefore, let us begin by outlining Shannon’s original entropy concept (Shannon, 1948). 1.1 Shannon’s Uncertainty Measure Entropy in Shannon’s theory (defined mathematically below) is a measure of uncertainty about the identity of objects in an ensemble. Thus, while “en-", "title": "" }, { "docid": "5464889be41072ecff03355bf45c289f", "text": "Grid map registration is an important field in mobile robotics. Applications in which multiple robots are involved benefit from multiple aligned grid maps as they provide an efficient exploration of the environment in parallel. In this paper, a normal distribution transform (NDT)-based approach for grid map registration is presented. For simultaneous mapping and localization approaches on laser data, the NDT is widely used to align new laser scans to reference scans. The original grid quantization-based NDT results in good registration performances but has poor convergence properties due to discontinuities of the optimization function and absolute grid resolution. This paper shows that clustering techniques overcome disadvantages of the original NDT by significantly improving the convergence basin for aligning grid maps. A multi-scale clustering method results in an improved registration performance which is shown on real world experiments on radar data.", "title": "" }, { "docid": "c0610eab7d3825d6b12959fedd9656ea", "text": "We introduce a new deep convolutional neural network, CrescendoNet, by stacking simple building blocks without residual connections. Each Crescendo block contains independent convolution paths with increased depths. The numbers of convolution layers and parameters are only increased linearly in Crescendo blocks. In experiments, CrescendoNet with only 15 layers outperforms almost all networks without residual connections on benchmark datasets, CIFAR10, CIFAR100, and SVHN. Given sufficient amount of data as in SVHN dataset, CrescendoNet with 15 layers and 4.1M parameters can match the performance of DenseNet-BC with 250 layers and 15.3M parameters. CrescendoNet provides a new way to construct high performance deep convolutional neural networks with simple network architecture. Moreover, by investigating a various combination of subnetworks in CrescendoNet, we note that the high performance of CrescendoNet may come from its implicit ensemble behavior, which gives CrescendoNet an anytime classification property. Furthermore, the independence between paths in CrescendoNet allows us to introduce a new path-wise training procedure, which can reduce the memory needed for training.", "title": "" }, { "docid": "6767096adc28681387c77a68a3468b10", "text": "This study investigates fifty small and medium enterprises by using a survey approach to find out the key factors that are determinants to EDI adoption. Based upon the existing model, the study uses six factors grouped into three categories, namely organizational, environmental and technological aspects. The findings indicate that factors such as perceived benefits government support and management support are significant determinants of EDI adoption. The remaining factors like organizational culture, motivation to use EDI and task variety remain insignificant. 
Based upon the analysis of data, recommendations are made.", "title": "" }, { "docid": "6989ae9a7e6be738d0d2e8261251a842", "text": "A single-feed reconfigurable square-ring patch antenna with pattern diversity is presented. The antenna structure has four shorting walls placed respectively at each edge of the square-ring patch, in which two shorting walls are directly connected to the patch and the others are connected to the patch via pin diodes. By controlling the states of the pin diodes, the antenna can be operated at two different modes: monopolar plat-patch and normal patch modes; moreover, the 10 dB impedance bandwidths of the two modes are overlapped. Consequently, the proposed antenna allows its radiation pattern to be switched electrically between conical and broadside radiations at a fixed frequency. Detailed design considerations of the proposed antenna are described. Experimental and simulated results are also shown and discussed", "title": "" }, { "docid": "2c1de0ee482b3563c6b0b49bfdbbe508", "text": "The paper summarizes our research in the area of unsupervised categorization of Wikipedia articles. As a practical result of our research, we present an application of spectral clustering algorithm used for grouping Wikipedia search results. The main contribution of the paper is a representation method for Wikipedia articles that has been based on combination of words and links and used for categoriation of search results in this repository. We evaluate the proposed approach with Primary Component projections and show, on the test data, how usage of cosine transformation to create combined representations influence data variability. On sample test datasets, we also show how combined representation improves the data separation that increases overall results of data categorization. To implement the system, we review the main spectral clustering methods and we test their usability for text categorization. We give a brief description of the system architecture that groups online Wikipedia articles retrieved with user-specified keywords. Using the system, we show how clustering increases information retrieval effectiveness for Wikipedia data repository.", "title": "" }, { "docid": "cb70ab2056242ca739adde4751fbca2c", "text": "In this paper, we consider the task of learning control policies for text-based games. In these games, all interactions in the virtual world are through text and the underlying state is not observed. The resulting language barrier makes such environments challenging for automatic game players. We employ a deep reinforcement learning framework to jointly learn state representations and action policies using game rewards as feedback. This framework enables us to map text descriptions into vector representations that capture the semantics of the game states. We evaluate our approach on two game worlds, comparing against baselines using bag-ofwords and bag-of-bigrams for state representations. Our algorithm outperforms the baselines on both worlds demonstrating the importance of learning expressive representations. 1", "title": "" }, { "docid": "2c2dee4689e48f1a7c0061ac7d60a16b", "text": "Transfer learning algorithms are used when one has sufficient training data for one supervised learning task (the source task) but only very limited training data for a second task (the target task) that is similar but not identical to the first. These algorithms use varying assumptions about the similarity between the tasks to carry information from the source to the target task. 
Common assumptions are that only certain specific marginal or conditional distributions have changed while all else remains the same. Moreover, not much work on transfer learning has considered the case when a few labels in the test domain are available. Alternatively, if one has only the target task, but also has the ability to choose a limited amount of additional training data to collect, then active learning algorithms are used to make choices which will most improve performance on the target task. These algorithms may be combined into active transfer learning, but previous efforts have had to apply the two methods in sequence or use restrictive transfer assumptions. This thesis focuses on active transfer learning under the model shift assumption. We start by proposing two transfer learning algorithms that allow changes in all marginal and conditional distributions but assume the changes are smooth in order to achieve transfer between the tasks. We then propose an active learning algorithm for the second method that yields a combined active transfer learning algorithm. By analyzing the risk bounds for the proposed transfer learning algorithms, we show that when the conditional distribution changes, we are able to obtain a generalization error bound of O( 1 λ∗ √ nl ) with respect to the labeled target sample size nl, modified by the smoothness of the change (λ∗) across domains. Our analysis also sheds light on conditions when transfer learning works better than no-transfer learning (learning by labeled target data only). Furthermore, we consider a general case where both the support and the model change across domains. We transform both X (features) and Y (labels) by a parameterized-location-scale shift to achieve transfer between tasks. On the other hand, multi-task learning attempts to simultaneously leverage data from multiple domains in order to estimate related functions on each domain. Similar to transfer learning, multi-task problems are also solved by imposing some kind of “smooth” relationship among/between tasks. We study how different smoothness assumptions on task relations affect the upper bounds of algorithms proposed for these problems under different settings. Finally, we propose methods to predict the entire distribution P (Y ) and P (Y |X) by transfer, while allowing both marginal and conditional distributions to change. Moreover, we extend this framework to multi-source distribution transfer. We demonstrate the effectiveness of our methods on both synthetic examples and real-world applications, including yield estimation on the grape image dataset, predicting air-quality from Weibo posts for cities, predicting whether a robot successfully climbs over an obstacle, examination score prediction for schools, and location prediction for taxis. Acknowledgments First and foremost, I would like to express my sincere gratitude to my advisor Jeff Schneider, who has been the biggest help during my whole PhD life. His brilliant insights have helped me formulate the problems of this thesis, brainstorm on new ideas and exciting algorithms. I have learnt many things about research from him, including how to organize ideas in a paper, how to design experiments, and how to give a good academic talk. This thesis would not have been possible without his guidance, advice, patience and encouragement. I would like to thank my thesis committee members Christos Faloutsos, Geoff Gordon and Jerry Zhu for providing great insights and feedbacks on my thesis. 
Christos has been very nice and he always finds time to talk to me even if he is very busy. Geoff has provided great insights on extending my work to classification and helped me clarified many notations/descriptions in my thesis. Jerry has been very helpful in extending my work on the text data and providing me the air quality dataset. I feel very fortunate to have them as my committee members. I would also like to thank Professor Barnabás Póczos, Professor Roman Garnett and Professor Artur Dubrawski, for providing very helpful suggestions and collaborations during my PhD. I am very grateful to many of the faculty members at Carnegie Mellon. Eric Xing’s Machine Learning course has been my introduction course for Machine Learning at Carnegie Mellon and it has taught me a lot about the foundations of machine learning, including all the inspiring machine learning algorithms and the theories behind them. Larry Wasserman’s Intermediate Statistics and Statistical Machine Learning are both wonderful courses and have been keys to my understanding of the statistical perspective of many machine learning algorithms. Geoff Gordon and Ryan Tibshirani’s Convex Optimization course has been a great tutorial for me to develop all the efficient optimizing techniques for the algorithms I have proposed. Further I want to thank all my colleagues and friends at Carnegie Mellon, especially people from the Auton Lab and the Computer Science Department at CMU. I would like to thank Dougal Sutherland, Yifei Ma, Junier Oliva, Tzu-Kuo Huang for insightful discussions and advices for my research. I would also like to thank all my friends who have provided great support and help during my stay at Carnegie Mellon, and to name a few, Nan Li, Junchen Jiang, Guangyu Xia, Zi Yang, Yixin Luo, Lei Li, Lin Xiao, Liu Liu, Yi Zhang, Liang Xiong, Ligia Nistor, Kirthevasan Kandasamy, Madalina Fiterau, Donghan Wang, Yuandong Tian, Brian Coltin. I would also like to thank Prof. Alon Halevy, who has been a great mentor during my summer internship at google research and also has been a great help in my job searching process. Finally I would like to thank my family, my parents Sisi and Tiangui, for their unconditional love, endless support, and unwavering faith in me. I truly thank them for shaping who I am, for teaching me to be a person who would never lose hope and give up.", "title": "" }, { "docid": "8245472f3dad1dce2f81e21b53af5793", "text": "Butanol is an aliphatic saturated alcohol having the molecular formula of C(4)H(9)OH. Butanol can be used as an intermediate in chemical synthesis and as a solvent for a wide variety of chemical and textile industry applications. Moreover, butanol has been considered as a potential fuel or fuel additive. Biological production of butanol (with acetone and ethanol) was one of the largest industrial fermentation processes early in the 20th century. However, fermentative production of butanol had lost its competitiveness by 1960s due to increasing substrate costs and the advent of more efficient petrochemical processes. Recently, increasing demand for the use of renewable resources as feedstock for the production of chemicals combined with advances in biotechnology through omics, systems biology, metabolic engineering and innovative process developments is generating a renewed interest in fermentative butanol production. This article reviews biotechnological production of butanol by clostridia and some relevant fermentation and downstream processes. 
The strategies for strain improvement by metabolic engineering and further requirements to make fermentative butanol production a successful industrial process are also discussed.", "title": "" }, { "docid": "f6362a62b69999bdc3d9f681b68842fc", "text": "Women with breast cancer, whether screen detected or symptomatic, have both mammography and ultrasound for initial imaging assessment. Unlike X-ray or magnetic resonance, which produce an image of the whole breast, ultrasound provides comparatively limited 2D or 3D views located around the lesions. Combining different modalities is an essential task for accurate diagnosis and simulating ultrasound images based on whole breast data could be a way toward correlating different information about the same lesion. Very few studies have dealt with such a simulation framework since the breast undergoes large scale deformation between the prone position of magnetic resonance imaging and the largely supine or lateral position of ultrasound. We present a framework for the realistic simulation of 3D ultrasound images based on prone magnetic resonance images from which a supine position is generated using a biomechanical model. The simulation parameters are derived from a real clinical infrastructure and from transducers that are used for routine scans, leading to highly realistic ultrasound images of any region of the breast.", "title": "" }, { "docid": "c17e6363762e0e9683b51c0704d43fa7", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "a2196e1ace9469ed1408f34ea67ee510", "text": "Most current virtual reality (VR) interactions are mediated by hand-held input devices or hand gestures and they usually display only a partial representation of the user in the synthetic environment. We believe, representing the user as a full avatar that is controlled by natural movements of the person in the real world will lead to a greater sense of presence in VR. Possible applications exist in various domains such as entertainment, therapy, travel, real estate, education, social interaction and professional assistance. In this demo, we present MetaSpace, a virtual reality system that allows co-located users to explore a VR world together by walking around in physical space. Each user's body is represented by an avatar that is dynamically controlled by their body movements. We achieve this by tracking each user's body with a Kinect device such that their physical movements are mirrored in the virtual world. Users can see their own avatar and the other person's avatar allowing them to perceive and act intuitively in the virtual environment.", "title": "" }, { "docid": "6f3bfd9b592654ca451eb5850e5684bc", "text": "Mammals and birds have evolved three primary, discrete, interrelated emotion-motivation systems in the brain for mating, reproduction, and parenting: lust, attraction, and male-female attachment. Each emotion-motivation system is associated with a specific constellation of neural correlates and a distinct behavioral repertoire. 
Lust evolved to initiate the mating process with any appropriate partner; attraction evolved to enable individuals to choose among and prefer specific mating partners, thereby conserving their mating time and energy; male-female attachment evolved to enable individuals to cooperate with a reproductive mate until species-specific parental duties have been completed. The evolution of these three emotion-motivation systems contribute to contemporary patterns of marriage, adultery, divorce, remarriage, stalking, homicide and other crimes of passion, and clinical depression due to romantic rejection. This article defines these three emotion-motivation systems. Then it discusses an ongoing project using functional magnetic resonance imaging of the brain to investigate the neural circuits associated with one of these emotion-motivation systems, romantic attraction.", "title": "" }, { "docid": "a2fc7b5fbb88e45c84400b1fe15368ee", "text": "There is increasing evidence from functional magnetic resonance imaging (fMRI) that visual awareness is not only associated with activity in ventral visual cortex but also with activity in the parietal cortex. However, due to the correlational nature of neuroimaging, it remains unclear whether this parietal activity plays a causal role in awareness. In the experiment presented here we disrupted activity in right or left parietal cortex by applying repetitive transcranial magnetic stimulation (rTMS) over these areas while subjects attempted to detect changes between two images separated by a brief interval (i.e. 1-shot change detection task). We found that rTMS applied over right parietal cortex but not left parietal cortex resulted in longer latencies to detect changes and a greater rate of change blindness compared with no TMS. These results suggest that the right parietal cortex plays a critical role in conscious change detection.", "title": "" }, { "docid": "6f0ffda347abfd11dc78c0b76ceb11f8", "text": "A previous study of 22 medical patients with DSM-III-R-defined anxiety disorders showed clinically and statistically significant improvements in subjective and objective symptoms of anxiety and panic following an 8-week outpatient physician-referred group stress reduction intervention based on mindfulness meditation. Twenty subjects demonstrated significant reductions in Hamilton and Beck Anxiety and Depression scores postintervention and at 3-month follow-up. In this study, 3-year follow-up data were obtained and analyzed on 18 of the original 22 subjects to probe long-term effects. Repeated measures analysis showed maintenance of the gains obtained in the original study on the Hamilton [F(2,32) = 13.22; p < 0.001] and Beck [F(2,32) = 9.83; p < 0.001] anxiety scales as well as on their respective depression scales, on the Hamilton panic score, the number and severity of panic attacks, and on the Mobility Index-Accompanied and the Fear Survey. A 3-year follow-up comparison of this cohort with a larger group of subjects from the intervention who had met criteria for screening for the original study suggests generalizability of the results obtained with the smaller, more intensively studied cohort. Ongoing compliance with the meditation practice was also demonstrated in the majority of subjects at 3 years. 
We conclude that an intensive but time-limited group stress reduction intervention based on mindfulness meditation can have long-term beneficial effects in the treatment of people diagnosed with anxiety disorders.", "title": "" }, { "docid": "0d65394a132dba6d4d6827be8afda33e", "text": "PHYSICIANS’ ABILITY TO PROVIDE high-quality care can be adversely affected by many factors, including sleep deprivation. Concerns about the danger of physicians who are sleep deprived and providing care have led state legislatures and academic institutions to try to constrain the work hours of physicians in training (house staff). Unlike commercial aviation, for example, medicine is an industry in which public safety is directly at risk but does not have mandatory restrictions on work hours. Legislation before the US Congress calls for limiting resident work hours to 80 hours per week and no more than 24 hours of continuous work. Shifts of residents working in the emergency department would be limited to 12 hours. The proposed legislation, which includes public disclosure and civil penalties for hospitals that violate the work hour restrictions, does not address extended duty shifts of attending or private practice physicians. There is still substantial controversy within the medical community about the magnitude and significance of the clinical impairment resulting from work schedules that aggravate sleep deprivation. There is extensive literature on the adverse effects of sleep deprivation in laboratory and nonmedical settings. However, studies on sleep deprivation of physicians performing clinically relevant tasks have been less conclusive. Opinions have been further influenced by the potential adverse impact of reduced work schedules on the economics of health care, on continuity of care, and on quality of care. This review focuses on the consequences of sleep loss both in controlled laboratory environments and in clinical studies involving medical personnel.", "title": "" }, { "docid": "2b8ca8be8d5e468d4cd285ecc726eceb", "text": "These days, large-scale graph processing becomes more and more important. Pregel, inspired by Bulk Synchronous Parallel, is one of the highly used systems to process large-scale graph problems. In Pregel, each vertex executes a function and waits for a superstep to communicate its data to other vertices. Superstep is a very time-consuming operation, used by Pregel, to synchronize distributed computations in a cluster of computers. However, it may become a bottleneck when the number of communications increases in a graph with million vertices. Superstep works like a barrier in Pregel that increases the side effect of skew problem in distributed computing environment. ExPregel is a Pregel-like model that is designed to reduce the number of communication messages between two vertices resided on two different computational nodes. We have proven that ExPregel reduces the number of exchanged messages as well as the number of supersteps for all graph topologies. Enhancing parallelism in our new computational model is another important feature that manifolds the speed of graph analysis programs. More interestingly, ExPregel uses the same model of programming as Pregel. Our experiments on large-scale real-world graphs show that ExPregel can reduce network traffic as well as number of supersteps from 45% to 96%. Runtime speed up in the proposed model varies from 1.2× to 30×. 
Copyright © 2015 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "9b1f40687d0c9b78efdf6d1e19769294", "text": "The ideal cell type to be used for cartilage therapy should possess a proven chondrogenic capacity, not cause donor-site morbidity, and should be readily expandable in culture without losing their phenotype. There are several cell sources being investigated to promote cartilage regeneration: mature articular chondrocytes, chondrocyte progenitors, and various stem cells. Most recently, stem cells isolated from joint tissue, such as chondrogenic stem/progenitors from cartilage itself, synovial fluid, synovial membrane, and infrapatellar fat pad (IFP) have gained great attention due to their increased chondrogenic capacity over the bone marrow and subcutaneous adipose-derived stem cells. In this review, we first describe the IFP anatomy and compare and contrast it with other adipose tissues, with a particular focus on the embryological and developmental aspects of the tissue. We then discuss the recent advances in IFP stem cells for regenerative medicine. We compare their properties with other stem cell types and discuss an ontogeny relationship with other joint cells and their role on in vivo cartilage repair. We conclude with a perspective for future clinical trials using IFP stem cells.", "title": "" } ]
scidocsrr
e9b3ef7873e59a6e0ed2e5fb0631237f
Large-scale image classification: Fast feature extraction and SVM training
[ { "docid": "1a462dd716d6eb565fa03e0518e8d6eb", "text": "For large scale learning problems, it is desirable if we can obtain the optimal model parameters by going through the data in only one pass. Polyak and Juditsky (1992) showed that asymptotically the test performance of the simple average of the parameters obtained by stochastic gradient descent (SGD) is as good as that of the parameters which minimize the empirical cost. However, to our knowledge, despite its optimal asymptotic convergence rate, averaged SGD (ASGD) received little attention in recent research on large scale learning. One possible reason is that it may take a prohibitively large number of training samples for ASGD to reach its asymptotic region for most real problems. In this paper, we present a finite sample analysis for the method of Polyak and Juditsky (1992). Our analysis shows that it indeed usually takes a huge number of samples for ASGD to reach its asymptotic region for improperly chosen learning rate. More importantly, based on our analysis, we propose a simple way to properly set learning rate so that it takes a reasonable amount of data for ASGD to reach its asymptotic region. We compare ASGD using our proposed learning rate with other well known algorithms for training large scale linear classifiers. The experiments clearly show the superiority of ASGD.", "title": "" }, { "docid": "64e93cfb58b7cf331b4b74fadb4bab74", "text": "Support Vector Machines (SVMs) suffer from a widely recognized scalability problem in both memory use and computational time. To improve scalability, we have developed a parallel SVM algorithm (PSVM), which reduces memory use through performing a row-based, approximate matrix factorization, and which loads only essential data to each machine to perform parallel computation. Let n denote the number of training instances, p the reduced matrix dimension after factorization (p is significantly smaller than n), and m the number of machines. PSVM reduces the memory requirement from O(n2) to O(np/m), and improves computation time to O(np2/m). Empirical study shows PSVM to be effective. PSVM Open Source is available for download at http://code.google.com/p/psvm/.", "title": "" } ]
[ { "docid": "73bf9a956ea7a10648851c85ef740db0", "text": "Printed atmospheric spark gaps as ESD-protection on PCBs are examined. At first an introduction to the physic behind spark gaps. Afterward the time lag (response time) vs. voltage is measured with high load impedance. The dependable clamp voltage (will be defined later) is measured as a function of the load impedance and the local field in the air gap is simulated with FIT simulation software. At last the observed results are discussed on the basic of the physic and the simulations.", "title": "" }, { "docid": "ee3b9382afc9455e53dd41d3725eb74a", "text": "Deep convolutional neural networks have liberated its extraordinary power on various tasks. However, it is still very challenging to deploy stateof-the-art models into real-world applications due to their high computational complexity. How can we design a compact and effective network without massive experiments and expert knowledge? In this paper, we propose a simple and effective framework to learn and prune deep models in an end-to-end manner. In our framework, a new type of parameter – scaling factor is first introduced to scale the outputs of specific structures, such as neurons, groups or residual blocks. Then we add sparsity regularizations on these factors, and solve this optimization problem by a modified stochastic Accelerated Proximal Gradient (APG) method. By forcing some of the factors to zero, we can safely remove the corresponding structures, thus prune the unimportant parts of a CNN. Comparing with other structure selection methods that may need thousands of trials or iterative fine-tuning, our method is trained fully end-to-end in one training pass without bells and whistles. We evaluate our method, Sparse Structure Selection with several state-of-the-art CNNs, and demonstrate very promising results with adaptive depth and width selection. Code is available at: https://github.com/huangzehao/ sparse-structure-selection.", "title": "" }, { "docid": "31fc886990140919aabce17aa7774910", "text": "Today, at the low end of the communication protocols we find the inter-integrated circuit (I2C) and the serial peripheral interface (SPI) protocols. Both protocols are well suited for communications between integrated circuits for slow communication with on-board peripherals. The two protocols coexist in modern digital electronics systems, and they probably will continue to compete in the future, as both I2C and SPI are actually quite complementary for this kind of communication.", "title": "" }, { "docid": "48631b74c184f554f9c5692ed703d398", "text": "Simultaneously and accurately forecasting the behavior of many interacting agents is imperative for computer vision applications to be widely deployed (e.g., autonomous vehicles, security, surveillance, sports). In this paper, we present a technique using conditional variational autoencoder which learns a model that “personalizes” prediction to individual agent behavior within a group representation. Given the volume of data available and its adversarial nature, we focus on the sport of basketball and show that our approach efficiently predicts context-specific agent motions. We find that our model generates results that are three times as accurate as previous state of the art approaches (5.74 ft vs. 17.95 ft).", "title": "" }, { "docid": "dec9d40902f10c7eb5e627b7674fbd9f", "text": "In the paper we propose a genetic algorithm based on insertion heuristics for the vehicle routing problem with constraints. 
A random insertion heuristic is used to construct initial solutions and to reconstruct the existing ones. The location where a randomly chosen node will be inserted is selected by calculating an objective function. The process of random insertion preserves stochastic characteristics of the genetic algorithm and preserves feasibility of generated individuals. The defined crossover and mutation operators incorporate random insertion heuristics, analyse individuals and select which parts should be reinserted. Additionally, the second population is used in the mutation process. The second population increases the probability that the solution, obtained in the mutation process, will survive in the first population and increase the probability to find the global optimum. The result comparison shows that the solutions, found by the proposed algorithm, are similar to the optimal solutions obtained by other genetic algorithms. However, in most cases the proposed algorithm finds the solution in a shorter time and it makes this algorithm competitive with others.", "title": "" }, { "docid": "313271f587afe3224eaafc4243ab522f", "text": "Treatment-induced chronic vaginal changes after definitive radio(chemo)therapy for locally advanced cervical cancer patients are reported as one of the most distressing consequences of treatment, with major impact on quality of life. Although these vaginal changes are regularly documented during gynecological follow-up examinations, the classic radiation morbidity grading scales are not concise in their reporting. The aim of the study was therefore to identify and qualitatively describe, on the basis of vaginoscopies, morphological changes in the vagina after definitive radio(chemo)therapy and to establish a classification system for their detailed and reproducible documentation. Vaginoscopy with photodocumentation was performed prospectively in 22 patients with locally advanced cervical cancer after definitive radio(chemo)therapy at 3–24 months after end of treatment. All patients were in complete remission and without severe grade 3/4 morbidity outside the vagina. Five morphological parameters, which occurred consistently after treatment, were identified: mucosal pallor, telangiectasia, fragility of the vaginal wall, ulceration, and adhesions/occlusion. The symptoms in general were observed at different time points in individual patients; their quality was independent of the time of assessment. Based on the morphological findings, a comprehensive descriptive and semiquantitative scoring system was developed, which allows for classification of vaginal changes. A photographic atlas to illustrate the morphology of the alterations is presented. Vaginoscopy is an easily applicable, informative, and well-tolerated procedure for the objective assessment of morphological vaginal changes after radio(chemo)therapy and provides comprehensive and detailed information. This allows for precise classification of the severity of individual changes. Veränderungen der Vagina werden von Patientinnen nach definitiver Radio(chemo)therapie bei lokal fortgeschrittenem Zervixkarzinom als äußerst belastende chronische Morbidität beschrieben, welche die Lebensqualität signifikant beeinträchtigen kann. Obwohl diese vaginalen Nebenwirkungen routinemäßig in den gynäkologischen Nachsorgeuntersuchungen erfasst werden, werden sie in den klassischen Dokumentationssystemen für Nebenwirkungen der Strahlentherapie nur sehr unpräzise abgebildet. 
Ziele der vorliegenden Studie waren daher die Identifikation und qualitative Beschreibung morphologischer Veränderungen der Vagina nach definitiver Radio(chemo)therapie anhand von vaginoskopischen Bildern und die Etablierung eines spezifischen Klassifikationssystems für eine detaillierte und reproduzierbare Dokumentation. Von 22 Patientinnen mit lokal fortgeschrittenem Zervixkarzinom wurden vaginoskopische Bilder in einem Zeitraum von 3–24 Monaten nach definitiver Radio(chemo)therapie angefertigt. Alle Patientinnen waren in kompletter Remission und wiesen keine schwerwiegenden G3- oder G4-Nebenwirkungen außerhalb der Vagina auf. Es wurden regelhaft 5 morphologische Parameter bei den Patientinnen nach Radio(chemo)therapie beobachtet: Blässe der Schleimhaut, Teleangiektasien, Fragilität der Vaginalwand, Ulzerationen und Verklebungen bzw. Okklusionen. Diese Endpunkte wurden in den einzelnen Patientinnen zu verschiedenen Zeitpunkten gefunden, waren in ihrer Qualität aber unabhängig vom Zeitpunkt der Erhebung. Aufbauend auf diesen morphologischen Befunden wurde ein umfassendes deskriptives und semiquantitatives Beurteilungssystem für die Klassifikation vaginaler Nebenwirkungen entwickelt. Ein Bildatlas, der die Morphologie der Veränderungen illustriert, wird präsentiert. Die Vaginoskopie ist eine einfach anzuwendende, informative und von den Patientinnen gut tolerierte Untersuchungsmethode. Sie ist geeignet, morphologische Veränderungen der Vagina nach Radio(chemo)therapie objektiv zu erheben, und liefert umfassende und detaillierte Informationen, die eine präzise und reproduzierbare Klassifikation erlauben.", "title": "" }, { "docid": "d95fb46b3857b55602af2cf271300f5a", "text": "This paper proposes a new active interphase transformer for 24-pulse diode rectifier. The proposed scheme injects a compensation current into the secondary winding of either of the two first-stage interphase transformers. For only one of the first-stage interphase transformers being active, the inverter conducted the injecting current is with a lower kVA rating [1.26% pu (Po)] compared to conventional active interphase transformers. Moreover, the proposal scheme draws near sinusoidal input currents and the simulated and the experimental total harmonic distortion of overall line currents are only 1.88% and 2.27% respectively. When the inverter malfunctions, the input line current still can keep in the conventional 24-pulse situation. A digital-signal-processor (DSP) based digital controller is employed to calculate the desired compensation current and deals with the trigger signals needed for the inverter. Moreover, a 6kW prototype is built for test. Both simulation and experimental results demonstrate the validity of the proposed scheme.", "title": "" }, { "docid": "59693182ac2803d821c508e92383d499", "text": "We introduce the notion of image-driven simplification, a framework that uses images to decide which portions of a model to simplify. This is a departure from approaches that make polygonal simplification decisions based on geometry. As with many methods, we use the edge collapse operator to make incremental changes to a model. Unique to our approach, however, is the use at comparisons between images of the original model against those of a simplified model to determine the cost of an ease collapse. We use common graphics rendering hardware to accelerate the creation of the required images. As expected, this method produces models that are close to the original model according to image differences. 
Perhaps more surprising, however, is that the method yields models that have high geometric fidelity as well. Our approach also solves the quandary of how to weight the geometric distance versus appearance properties such as normals, color, and texture. All of these trade-offs are balanced by the image metric. Benefits of this approach include high fidelity silhouettes, extreme simplification of hidden portions of a model, attention to shading interpolation effects, and simplification that is sensitive to the content of a texture. In order to better preserve the appearance of textured models, we introduce a novel technique for assigning texture coordinates to the new vertices of the mesh. This method is based on a geometric heuristic that can be integrated with any edge collapse algorithm to produce high quality textured surfaces.", "title": "" }, { "docid": "984dc75b97243e448696f2bf0ba3c2aa", "text": "Background: Predicting credit card payment default is critical for the successful business model of a credit card company. An accurate predictive model can help the company identify customers who might default their payment in the future so that the company can get involved earlier to manage risk and reduce loss. It is even better if a model can assist the company on credit card application approval to minimize the risk at upfront. However, credit card default prediction is never an easy task. It is dynamic. A customer who paid his/her payment on time in the last few months may suddenly default his/her next payment. It is also unbalanced given the fact that default payment is rare compared to non-default payments. Unbalanced dataset will easily fail using most machine learning techniques if the dataset is not treated properly.", "title": "" }, { "docid": "323c217fa6e4b0c097779379d8ca8561", "text": "Photosynthetic antenna complexes capture and concentrate solar radiation by transferring the excitation to the reaction center that stores energy from the photon in chemical bonds. This process occurs with near-perfect quantum efficiency. Recent experiments at cryogenic temperatures have revealed that coherent energy transfer--a wave-like transfer mechanism--occurs in many photosynthetic pigment-protein complexes. Using the Fenna-Matthews-Olson antenna complex (FMO) as a model system, theoretical studies incorporating both incoherent and coherent transfer as well as thermal dephasing predict that environmentally assisted quantum transfer efficiency peaks near physiological temperature; these studies also show that this mechanism simultaneously improves the robustness of the energy transfer process. This theory requires long-lived quantum coherence at room temperature, which never has been observed in FMO. Here we present evidence that quantum coherence survives in FMO at physiological temperature for at least 300 fs, long enough to impact biological energy transport. These data prove that the wave-like energy transfer process discovered at 77 K is directly relevant to biological function. Microscopically, we attribute this long coherence lifetime to correlated motions within the protein matrix encapsulating the chromophores, and we find that the degree of protection afforded by the protein appears constant between 77 K and 277 K. 
The protein shapes the energy landscape and mediates an efficient energy transfer despite thermal fluctuations.", "title": "" }, { "docid": "d81fd54d3f1d005e71c7a5da1679b04a", "text": "This report illustrates photographically the adverse influence of short wavelength-induced light-scattering and autofluorescence on image quality, and the improvement of image quality that results by filtering out light wavelengths shorter than 480 nm. It provides additional data on the improvement in human vision (under conditions of excessive intraocular light-scattering and fluorescence) by filters that prevent short wavelength radiant energy from entering the eye.", "title": "" }, { "docid": "e1e74832bdc8a675c342b868b80bf1e4", "text": "Many network phenomena are well modeled as spreads of epidemics through a network. Prominent examples include the spread of worms and email viruses, and, more generally, faults. Many types of information dissemination can also be modeled as spreads of epidemics. In this paper we address the question of what makes an epidemic either weak or potent. More precisely, we identify topological properties of the graph that determine the persistence of epidemics. In particular, we show that if the ratio of cure to infection rates is larger than the spectral radius of the graph, then the mean epidemic lifetime is of order log n, where n is the number of nodes. Conversely, if this ratio is smaller than a generalization of the isoperimetric constant of the graph, then the mean epidemic lifetime is of order e^{na}, for a positive constant a. We apply these results to several network topologies including the hypercube, which is a representative connectivity graph for a distributed hash table, the complete graph, which is an important connectivity graph for BGP, and the power law graph, of which the AS-level Internet graph is a prime example. We also study the star topology and the Erdos-Renyi graph as their epidemic spreading behaviors determine the spreading behavior of power law graphs.", "title": "" }, { "docid": "76715b342c0b0a475ba6db06a0345c7b", "text": "Generalized linear mixed models are a widely used tool for modeling longitudinal data. However, their use is typically restricted to few covariates, because the presence of many predictors yields unstable estimates. The presented approach to the fitting of generalized linear mixed models includes an L1-penalty term that enforces variable selection and shrinkage simultaneously. A gradient ascent algorithm is proposed that allows to maximize the penalized log-likelihood yielding models with reduced complexity. In contrast to common procedures it can be used in high-dimensional settings where a large number of potentially influential explanatory variables is available. The method is investigated in simulation studies and illustrated by use of real data sets.", "title": "" }, { "docid": "701ddde2a7ff66c6767a2978ce7293f2", "text": "Epigenetics is the study of heritable changes in gene expression that does not involve changes to the underlying DNA sequence, i.e. a change in phenotype not involved by a change in genotype. At least three main factors seem responsible for epigenetic change, including DNA methylation, histone modification and non-coding RNA, each one sharing the same property to affect the dynamic of the chromatin structure by acting on nucleosome position. A nucleosome is a DNA-histone complex, where around 150 base pairs of double-stranded DNA are wrapped.
The role of nucleosomes is to pack the DNA into the nucleus of the Eukaryote cells, to form the Chromatin. Nucleosome positioning plays an important role in gene regulation and several studies show that distinct DNA sequence features have been identified to be associated with nucleosome presence. Starting from this suggestion, the identification of nucleosomes on a genomic scale has been successfully performed by DNA sequence features representation and classical supervised classification methods such as Support Vector Machines, Logistic regression and so on. Taking into consideration the successful application of deep neural networks on several challenging classification problems, in this paper we want to study how deep learning networks can help in the identification of nucleosomes.", "title": "" }, { "docid": "a6cf86ffa90c74b7d7d3254c7d33685a", "text": "Graph-based methods are known to be successful in many machine learning and pattern classification tasks. These methods consider semistructured data as graphs where nodes correspond to primitives (parts, interest points, and segments) and edges characterize the relationships between these primitives. However, these nonvectorial graph data cannot be straightforwardly plugged into off-the-shelf machine learning algorithms without a preliminary step of--explicit/implicit--graph vectorization and embedding. This embedding process should be resilient to intraclass graph variations while being highly discriminant. In this paper, we propose a novel high-order stochastic graphlet embedding that maps graphs into vector spaces. Our main contribution includes a new stochastic search procedure that efficiently parses a given graph and extracts/samples unlimitedly high-order graphlets. We consider these graphlets, with increasing orders, to model local primitives as well as their increasingly complex interactions. In order to build our graph representation, we measure the distribution of these graphlets into a given graph, using particular hash functions that efficiently assign sampled graphlets into isomorphic sets with a very low probability of collision. When combined with maximum margin classifiers, these graphlet-based representations have a positive impact on the performance of pattern comparison and recognition as corroborated through extensive experiments using standard benchmark databases.", "title": "" }, { "docid": "73a8c38d820e204c6993974fb352d33f", "text": "Many continuous control tasks have bounded action spaces. When policy gradient methods are applied to such tasks, out-of-bound actions need to be clipped before execution, while policies are usually optimized as if the actions are not clipped. We propose a policy gradient estimator that exploits the knowledge of actions being clipped to reduce the variance in estimation. We prove that our estimator, named clipped action policy gradient (CAPG), is unbiased and achieves lower variance than the conventional estimator that ignores action bounds. Experimental results demonstrate that CAPG generally outperforms the conventional estimator, indicating that it is a better policy gradient estimator for continuous control tasks. The source code is available at https://github.com/pfnet-research/capg.", "title": "" }, { "docid": "e6f75423017585cf7e65b316fd20c3f0", "text": "Blockchain, as a mechanism to decentralize services, security, and verifiability, offers a peer-to-peer system in which distributed nodes collaboratively affirm transaction provenance.
In particular, blockchain enforces continuous storage of transaction history, secured via digital signature, and affirmed through consensus. In this study, we consider the recent surge in blockchain interest as an alternative to traditional centralized systems, and consider the emerging applications thereof. In particular, we assess the key techniques required for blockchain implementation, offering a primer to guide research practitioners. We first outline the blockchain framework in general, and then provide a detailed review of the component data and network structures. Additionally, we consider the breadth of applications to which blockchain has been applied, broadly implicating Internet of Things (IoT), Big Data, and Cloud and Edge computing paradigms, along with many other emerging applications. Finally, we assess the various challenges to blockchain implementation for widespread practical use, considering the security vulnerabilities to majority attacks, selfish mining, and privacy leakage, as well as performance limitations of blockchain platforms in terms of scalability and availability.", "title": "" }, { "docid": "3ee39231fc2fbf3b6295b1b105a33c05", "text": "We address a text regression problem: given a piece of text, predict a real-world continuous quantity associated with the text’s meaning. In this work, the text is an SEC-mandated financial report published annually by a publiclytraded company, and the quantity to be predicted is volatility of stock returns, an empirical measure of financial risk. We apply wellknown regression techniques to a large corpus of freely available financial reports, constructing regression models of volatility for the period following a report. Our models rival past volatility (a strong baseline) in predicting the target variable, and a single model that uses both can significantly outperform past volatility. Interestingly, our approach is more accurate for reports after the passage of the Sarbanes-Oxley Act of 2002, giving some evidence for the success of that legislation in making financial reports more informative.", "title": "" }, { "docid": "2a1f1576ab73e190dce400dedf80df36", "text": "No wonder you activities are, reading will be always needed. It is not only to fulfil the duties that you need to finish in deadline time. Reading will encourage your mind and thoughts. Of course, reading will greatly develop your experiences about everything. Reading motivation reconsidered the concept of competence is also a way as one of the collective books that gives many advantages. The advantages are not only for you, but for the other peoples with those meaningful benefits.", "title": "" }, { "docid": "db3b14f6298771b44506a17da57c21ae", "text": "Virtuosos are human beings who exhibit exceptional performance in their field of activity. In particular, virtuosos are interesting for creativity studies because they are exceptional problem solvers. However, virtuosity is an under-studied field of human behaviour. Little is known about the processes involved to become a virtuoso, and in how they distinguish themselves from normal performers. Virtuosos exist in virtually all domains of human activities, and we focus in this chapter on the specific case of virtuosity in jazz improvisation. We first introduce some facts about virtuosos coming from physiology, and then focus on the case of jazz. 
Automatic generation of improvisation has long been a subject of study for computer science, and many techniques have been proposed to generate music improvisation in various genres. The jazz style in particular abounds with programs that create improvisations of a reasonable level. However, no approach so far exhibits virtuoso-level performance. We describe an architecture for the generation of virtuoso bebop phrases which integrates novel music generation mechanisms in a principled way. We argue that modelling such outstanding phenomena can contribute substantially to the understanding of creativity in humans and machines. 5.1 Virtuosos as Exceptional Humans 5.1.1 Virtuosity in Art There is no precise definition of virtuosity, but only a commonly accepted view that virtuosos are human beings that excel in their practice to the point of exhibiting exceptional performance. Virtuosity exists in virtually all forms of human activity. In painting, several artists use virtuosity as a means to attract the attention of their audience. Felice Varini paints on urban spaces in such a way that there is a unique viewpoint from which a spectator sees the painting as a perfect geometrical figure.", "title": "" } ]
scidocsrr
17d4fe83d25d370f44377d095e8258c0
Make it our time: In class multitaskers have lower academic performance
[ { "docid": "fc3b087bd2c0bd4e12f3cb86f6346c96", "text": "This study investigated whether changes in the technological/social environment in the United States over time have resulted in concomitant changes in the multitasking skills of younger generations. One thousand, three hundred and nineteen Americans from three generations were queried to determine their at-home multitasking behaviors. An anonymous online questionnaire asked respondents to indicate which everyday and technology-based tasks they choose to combine for multitasking and to indicate how difficult it is to multitask when combining the tasks. Combining tasks occurred frequently, especially while listening to music or eating. Members of the ‘‘Net Generation” reported more multitasking than members of ‘‘Generation X,” who reported more multitasking than members of the ‘‘Baby Boomer” generation. The choices of which tasks to combine for multitasking were highly correlated across generations, as were difficulty ratings of specific multitasking combinations. The results are consistent with a greater amount of general multitasking resources in younger generations, but similar mental limitations in the types of tasks that can be multitasked. 2008 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "91e32e80a6a2f2a504776b9fd86425ca", "text": "We propose a method for semi-supervised semantic segmentation using an adversarial network. While most existing discriminators are trained to classify input images as real or fake on the image level, we design a discriminator in a fully convolutional manner to differentiate the predicted probability maps from the ground truth segmentation distribution with the consideration of the spatial resolution. We show that the proposed discriminator can be used to improve semantic segmentation accuracy by coupling the adversarial loss with the standard cross entropy loss of the proposed model. In addition, the fully convolutional discriminator enables semi-supervised learning through discovering the trustworthy regions in predicted results of unlabeled images, thereby providing additional supervisory signals. In contrast to existing methods that utilize weakly-labeled images, our method leverages unlabeled images to enhance the segmentation model. Experimental results on the PASCAL VOC 2012 and Cityscapes datasets demonstrate the effectiveness of the proposed algorithm.", "title": "" }, { "docid": "083b21b5d9feccf0f03350fab3af7fc1", "text": "Abstraction without regret refers to the vision of using high-level programming languages for systems development without experiencing a negative impact on performance. A database system designed according to this vision offers both increased productivity and high performance instead of sacrificing the former for the latter as is the case with existing, monolithic implementations that are hard to maintain and extend.\n In this article, we realize this vision in the domain of analytical query processing. We present LegoBase, a query engine written in the high-level programming language Scala. The key technique to regain efficiency is to apply generative programming: LegoBase performs source-to-source compilation and optimizes database systems code by converting the high-level Scala code to specialized, low-level C code. We show how generative programming allows to easily implement a wide spectrum of optimizations, such as introducing data partitioning or switching from a row to a column data layout, which are difficult to achieve with existing low-level query compilers that handle only queries. We demonstrate that sufficiently powerful abstractions are essential for dealing with the complexity of the optimization effort, shielding developers from compiler internals and decoupling individual optimizations from each other.\n We evaluate our approach with the TPC-H benchmark and show that (a) with all optimizations enabled, our architecture significantly outperforms a commercial in-memory database as well as an existing query compiler. (b) Programmers need to provide just a few hundred lines of high-level code for implementing the optimizations, instead of complicated low-level code that is required by existing query compilation approaches. (c) These optimizations may potentially come at the cost of using more system memory for improved performance. (d) The compilation overhead is low compared to the overall execution time, thus making our approach usable in practice for compiling query engines.", "title": "" }, { "docid": "392d8c758d9e50ea416c3802dbddda5a", "text": "Enhancing the effectiveness of city services and assisting on a more sustainable development of cities are two of the crucial drivers of the smart city concept. 
This paper portrays a field trial that leverages an internet of things (IoT) platform intended for bringing value to existing and future smart city infrastructures. The paper highlights how IoT creates the basis permitting integration of current vertical city services into an all-encompassing system, which opens new horizons for the progress of the effectiveness and sustainability of our cities. Additionally, the paper describes a field trial on provisioning of real time data about available parking places both indoor and outdoor. The trial has been carried out at Santander’s (Spain) downtown area. The trial takes advantage of both available open data sets as well as of a large-scale IoT infrastructure. The trial is a showcase on how added-value services can be created on top of the proposed architecture.", "title": "" }, { "docid": "a008e9f817c6c4658c9c739d0d7fb6a4", "text": "BI (Business Intelligence) is an important discipline for companies and the challenges it faces are strategic. A central concept in BI is the data warehouse, which is a set of consolidated data from heterogeneous sources (usually databases in 3NF). To model the data warehouse, the Inmon and Kimball approaches are the most used. Both solutions monopolize the BI market. However, a third modeling approach called “Data Vault” of its creator Linstedt, is gaining ground from year to year. It allows building a data warehouse of raw (unprocessed) data from heterogeneous sources. The purpose of this paper is to present a comparative study of the three precedent approaches. First, we study each approach separately and then we draw a comparison between them. Finally, we include recommendations for selecting the best approach before concluding this paper.", "title": "" }, { "docid": "7c0b7d55abdd6cce85730dbf1cd02109", "text": "Suppose f1, f2, ..., fk are polynomials in one variable with all coefficients integral and leading coefficients positive, their degrees being h1, h2, ..., hk respectively. Suppose each of these polynomials is irreducible over the field of rational numbers and no two of them differ by a constant factor. Let Q(f1, f2, ..., fk; N) denote the number of positive integers n between 1 and N inclusive such that f1(n), f2(n), ..., fk(n) are all primes. (We ignore the finitely many values of n for which some fi(n) is negative.) Then heuristically we would expect to have for N large", "title": "" }, { "docid": "d931f6f9960e8688c2339a27148efe74", "text": "Most knowledge on the Web is encoded as natural language text, which is convenient for human users but very difficult for software agents to understand. Even with increased use of XML-encoded information, software agents still need to process the tags and literal symbols using application dependent semantics. The Semantic Web offers an approach in which knowledge can be published by and shared among agents using symbols with a well defined, machine-interpretable semantics. The Semantic Web is a “web of data” in that (i) both ontologies and instance data are published in a distributed fashion; (ii) symbols are either ‘literals’ or universally addressable ‘resources’ (URI references) each of which comes with unique semantics; and (iii) information is semi-structured. The Friend-of-a-Friend (FOAF) project (http://www.foafproject.org/) is a good application of the Semantic Web in which users publish their personal profiles by instantiating the foaf:Person class and adding various properties drawn from any number of ontologies.
The Semantic Web’s distributed nature raises significant data access problems – how can an agent discover, index, search and navigate knowledge on the Semantic Web? Swoogle (Ding et al. 2004) was developed to facilitate web-scale semantic web data access by providing these services to both human and software agents. It focuses on two levels of knowledge granularity: URI-based semantic web vocabulary and semantic web documents (SWDs), i.e., RDF and OWL documents encoded in XML, NTriples or N3. Figure 1 shows Swoogle’s architecture. The discovery component automatically discovers and revisits SWDs using a set of integrated web crawlers. The digest component computes metadata for SWDs and semantic web terms (SWTs) as well as identifies relations among them, e.g., “an SWD instantiates an SWT class”, and “an SWT class is the domain of an SWT property”. The analysis component uses cached SWDs and their metadata to derive analytical reports, such as classifying ontologies among SWDs and ranking SWDs by their importance. The service component sup-", "title": "" }, { "docid": "0a7a2cfe41f1a04982034ef9cb42c3d4", "text": "The biocontrol agent Torymus sinensis has been released into Japan, the USA, and Europe to suppress the Asian chestnut gall wasp, Dryocosmus kuriphilus. In this study, we provide a quantitative assessment of T. sinensis effectiveness for suppressing gall wasp infestations in Northwest Italy by annually evaluating the percentage of chestnuts infested by D. kuriphilus (infestation rate) and the number of T. sinensis adults that emerged per 100 galls (emergence index) over a 9-year period. We recorded the number of T. sinensis adults emerging from a total of 64,000 galls collected from 23 sampling sites. We found that T. sinensis strongly reduced the D. kuriphilus population, as demonstrated by reduced galls and an increased T. sinensis emergence index. Specifically, in Northwest Italy, the infestation rate was nearly zero 9 years after release of the parasitoid with no evidence of resurgence in infestation levels. In 2012, the number of T. sinensis females emerging per 100 galls was approximately 20 times higher than in 2009. Overall, T. sinensis proved to be an outstanding biocontrol agent, and its success highlights how the classical biological control approach may represent a cost-effective tool for managing an exotic invasive pest.", "title": "" }, { "docid": "6cce56cb38936894981a03c3f84c6353", "text": "Recently there is a resurgence of interest in the analysis of job satisfaction variables. Job satisfaction is correlated with labor market behavior such as productivity, quits and absenteeism. Recent work examined job satisfaction in relation to various factors. In this paper four different measures of job satisfaction are related to a variety of personal and job characteristics. We use a unique data set of 28,240 British employees, the Workplace Employee Relations Survey (WERS97). Our data set is larger and more recent than in the previous studies. The four measures of job satisfaction considered are satisfaction with influence over job, satisfaction with amount of pay, satisfaction with sense of achievement and satisfaction with respect from supervisors.
Although the job satisfaction measures we use are somewhat different than those that are previously used in the literature, a number of results that are commonly obtained with international data are found to hold in our data set as well.", "title": "" }, { "docid": "8ec9a57e096e05ad57e3421b67dc1b27", "text": "I review the literature on equity market momentum, a seminal and intriguing finding in finance. This phenomenon is the ability of returns over the past one to four quarters to predict future returns over the same period in the cross-section of equities. I am able to document about ten different theories for momentum, and a large volume of empirical work on the topic. I find, however, that after a quarter century following the discovery of momentum by Jegadeesh and Titman (1993), we are still no closer to finding a discernible cause for this phenomenon, in spite of the extensive work on the topic. More needs to be done to develop tests that are focused not so much on testing one specific theory, but on ruling out alternative", "title": "" }, { "docid": "98f246414ecd65785be73b6b95fbd2b4", "text": "The past few years have seen an enormous progress in the performance of Boolean satisfiability (SAT) solvers. Despite the worst-case exponential run time of all known algorithms, satisfiability solvers are increasingly leaving their mark as a general-purpose tool in areas as diverse as software and hardware verification [29–31, 228], automatic test pattern generation [138, 221], planning [129, 197], scheduling [103], and even challenging problems from algebra [238]. Annual SAT competitions have led to the development of dozens of clever implementations of such solvers [e. and the creation of an extensive suite of real-world instances as well as challenging hand-crafted benchmark problems [cf. 115]. Modern SAT solvers provide a \" black-box \" procedure that can often solve hard structured problems with over a million variables and several million constraints. In essence, SAT solvers provide a generic combinatorial reasoning and search platform. The underlying representational formalism is propositional logic. However, the full potential of SAT solvers only becomes apparent when one considers their use in applications that are not normally viewed as propositional reasoning tasks. For example, consider AI planning, which is a PSPACE-complete problem. By restricting oneself to polynomial size plans, one obtains an NP-complete reasoning problem , easily encoded as a Boolean satisfiability problem, which can be given to a SAT solver [128, 129]. In hardware and software verification, a similar strategy leads one to consider bounded model checking, where one places a bound on the length of possible error traces one is willing to consider [30]. Another example of a recent application of SAT solvers is in computing stable models used in the answer set programming paradigm, a powerful knowledge representation and reasoning approach [81]. In these applications—planning, verification, and answer set programming—the translation into a propositional representation (the \" SAT encoding \") is done automatically", "title": "" }, { "docid": "ca5c80d0af4c617bd0501ffaf003d6a9", "text": "Complex numbers are a fundamental aspect of the mathematical formalism of quantum physics. Quantum-like models developed outside physics often overlooked the role of complex numbers. Specifically, previous models in Information Retrieval (IR) ignored complex numbers. 
We argue that to advance the use of quantum models of IR, one has to lift the constraint of real-valued representations of the information space, and package more information within the representation by means of complex numbers. As a first attempt, we propose a complex-valued representation for IR, which explicitly uses complex-valued Hilbert spaces, and thus where terms, documents and queries are represented as complex-valued vectors. The proposal consists of integrating distributional semantics evidence within the real component of a term vector; whereas, ontological information is encoded in the imaginary component. Our proposal has the merit of lifting the role of complex numbers from a computational byproduct of the model to the very mathematical texture that unifies different levels of semantic information. An empirical instantiation of our proposal is tested in the TREC Medical Record task of retrieving cohorts for clinical studies.", "title": "" }, { "docid": "83ed915556df1c00f6448a38fb3b7ec3", "text": "Wandering liver or hepatoptosis is a rare entity in medical practice. It is also known as floating liver and hepatocolonic vagrancy. It describes the unusual finding of, usually through radiology, the alternate appearance of the liver on the right and left side, respectively. The first documented case of wandering liver was presented by Heister in 1754. Two centuries later, in 1958, Grayson recognized and described the association of wandering liver and tachycardia. In his paper, Grayson details the classical description of wandering liver documented by French in his index of differential diagnosis. In 2010, Jan F. Svensson et al. described the first report of a wandering liver in a neonate, reviewed and discussed the possible treatment strategies. When only displaced, it may wrongly be thought to be an enlarged liver", "title": "" }, { "docid": "3d55304ec02e868d19314ee57239ded2", "text": "In the mammalian olfactory system, information from approximately 1000 different odorant receptor types is organized in the nose into four spatial zones. Each zone is a mosaic of randomly distributed neurons expressing different receptor types. In these studies, we have obtained evidence that information highly distributed in the nose is transformed in the olfactory bulb of the brain into a highly organized spatial map. We find that specific odorant receptor gene probes hybridize in situ to small, and distinct, subsets of olfactory bulb glomeruli. The spatial and numerical characteristics of the patterns of hybridization that we observe with different receptor probes indicate that, in the olfactory bulb, olfactory information undergoes a remarkable organization into a fine, and perhaps stereotyped, spatial map. In our view, this map is in essence an epitope map, whose approximately 1000 distinct components are used in a multitude of different combinations to discriminate a vast array of different odors.", "title": "" }, { "docid": "e018139a38e5b1b3a3299626dd2c5295", "text": "The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform ∫_{R^d} K(x, y) g(y) dy at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N).
A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most O(r^2 (N^d/p) log N + (β r (N^d/p) + α) log p) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a 3D generalized Radon transform were respectively observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. These experiments at least partially support the theoretical argument that, given p = O(N^d) processes, the running time of the parallel algorithm is O((r^2 + β r + α) log N).", "title": "" }, { "docid": "c4b4c647e13d0300845bed2b85c13a3c", "text": "Several end-to-end deep learning approaches have been recently presented which extract either audio or visual features from the input images or audio signals and perform speech recognition. However, research on end-to-end audiovisual models is very limited. In this work, we present an end-to-end audiovisual model based on residual networks and Bidirectional Gated Recurrent Units (BGRUs). To the best of our knowledge, this is the first audiovisual fusion model which simultaneously learns to extract features directly from the image pixels and audio waveforms and performs within-context word recognition on a large publicly available dataset (LRW). The model consists of two streams, one for each modality, which extract features directly from mouth regions and raw waveforms. The temporal dynamics in each stream/modality are modeled by a 2-layer BGRU and the fusion of multiple streams/modalities takes place via another 2-layer BGRU. A slight improvement in the classification rate over an end-to-end audio-only and MFCC-based model is reported in clean audio conditions and low levels of noise. In presence of high levels of noise, the end-to-end audiovisual model significantly outperforms both audio-only models.", "title": "" }, { "docid": "dc6d342a2bc0caaa0ede564c85993dc0", "text": "Exoticism is the charm of the unfamiliar, it often means unusual, mystery, and it can evoke the atmosphere of remote lands. Although it has received interest in different arts, like painting and music, no study has been conducted on understanding exoticism from a computational perspective. To the best of our knowledge, this work is the first to explore the problem of exoticism-aware image classification, aiming at automatically measuring the amount of exoticism in images and investigating the significant aspects of the task. The estimation of image exoticism could be applied in fields like advertising and travel suggestion, as well as to increase serendipity and diversity of recommendations and search results. We propose a Fusion-based Deep Neural Network (FDNN) for this task, which combines image representations learned by Deep Neural Networks with visual and semantic hand-crafted features. Comparisons with other Machine Learning models show that our proposed architecture is the best performing one, reaching accuracy over 83% and 91% on two different datasets.
Moreover, experiments with classifiers exploiting both visual and semantic features allow to analyze what are the most important aspects for identifying exotic content. Ground truth has been gathered by retrieving exotic and not exotic images through a web search engine by posing queries with exotic and not exotic semantics, and then assessing the exoticism of the retrieved images via a crowdsourcing evaluation. The dataset is publicly released to promote advances in this novel field.", "title": "" }, { "docid": "71bafd4946377eaabff813bffd5617d7", "text": "Autumn-seeded winter cereals acquire tolerance to freezing temperatures and become vernalized by exposure to low temperature (LT). The level of accumulated LT tolerance depends on the cold acclimation rate and factors controlling timing of floral transition at the shoot apical meristem. In this study, genomic loci controlling the floral transition time were mapped in a winter wheat (T. aestivum L.) doubled haploid (DH) mapping population segregating for LT tolerance and rate of phenological development. The final leaf number (FLN), days to FLN, and days to anthesis were determined for 142 DH lines grown with and without vernalization in controlled environments. Analysis of trait data by composite interval mapping (CIM) identified 11 genomic regions that carried quantitative trait loci (QTLs) for the developmental traits studied. CIM analysis showed that the time for floral transition in both vernalized and non-vernalized plants was controlled by common QTL regions on chromosomes 1B, 2A, 2B, 6A and 7A. A QTL identified on chromosome 4A influenced floral transition time only in vernalized plants. Alleles of the LT-tolerant parent, Norstar, delayed floral transition at all QTLs except at the 2A locus. Some of the QTL alleles delaying floral transition also increased the length of vegetative growth and delayed flowering time. The genes underlying the QTLs identified in this study encode factors involved in regional adaptation of cold hardy winter wheat.", "title": "" }, { "docid": "3d2c2c59aa365d91b4c09d5726e9f9dc", "text": "The facial depressor muscles are an essential component of a full denture smile. In addition, the depressor muscles are actively used to express other human emotions such as sadness, anger, depression, and sorrow. Despite advances in microsurgical techniques, it is surprising how little effort has been directed toward reanimation of the lower lip. This article presents innovative approaches used in 74 patients by the senior author since 1981 for the dynamic reanimation of depressor muscle function. The surgical techniques include transfer of the anterior belly of the digastric muscle (n 5 22) and transfer of the platysma muscle (n 5 26) as a pedicled muscle to the corner of the mouth. Other surgical interventions used are the mini-hypoglossal nerve transfer to the cervicofacial branch of the ipsilateral facial nerve (n 5 20), direct neurotization of the depressor muscles (n 5 6), and facialto-facial nerve transfer. The depressor muscle function was graded by four observers after reviewing standard preoperative and postoperative videotapes. Rating of the functional and aesthetic results was done according to the following arbitrary scale: excellent (2), good (1.5), moderate (1), fair (0.5), and poor (0). Sixty-nine percent of the patients who had a digastric muscle transfer displayed good to excellent results, and 24 percent showed moderate restoration of the depressor mechanism postoperatively. 
Eighty-three percent of patients who had platysma transfer to the lower lip demonstrated good to excellent outcome, and 11 percent had moderate depressor muscle function. In the hypoglossal nerve transfer group, 72 percent of the patients achieved good to excellent results and 15 percent had moderate function of the depressor mechanism. Of the patients who underwent direct neurotization, 34 percent showed good to excellent depressor muscle function postoperatively and 66 percent achieved fair depressor muscle function. Excellent outcome was noted in the patient with VII to VII nerve transfer. In conclusion, this article presents innovative approaches to restore dynamic depressor muscle function, which so far has been a neglected area of facial reanimation. (Plast. Reconstr. Surg. 105: 1917, 2000.) Human facial expression is vital for social interaction; humans exhibit their feelings through complex contractions of the facial musculature with or without verbal articulation. Facial paralysis leaves a person severely debilitated. A smile is perhaps one of the most important human facial expressions. Rubin1 analyzed in detail facial movements in relation to regional muscle forces and categorized smiles into three types: (1) “Mona Lisa” or zygomaticus major dominant smile, (2) “canine” or levator labii superioris dominant smile, and (3) “full denture” or all muscles dominant smile.1 Depressor muscle function is an important component of the full denture smile. In addition, the depressor muscles are actively used to express other human expressions such as sadness, anger, rage, depression, and sorrow. The drooping of the lower lip by the depressor labii inferioris, depressor angularis, and the mentalis can denote disappointment, sorrow, crying, and, in the extreme, rage and hate.1 The lower lip is animated through a complex interaction of orbicularis oris, depressor labii inferioris, depressor anguli oris, mentalis, and platysma muscles. Measurement of lower facial excursion has shown that the lower lip moves about 5.6 mm in the direction of depression.2 Damage to the mandibular branch of the facial nerve results in an inability to draw the lower lip downward and laterally or to evert the vermilion border. Thus, the resultant deformity is", "title": "" }, { "docid": "6b46bdafd8d29d31e2aeacc386654f0e", "text": "An extended subdivision surface (ESub) is a generalization of Catmull Clark and NURBS surfaces. Depending on the knot intervals and valences of the vertices and faces, Catmull Clark as well as NURBS patches can be generated using the extended subdivision rules. Moreover, an arbitrary choice of the knot intervals and the topology is possible. Special features like sharp edges and corners are consistently supported by setting selected knot intervals to zero or by applying special rules. Compared to the prior nonuniform rational subdivision surfaces (NURSS), the ESubs offer limit-point rules which are indispensable in many applications, for example, for computer-aided design or in adaptive visualization. The refinement and limit-point rules for our nonuniform, nonstationary scheme are obtained via a new method using local Bézier control points. 
With our new surface, it is possible to start with existing Catmull Clark as well as NURBS models and to continue the modeling process using the extended subdivision options.", "title": "" }, { "docid": "03070cb9afce1c35d5ca30b325203eec", "text": "This paper describes a novel optimum path planning strategy for long duration AUV operations in environments with time-varying ocean currents. These currents can exceed the maximum achievable speed of the AUV, as well as temporally expose obstacles. In contrast to most other path planning strategies, paths have to be defined in time as well as space. The solution described here exploits ocean currents to achieve mission goals with minimal energy expenditure, or a tradeoff between mission time and required energy. The proposed algorithm uses a parallel swarm search as a means to reduce the susceptibility to large local minima on the complex cost surface. The performance of the optimisation algorithms is evaluated in simulation and experimentally with the Starbug AUV using a validated ocean model of Brisbane’s Moreton Bay.", "title": "" } ]
scidocsrr
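Note on the SAT-solver passage quoted in the row above: it describes modern solvers as a black-box procedure that takes a propositional (CNF) encoding of a problem such as planning or bounded model checking and returns a satisfying assignment. The fragment below is only an illustrative toy, not any of the competition solvers cited in the passage; the example formula and variable numbering are ours.

def dpll(clauses, assignment=None):
    """Tiny DPLL-style SAT search over CNF given as lists of signed ints.
    Illustrative only -- real solvers add clause learning, watched literals, etc."""
    if assignment is None:
        assignment = {}
    # Simplify clauses under the current partial assignment.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue                      # clause already satisfied
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None                   # clause falsified: conflict
        simplified.append(rest)
    if not simplified:
        return assignment                 # all clauses satisfied
    var = abs(simplified[0][0])           # branch on the first free variable
    for value in (True, False):
        result = dpll(simplified, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(dpll([[1, 2], [-1, 3], [-2, -3]]))

Production solvers layer unit propagation, conflict-driven clause learning and restarts on top of this basic branching scheme, which is what makes the million-variable instances mentioned in the passage tractable.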
3737e986774433d1b0de441a39607086
Money Laundering Detection using Synthetic Data
[ { "docid": "e67dc912381ebbae34d16aad0d3e7d92", "text": "In this paper, we study the problem of applying data mining to facilitate the investigation of money laundering crimes (MLCs). We have identified a new paradigm of problems --- that of automatic community generation based on uni-party data, the data in which there is no direct or explicit link information available. Consequently, we have proposed a new methodology for Link Discovery based on Correlation Analysis (LDCA). We have used MLC group model generation as an exemplary application of this problem paradigm, and have focused on this application to develop a specific method of automatic MLC group model generation based on timeline analysis using the LDCA methodology, called CORAL. A prototype of CORAL method has been implemented, and preliminary testing and evaluations based on a real MLC case data are reported. The contributions of this work are: (1) identification of the uni-party data community generation problem paradigm, (2) proposal of a new methodology LDCA to solve for problems in this paradigm, (3) formulation of the MLC group model generation problem as an example of this paradigm, (4) application of the LDCA methodology in developing a specific solution (CORAL) to the MLC group model generation problem, and (5) development, evaluation, and testing of the CORAL prototype in a real MLC case data.", "title": "" }, { "docid": "51eb8e36ffbf5854b12859602f7554ef", "text": "Fraud is increasing dramatically with the expansion of modern technology and the global superhighways of communication, resulting in the loss of billions of dollars worldwide each year. Although prevention technologies are the best way to reduce fraud, fraudsters are adaptive and, given time, will usually find ways to circumvent such measures. Methodologies for the detection of fraud are essential if we are to catch fraudsters once fraud prevention has failed. Statistics and machine learning provide effective technologies for fraud detection and have been applied successfully to detect activities such as money laundering, e-commerce credit card fraud, telecommunications fraud and computer intrusion, to name but a few. We describe the tools available for statistical fraud detection and the areas in which fraud detection technologies are most used.", "title": "" } ]
[ { "docid": "f069501007d4c9d1ada190353d01c7e9", "text": "A discrimination theory of selective perception was used to predict that a given trait would be spontaneously salient in a person's self-concept to the exten that this trait was distinctive for the person within her or his social groups. Sixth-grade students' general and physical spontaneous self-concepts were elicited in their classroom settings. The distinctiveness within the classroom of each student's characteristics on each of a variety of dimensions was determined, and it was found that in a majority of cases the dimension was significantly more salient in the spontaneous self-concepts of those students whose characteristic on thedimension was more distinctive. Also reported are incidental findings which include a description of the contents of spontaneous self-comcepts as well as determinants of their length and of the spontaneous mention of one's sex as part of one's self-concept.", "title": "" }, { "docid": "87eab42827061426dfc9b335530e7037", "text": "OBJECTIVES\nHealth behavior theories focus on the role of conscious, reflective factors (e.g., behavioral intentions, risk perceptions) in predicting and changing behavior. Dual-process models, on the other hand, propose that health actions are guided not only by a conscious, reflective, rule-based system but also by a nonconscious, impulsive, associative system. This article argues that research on health decisions, actions, and outcomes will be enriched by greater consideration of nonconscious processes.\n\n\nMETHODS\nA narrative review is presented that delineates research on implicit cognition, implicit affect, and implicit motivation. In each case, we describe the key ideas, how they have been taken up in health psychology, and the possibilities for behavior change interventions, before outlining directions that might profitably be taken in future research.\n\n\nRESULTS\nCorrelational research on implicit cognitive and affective processes (attentional bias and implicit attitudes) has recently been supplemented by intervention studies using implementation intentions and practice-based training that show promising effects. Studies of implicit motivation (health goal priming) have also observed encouraging findings. There is considerable scope for further investigations of implicit affect control, unconscious thought, and the automatization of striving for health goals.\n\n\nCONCLUSION\nResearch on nonconscious processes holds significant potential that can and should be developed by health psychologists. Consideration of impulsive as well as reflective processes will engender new targets for intervention and should ultimately enhance the effectiveness of behavior change efforts.", "title": "" }, { "docid": "ea49e4a74c165f3819e24d48df4777f2", "text": "BACKGROUND\nThe fatty tissue of the face is divided into compartments. The structures delimiting these compartments help shape the face, are involved in aging, and are encountered during surgical procedures.\n\n\nOBJECTIVE\nTo study the border between the lateral-temporal and the middle cheek fat compartments of the face.\n\n\nMETHODS & MATERIALS\nWe studied 40 human cadaver heads with gross dissections and macroscopic and histological sections. Gelatin was injected into the subcutaneous tissues of 35 heads.\n\n\nRESULTS\nA sheet of connective tissue, comparable to a septum, was consistently found between the lateral-temporal and the middle compartments. 
We call this structure the septum subcutaneum parotideomassetericum.\n\n\nCONCLUSION\nThere is a distinct septum between the lateral-temporal and the middle fat compartments of the face.", "title": "" }, { "docid": "3f9bcd99eac46264ee0920ddcc866d33", "text": "The advent of easy to use blogging tools is increasing the number of bloggers leading to more diversity in the quality blogspace. The blog search technologies that help users to find “good” blogs are thus more and more important. This paper proposes a new algorithm called “EigenRumor” that scores each blog entry by weighting the hub and authority scores of the bloggers based on eigenvector calculations. This algorithm enables a higher score to be assigned to the blog entries submitted by a good blogger but not yet linked to by any other blogs based on acceptance of the blogger's prior work. General Terms Algorithms, Management, Experimentation", "title": "" }, { "docid": "b527ade4819e314a723789de58280724", "text": "Securing collaborative filtering systems from malicious attack has become an important issue with increasing popularity of recommender Systems. Since recommender systems are entirely based on the input provided by the users or customers, they tend to become highly vulnerable to outside attacks. Prior research has shown that attacks can significantly affect the robustness of the systems. To prevent such attacks, researchers proposed several unsupervised detection mechanisms. While these approaches produce satisfactory results in detecting some well studied attacks, they are not suitable for all types of attacks studied recently. In this paper, we show that the unsupervised clustering can be used effectively for attack detection by computing detection attributes modeled on basic descriptive statistics. We performed extensive experiments and discussed different approaches regarding their performances. Our experimental results showed that attribute-based unsupervised clustering algorithm can detect spam users with a high degree of accuracy and fewer misclassified genuine users regardless of attack strategies.", "title": "" }, { "docid": "86177ff4fbc089fde87d1acd8452d322", "text": "Age of acquisition (AoA) effects have been used to support the notion of a critical period for first language acquisition. In this study, we examine AoA effects in deaf British Sign Language (BSL) users via a grammaticality judgment task. When English reading performance and nonverbal IQ are factored out, results show that accuracy of grammaticality judgement decreases as AoA increases, until around age 8, thus showing the unique effect of AoA on grammatical judgement in early learners. No such effects were found in those who acquired BSL after age 8. These late learners appear to have first language proficiency in English instead, which may have been used to scaffold learning of BSL as a second language later in life.", "title": "" }, { "docid": "39fc05dfc0faeb47728b31b6053c040a", "text": "Attempted and completed self-enucleation, or removal of one's own eyes, is a rare but devastating form of self-mutilation behavior. It is often associated with psychiatric disorders, particularly schizophrenia, substance induced psychosis, and bipolar disorder. We report a case of a patient with a history of bipolar disorder who gouged his eyes bilaterally as an attempt to self-enucleate himself. On presentation, the patient was manic with both psychotic features of hyperreligous delusions and command auditory hallucinations of God telling him to take his eyes out. 
On presentation, the patient had no light perception vision in both eyes and his exam displayed severe proptosis, extensive conjunctival lacerations, and visibly avulsed extraocular muscles on the right side. An emergency computed tomography scan of the orbits revealed small and irregular globes, air within the orbits, and intraocular hemorrhage. He was taken to the operating room for surgical repair of his injuries. Attempted and completed self-enucleation is most commonly associated with schizophrenia and substance induced psychosis, but can also present in patients with bipolar disorder. Other less commonly associated disorders include obsessive-compulsive disorder, depression, mental retardation, neurosyphilis, Lesch-Nyhan syndrome, and structural brain lesions.", "title": "" }, { "docid": "6f84dbe3cf41906b66a7b1d9fe8b0ff1", "text": "We show that the credit quality of corporate debt issuers deteriorates during credit booms, and that this deterioration forecasts low excess returns to corporate bondholders. The key insight is that changes in the pricing of credit risk disproportionately affect the financing costs faced by low quality firms, so the debt issuance of low quality firms is particularly useful for forecasting bond returns. We show that a significant decline in issuer quality is a more reliable signal of credit market overheating than rapid aggregate credit growth. We use these findings to investigate the forces driving time-variation in expected corporate bond returns.  For helpful suggestions, we are grateful to Malcolm Baker, Effi Benmelech, Dan Bergstresser, John Campbell, Sergey Chernenko, Lauren Cohen, Ian Dew-Becker, Martin Fridson, Victoria Ivashina, Chris Malloy, Andrew Metrick, Jun Pan, Erik Stafford, Luis Viceira, Jeff Wurgler, seminar participants at the 2012 AEA Annual Meetings, Columbia GSB, Dartmouth Tuck, Federal Reserve Bank of New York, Federal Reserve Board of Governors, Harvard Business School, MIT Sloan, NYU Stern, Ohio State Fisher, University of Chicago Booth, University of Pennsylvania Wharton, Washington University Olin, Yale SOM, and especially David Scharfstein, Andrei Shleifer, Jeremy Stein, and Adi Sunderam. We thank Annette Larson and Morningstar for data on bond returns and Mara Eyllon and William Lacy for research assistance. The Division of Research at the Harvard Business School provided funding.", "title": "" }, { "docid": "c2d0e11e37c8f0252ce77445bf583173", "text": "This paper describes a method to obtain accurate 3D body models and texture of arbitrary people from a single, monocular video in which a person is moving. Based on a parametric body model, we present a robust processing pipeline to infer 3D model shapes including clothed people with 4.5mm reconstruction accuracy. At the core of our approach is the transformation of dynamic body pose into a canonical frame of reference. Our main contribution is a method to transform the silhouette cones corresponding to dynamic human silhouettes to obtain a visual hull in a common reference frame. This enables efficient estimation of a consensus 3D shape, texture and implanted animation skeleton based on a large number of frames. Results on 4 different datasets demonstrate the effectiveness of our approach to produce accurate 3D models. 
Requiring only an RGB camera, our method enables everyone to create their own fully animatable digital double, e.g., for social VR applications or virtual try-on for online fashion shopping.", "title": "" }, { "docid": "0c4de7ce6574bb22d3cb0b9a7f3d5498", "text": "Purpose – The purpose of this paper is to attempts to provide further insight into IS adoption by investigating how 12 factors within the technology-organization-environment framework explain smalland medium-sized enterprises’ (SMEs) adoption of enterprise resource planning (ERP) software. Design/methodology/approach – The approach for data collection was questionnaire survey involving executives of SMEs drawn from six fast service enterprises with strong operations in Port Harcourt. The mode of sampling was purposive and snow ball and analysis involves logistic regression test; the likelihood ratios, Hosmer and Lemeshow’s goodness of fit, and Nagelkerke’s R provided the necessary lenses. Findings – The 12 hypothesized relationships were supported with each factor differing in its statistical coefficient and some bearing negative values. ICT infrastructures, technical know-how, perceived compatibility, perceived values, security, and firm’s size were found statistically significant adoption determinants. Although, scope of business operations, trading partners’ readiness, demographic composition, subjective norms, external supports, and competitive pressures were equally critical but their negative coefficients suggest they pose less of an obstacle to adopters than to non-adopters. Thus, adoption of ERP by SMEs is more driven by technological factors than by organizational and environmental factors. Research limitations/implications – The study is limited by its scope of data collection and phases, therefore extended data are needed to apply the findings to other sectors/industries and to factor in the implementation and post-adoption phases in order to forge a more integrated and holistic adoption framework. Practical implications – The model may be used by IS vendors to make investment decisions, to meet customers’ needs, and to craft informed marketing programs that would appeal to actual and potential adopters and cause them to progress in the customer loyalty ladder. Originality/value – The paper contributes to the growing research on IS innovations’ adoption by using factors within the T-O-E framework to explains SMEs’ adoption of ERP.", "title": "" }, { "docid": "060101cf53a576336e27512431c4c4fc", "text": "The aim of this chapter is to give an overview of domain adaptation and transfer learning with a specific view to visual applications. After a general motivation, we first position domain adaptation in the more general transfer learning problem. Second, we try to address and analyze briefly the state-of-the-art methods for different types of scenarios, first describing the historical shallow methods, addressing both the homogeneous and heterogeneous domain adaptation methods. Third, we discuss the effect of the success of deep convolutional architectures which led to the new type of domain adaptation methods that integrate the adaptation within the deep architecture. Fourth, we review DA methods that go beyond image categorization, such as object detection, image segmentation, video analyses or learning visual attributes. 
We conclude the chapter with a section where we relate domain adaptation to other machine learning solutions.", "title": "" }, { "docid": "343c1607a4f8df8a8202adb26f9959ed", "text": "This investigation examined the measurement properties of the Three Domains of Disgust Scale (TDDS). Principal components analysis in Study 1 (n = 206) revealed three factors of Pathogen, Sexual, and Moral Disgust that demonstrated excellent reliability, including test-retest over 12 weeks. Confirmatory factor analyses in Study 2 (n = 406) supported the three factors. Supportive evidence for the validity of the Pathogen and Sexual Disgust subscales was found in Study 1 and Study 2 with strong associations with disgust/contamination and weak associations with negative affect. However, the validity of the Moral Disgust subscale was limited. Study 3 (n = 200) showed that the TDDS subscales differentially related to personality traits. Study 4 (n = 47) provided evidence for the validity of the TDDS subscales in relation to multiple indices of disgust/contamination aversion in a select sample. Study 5 (n = 70) further highlighted limitations of the Moral Disgust subscale given the lack of a theoretically consistent association with moral attitudes. Lastly, Study 6 (n = 178) showed that responses on the Moral Disgust scale were more intense when anger was the response option compared with when disgust was the response option. The implications of these findings for the assessment of disgust are discussed.", "title": "" }, { "docid": "20b02c2afa20a2c2d3a5e9fb4ec5be85", "text": "Cloning and characterization of the orphan nuclear receptors constitutive androstane receptor (CAR, NR1I3) and pregnane X receptor (PXR, NR1I2) led to major breakthroughs in studying drug-mediated transcriptional induction of drug-metabolizing cytochromes P450 (CYPs). More recently, additional roles for CAR and PXR have been discovered. As examples, these xenosensors are involved in the homeostasis of cholesterol, bile acids, bilirubin, and other endogenous hydrophobic molecules in the liver: CAR and PXR thus form an intricate regulatory network with other members of the nuclear receptor superfamily, foremost the cholesterol-sensing liver X receptor (LXR, NR1H2/3) and the bile-acid-activated farnesoid X receptor (FXR, NR1H4). In this review, functional interactions between these nuclear receptors as well as the consequences on physiology and pathophysiology of the liver are discussed.", "title": "" }, { "docid": "a0850b5f8b2d994b50bb912d6fca3dfb", "text": "In this paper we describe the development of an accurate, smallfootprint, large vocabulary speech recognizer for mobile devices. To achieve the best recognition accuracy, state-of-the-art deep neural networks (DNNs) are adopted as acoustic models. A variety of speedup techniques for DNN score computation are used to enable real-time operation on mobile devices. To reduce the memory and disk usage, on-the-fly language model (LM) rescoring is performed with a compressed n-gram LM. We were able to build an accurate and compact system that runs well below real-time on a Nexus 4 Android phone.", "title": "" }, { "docid": "bf14f996f9013351aca1e9935157c0e3", "text": "Attributed graphs are becoming important tools for modeling information networks, such as the Web and various social networks (e.g. Facebook, LinkedIn, Twitter). However, it is computationally challenging to manage and analyze attributed graphs to support effective decision making. 
In this paper, we propose, Pagrol, a parallel graph OLAP (Online Analytical Processing) system over attributed graphs. In particular, Pagrol introduces a new conceptual Hyper Graph Cube model (which is an attributed-graph analogue of the data cube model for relational DBMS) to aggregate attributed graphs at different granularities and levels. The proposed model supports different queries as well as a new set of graph OLAP Roll-Up/Drill-Down operations. Furthermore, on the basis of Hyper Graph Cube, Pagrol provides an efficient MapReduce-based parallel graph cubing algorithm, MRGraph-Cubing, to compute the graph cube for an attributed graph. Pagrol employs numerous optimization techniques: (a) a self-contained join strategy to minimize I/O cost; (b) a scheme that groups cuboids into batches so as to minimize redundant computations; (c) a cost-based scheme to allocate the batches into bags (each with a small number of batches); and (d) an efficient scheme to process a bag using a single MapReduce job. Results of extensive experimental studies using both real Facebook and synthetic datasets on a 128-node cluster show that Pagrol is effective, efficient and scalable.", "title": "" }, { "docid": "fb87648c3bb77b1d9b162a8e9dbc5e86", "text": "With the success of new computational architectures for visual processing, such as convolutional neural networks (CNN) and access to image databases with millions of labeled examples (e.g., ImageNet, Places), the state of the art in computer vision is advancing rapidly. One important factor for continued progress is to understand the representations that are learned by the inner layers of these deep architectures. Here we show that object detectors emerge from training CNNs to perform scene classification. As scenes are composed of objects, the CNN for scene classification automatically discovers meaningful objects detectors, representative of the learned scene categories. With object detectors emerging as a result of learning to recognize scenes, our work demonstrates that the same network can perform both scene recognition and object localization in a single forward-pass, without ever having been explicitly taught the notion of objects.", "title": "" }, { "docid": "3d6744ae85a9aa07d8c4cb68c79290c7", "text": "Control over the motional degrees of freedom of atoms, ions, and molecules in a field-free environment enables unrivalled measurement accuracies but has yet to be applied to highly charged ions (HCIs), which are of particular interest to future atomic clock designs and searches for physics beyond the Standard Model. Here, we report on the Coulomb crystallization of HCIs (specifically 40Ar13+) produced in an electron beam ion trap and retrapped in a cryogenic linear radiofrequency trap by means of sympathetic motional cooling through Coulomb interaction with a directly laser-cooled ensemble of Be+ ions. We also demonstrate cooling of a single Ar13+ ion by a single Be+ ion—the prerequisite for quantum logic spectroscopy with a potential 10−19 accuracy level. Achieving a seven-orders-of-magnitude decrease in HCI temperature starting at megakelvin down to the millikelvin range removes the major obstacle for HCI investigation with high-precision laser spectroscopy.", "title": "" }, { "docid": "a33aa33a2ae6efe5ca43948e8ef3043e", "text": "In this paper, we describe COCA -- Computation Offload to Clouds using AOP (aspect-oriented programming). 
COCA is a programming framework that allows smartphone application developers to offload part of the computation to servers in the cloud easily. COCA works at the source level. By harnessing the power of AOP, COCA inserts appropriate offloading code into the source code of the target application based on the result of static and dynamic profiling. As a proof of concept, we integrate COCA into the Android development environment and fully automate the new build process, making application programming and software maintenance easier. With COCA, mobile applications can now automatically offload part of the computation to the cloud, achieving better performance and longer battery life. Smart phones such as iPhone and Android phones can now easily leverage the immense computing power of the cloud to achieve tasks that were considered difficult before, such as having a more complicated artificial-intelligence engine.", "title": "" } ]
scidocsrr
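The positive passages of the money-laundering row above describe correlation-based link discovery over transaction timelines (the LDCA methodology and its CORAL instantiation) and statistical fraud detection. The sketch below only illustrates the timeline-correlation idea on synthetic data; it is not the CORAL implementation, and the account counts, the spike model for the laundering ring, and the 0.8 threshold are assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 50 accounts, daily transaction totals over 90 days.
# Accounts 0-4 form a hypothetical laundering ring that moves money on the same days.
n_accounts, n_days = 50, 90
activity = rng.poisson(lam=1.0, size=(n_accounts, n_days)).astype(float)
ring_days = rng.choice(n_days, size=15, replace=False)
activity[:5, ring_days] += rng.uniform(5, 10, size=(5, len(ring_days)))

# Correlate each pair of account timelines; high correlation between accounts with
# no explicit link in the records is treated as a candidate hidden relationship.
corr = np.corrcoef(activity)
threshold = 0.8  # assumed cutoff, for illustration only
candidates = [
    (i, j, round(corr[i, j], 2))
    for i in range(n_accounts)
    for j in range(i + 1, n_accounts)
    if corr[i, j] > threshold
]

print("candidate hidden links (account_i, account_j, correlation):")
for link in candidates:
    print(link)

In practice the timelines would come from real or generated transaction records, and the flagged pairs would feed a group-model generation step of the kind the first positive passage describes, or a supervised fraud scorer of the kind surveyed in the second.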
4757a3cdb5797d250a0873d82cd4fa0e
Locality-Preserving Dimensionality Reduction and Classification for Hyperspectral Image Analysis
[ { "docid": "0fa7896efb6dcacbd2823c8d323f89b0", "text": "Reducing the dimensionality of data without losing intrinsic information is an important preprocessing step in high-dimensional data analysis. Fisher discriminant analysis (FDA) is a traditional technique for supervised dimensionality reduction, but it tends to give undesired results if samples in a class are multimodal. An unsupervised dimensionality reduction method called localitypreserving projection (LPP) can work well with multimodal data due to its locality preserving property. However, since LPP does not take the label information into account, it is not necessarily useful in supervised learning scenarios. In this paper, we propose a new linear supervised dimensionality reduction method called local Fisher discriminant analysis (LFDA), which effectively combines the ideas of FDA and LPP. LFDA has an analytic form of the embedding transformation and the solution can be easily computed just by solving a generalized eigenvalue problem. We demonstrate the practical usefulness and high scalability of the LFDA method in data visualization and classification tasks through extensive simulation studies. We also show that LFDA can be extended to non-linear dimensionality reduction scenarios by applying the kernel trick.", "title": "" }, { "docid": "7655df3f32e6cf7a5545ae2231f71e7c", "text": "Many problems in information processing involve some form of dimensionality reduction. In this thesis, we introduce Locality Preserving Projections (LPP). These are linear projective maps that arise by solving a variational problem that optimally preserves the neighborhood structure of the data set. LPP should be seen as an alternative to Principal Component Analysis (PCA) – a classical linear technique that projects the data along the directions of maximal variance. When the high dimensional data lies on a low dimensional manifold embedded in the ambient space, the Locality Preserving Projections are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold. As a result, LPP shares many of the data representation properties of nonlinear techniques such as Laplacian Eigenmaps or Locally Linear Embedding. Yet LPP is linear and more crucially is defined everywhere in ambient space rather than just on the training data points. Theoretical analysis shows that PCA, LPP, and Linear Discriminant Analysis (LDA) can be obtained from different graph models. Central to this is a graph structure that is inferred on the data points. LPP finds a projection that respects this graph structure. We have applied our algorithms to several real world applications, e.g. face analysis and document representation.", "title": "" } ]
[ { "docid": "8604589b2c45d6190fdbc50073dfda23", "text": "Many real world, complex phenomena have an underlying structure of evolving networks where nodes and links are added and removed over time. A central scientific challenge is the description and explanation of network dynamics, with a key test being the prediction of short and long term changes. For the problem of short-term link prediction, existing methods attempt to determine neighborhood metrics that correlate with the appearance of a link in the next observation period. Here, we provide a novel approach to predicting future links by applying an evolutionary algorithm (Covariance Matrix Evolution) to weights which are used in a linear combination of sixteen neighborhood and node similarity indices. We examine reciprocal reply networks of Twitter users constructed at the time scale of weeks, both as a test of our general method and as a problem of scientific interest in itself. Our evolved predictors exhibit a thousand-fold improvement over random link prediction, to our knowledge strongly outperforming all extant methods. Based on our findings, we suggest possible factors which may be driving the evolution of Twitter reciprocal reply networks.", "title": "" }, { "docid": "cf8bfa9d33bea4ba7db1ca0202773f93", "text": "Primary Cutaneous Peripheral T-Cell Lymphoma NOS (PTL-NOS) is a rare, progressive, fatal dermatologic disease that presents with features similar to many common benign plaque-like skin conditions, making recognition of its distinguishing features critical for early diagnosis and treatment (Bolognia et al., 2008). A 78-year-old woman presented to ambulatory care with a single 5 cm nodule on her shoulder that had developed rapidly over 1-2 weeks. Examination was suspicious for malignancy and a biopsy was performed. Biopsy results demonstrated CD4 positivity, consistent with Mycosis Fungoides with coexpression of CD5, CD47, and CD7. Within three months her cancer had progressed into diffuse lesions spanning her entire body. As rapid progression is usually uncharacteristic of Mycosis Fungoides, her diagnosis was amended to PTL-NOS. Cutaneous T-Cell Lymphoma (CTCL) should be suspected in patients with patches, plaques, erythroderma, or papules that persist or multiply despite conservative treatment. Singular biopsies are often nondiagnostic, requiring a high degree of suspicion if there is deviation from the anticipated clinical course. Multiple biopsies are often necessary to make the diagnosis. Physicians caring for patients with rapidly progressive, nonspecific dermatoses with features described above should keep more uncommon forms of CTCL in mind and refer for early biopsy.", "title": "" }, { "docid": "7b4639a5712fd5f3a32129b04404bd83", "text": "The most important factor for the provision of the electromagnetic compatibility in power plants and substations is the density of the earthing network and equipotential bonding system which must be constructed with subsystems and connected to the earthing system. Besides that proper measures must be made (screening and compensation) to reduce the disturbances and their transfer to secondary systems. 
The originality and research relevance of the paper is in presenting good engineering practice in the construction of earthing network and equipotential bonding system in order to provide for electromagnetic compatibility in power plants and substations which is relevant for advancing EMC education.", "title": "" }, { "docid": "b4138a3c89e89d402aa92190d25d3d59", "text": "The conotruncal anomaly face syndrome was described in a Japanese publication in 1976 and comprises dysmorphic facial appearance and outflow tract defects of the heart. The authors subsequently noted similarities to Shprintzen syndrome and DiGeorge syndrome. Chromosome analysis in five cases did not show a deletion at high resolution, but fluorescent in situ hybridisation using probe DO832 showed a deletion within chromosome 22q11 in all cases.", "title": "" }, { "docid": "5e858796f025a9e2b91109835d827c68", "text": "Several divergent application protocols have been proposed for Internet of Things (IoT) solutions including CoAP, REST, XMPP, AMQP, MQTT, DDS, and others. Each protocol focuses on a specific aspect of IoT communications. The lack of a protocol that can handle the vertical market requirements of IoT applications including machine-to-machine, machine-to-server, and server-to-server communications has resulted in a fragmented market between many protocols. In turn, this fragmentation is a main hindrance in the development of new services that require the integration of multiple IoT services to unlock new capabilities and provide horizontal integration among services. In this work, after articulating the major shortcomings of the current IoT protocols, we outline a rule-based intelligent gateway that bridges the gap between existing IoT protocols to enable the efficient integration of horizontal IoT services. While this intelligent gateway enhances the gloomy picture of protocol fragmentation in the context of IoT, it does not address the root cause of this fragmentation, which lies in the inability of the current protocols to offer a wide range of QoS guarantees. To offer a solution that stems the root cause of this protocol fragmentation issue, we propose a generic IoT protocol that is flexible enough to address the IoT vertical market requirements. In this regard, we enhance the baseline MQTT protocol by allowing it to support rich QoS features by exploiting a mix of IP multicasting, intelligent broker queuing management, and traffic analytics techniques. Our initial evaluation of the lightweight enhanced MQTT protocol reveals significant improvement over the baseline protocol in terms of the delay performance.", "title": "" }, { "docid": "69a643dfe783bb5e828ad77b8edfd88d", "text": "Advanced ceramic materials such as zirconia have great potential as substitutes for traditional materials in many biomedical applications. Since the end of the 1990s, the form of partially stabilized zirconia has been promoted as suitable for dental use due to its excellent strength and superior fracture resistance as result of an inherent transformation toughening mechanism. In addition, zirconia bioceramic presents enhanced biocompatibility, low radioactivity, and interesting optical properties. The introduction of computer-aided design/computer-aided manufacturing (CAD/CAM) techniques has increased the general acceptance of zirconia in dentistry. 
However, some fabrication procedures such as grinding, polishing, sandblasting, heat treatment, and veneering of the fine-grained metastable zirconia microstructures may affect the long-term stability and success of the material by influencing its aging sensitivity. The purpose of this review is to address the evolution of zirconia as a biomaterial; to explore the material's physical, chemical, biological, and optical properties; to describe strengthening procedures; and finally to examine aging, processing, and core/veneer interfacial effects.", "title": "" }, { "docid": "9e847f22f7d9effd2bd46901e4599829", "text": "This article presents new alternatives to the similarity function for the TextRank algorithm for automated summarization of texts. We describe the generalities of the algorithm and the different functions we propose. Some of these variants achieve a significative improvement using the same metrics and dataset as the original publication.", "title": "" }, { "docid": "ee6461f83cee5fdf409a130d2cfb1839", "text": "This paper introduces a novel three-phase buck-type unity power factor rectifier appropriate for high power Electric Vehicle battery charging mains interfaces. The characteristics of the converter, named the Swiss Rectifier, including the principle of operation, modulation strategy, suitable control structure, and dimensioning equations are described in detail. Additionally, the proposed rectifier is compared to a conventional 6-switch buck-type ac-dc power conversion. According to the results, the Swiss Rectifier is the topology of choice for a buck-type PFC. Finally, the feasibility of the Swiss Rectifier concept for buck-type rectifier applications is demonstrated by means of a hardware prototype.", "title": "" }, { "docid": "9df51d2e5755caa355869dacb90544c2", "text": "Deep learning frameworks have been widely deployed on GPU servers for deep learning applications in both academia and industry. In training deep neural networks (DNNs), there are many standard processes or algorithms, such as convolution and stochastic gradient descent (SGD), but the running performance of different frameworks might be different even running the same deep model on the same GPU hardware. In this study, we evaluate the running performance of four state-of-the-art distributed deep learning frameworks (i.e., Caffe-MPI, CNTK, MXNet, and TensorFlow) over single-GPU, multi-GPU, and multi-node environments. We first build performance models of standard processes in training DNNs with SGD, and then we benchmark the running performance of these frameworks with three popular convolutional neural networks (i.e., AlexNet, GoogleNet and ResNet-50), after that, we analyze what factors that result in the performance gap among these four frameworks. Through both analytical and experimental analysis, we identify bottlenecks and overheads which could be further optimized. The main contribution is that the proposed performance models and the analysis provide further optimization directions in both algorithmic design and system configuration.", "title": "" }, { "docid": "72ad9915e3f4afb9be4528ac04a9e5aa", "text": "A sensor isolation system was developed to reduce vibrational and noise effects on MEMS IMU sensors. A single degree of freedom model of an isolator was developed and simulated. Then a prototype was constructed for use with a Microstrain 3DM-GX3-25 IMU sensor and experimentally tested on a six DOF motion platform. 
An order of magnitude noise reduction was observed on the z accelerometer up to seven Hz. The isolator was then deployed on a naval ship along with a DMS TSS-25 IMU used as a truth measurement and a rigid mounted 3DM sensor was used for comparison. Signal quality improvements of the IMU were characterized and engine noise at 20 Hz was reduced by tenfold on x, y, and z accelerometers. A heave estimation algorithm was implemented and several types of filters were evaluated. Lab testing with a six DOF motion platform with pure sinusoidal motion, a fixed frequency four pole bandpass filter provided the least heave error at 12.5% of full scale or 0.008m error. When the experimental sea data was analyzed a fixed three pole highpass filter yielded the most accurate results of the filters tested. A heave period estimator was developed to adjust the filter cutoff frequencies for varying sea conditions. Since the ship motions were small, the errors w.r.t. full scale were rather large at 78% RMS as a worst case and 44% for a best case. In absolute terms when the variable filters and isolator were implemented, the best case peak and RMS errors were 0.015m and 0.050m respectively. The isolator improves the heave accuracy by 200% to 570% when compared with a rigidly mounted 3DM sensor.", "title": "" }, { "docid": "d9cbdb32cec776f94802bbe6b4f41ce5", "text": "This paper presents the hardware and software control framework for a semi-autonomous wheelchair. The hardware design incorporates modular and reconfigurable sensors and corresponding low-level software architecture. Two control schemes are discussed. Assisted control that augments the user inputs by providing functionalities such as obstacle avoidance and wall following. And, semi-autonomous navigation which takes higher level destination goals and executes a SLAM algorithm. We also propose an adaptive motion control with a online parameter estimation. The paper presents both experimental and simulation results.", "title": "" }, { "docid": "7c609e5b5205df9e15a8889a621270da", "text": "This paper presents a singing robot system realized by collaboration of the singing synthesis technology “VOCALOID” (developed by YAMAHA) and the novel biped humanoid robot HRP-4C named “Miim” (developed by AIST). One of the advantages of the cybernetic human HRP-4C is is found on its capacity to perform a variety of body motions and realistic facial expressions. To achieve a realistic robot-singing performance, facial motions such as lip-sync, eyes blinking and facial gestures are required. We developed a demonstration system for VOCALOID and HRP-4C, mainly consisting of singing data and the corresponding facial motions. We report in this work the technical overview of the system and the results of an exhibition presented at CEATEC JAPAN 2009.", "title": "" }, { "docid": "3ef01fd1d63041a8f2b6f5c5921d4ed0", "text": "This paper investigates advanced energy-efficient wireless systems in orthogonal frequency-division multiple access (OFDMA) downlink networks using coordinated multipoint (CoMP) transmissions between the base stations (BSs) in a heterogeneous network (HetNet), which is adopted by Third-Generation Partnership Project (3GPP) Long-Term Evolution (LTE)-Advanced to meet International Mobile Telecommunications-Advanced targets. HetNet CoMP has received significant attention as a way of achieving spectral efficiency (SE) and energy efficiency (EE). 
Usually, in the literature, the total network power consumption is restricted to the sum of the power consumption of all BSs. The significance of the power consumption of the backhaul links in wireless networks is normally omitted for its trivial effect with respect to that of the radio BSs. For SE and EE analysis of HetNet CoMP, the energy and bandwidth consumption of the backhaul is considered, without which, the investigation remains incomplete. However, SE and EE are design criteria in conflict with each other, and a careful study of their tradeoff is mandatory for designing future wireless communication systems. The EE is measured as “throughput (bits) per joule,” whereas the power consumption model includes RF transmit (radiated), circuit, and backhaul power. Furthermore, a nonideal backhaul model such as a microwave link is also investigated within intra-HetNet-CoMP (inside one cell), where an implementing fiber is not feasible. An intercell interference (ICI) coordination method is also studied to mitigate ICI. At the end, a novel resource allocation algorithm is proposed-modeled as an optimization problem - which takes into account the total power consumption, including radiated, circuit, and backhaul power, and the minimum required data rate to maximize EE. Given an SE requirement, the EE optimization problem is formulated as a constrained optimization problem. The considered optimization problem is transformed into a convex optimization problem by redefining the constraint using cubic inequality, which results in an efficient iterative resource allocation algorithm. In each iteration, the transformed problem is solved by using dual decomposition with a projected gradient method. Simulations results demonstrate how backhaul has a significant impact on total power consumption and the effectiveness of the studied schemes. In addition, the results demonstrate that the proposed iterative resource allocation algorithm converges within a small number of iterations and illustrate the fundamental tradeoffs between SE and EE. Our analytical results shed light on future “green” network planning in advanced OFDMA wireless systems like those envisioned for a fifth-generation (5G) system.", "title": "" }, { "docid": "77d78c5850ee7c2a571bd401068d4581", "text": "In today’s day and age when almost every industry has an online presence with users interacting in online marketplaces, personalized recommendations have become quite important. Traditionally, the problem of collaborative filtering has been tackled using Matrix Factorization which is linear in nature. We extend the work of [11] on using variational autoencoders (VAE) for collaborative filtering with implicit feedback by proposing a hybrid, multi-modal approach. Our approach combines movie embeddings (learned from a sibling VAE network) with user ratings from the Movielens 20M dataset and applies it to the task of movie recommendation. We empirically show how the VAE network is empowered by incorporating movie embeddings. We also visualize movie and user embeddings by clustering their latent representations obtained from a VAE. CCS CONCEPTS •Information systems→Collaborative filtering; Personalization; Clustering;", "title": "" }, { "docid": "476bd671b982450d6d1f6c8d7936bcb5", "text": "Walter Thiel developed the method that enables preservation of the body with natural colors in 1992. 
It consists in the application of an intravascular injection formula, and maintaining the corpse submerged for a determinate period of time in the immersion solution in the pool. After immersion, it is possible to maintain the corpse in a hermetically sealed container, thus avoiding dehydration outside the pool. The aim of this work was to review the Thiel method, searching all scientific articles describing this technique from its development point of view, and application in anatomy and morphology teaching, as well as in clinical and surgical practice. Most of these studies were carried out in Europe. We used PubMed, Ebsco and Embase databases with the terms “Thiel cadaver”, “Thiel embalming”, “Thiel embalming method” and we searched for papers that cited Thiel's work. In comparison with methods commonly used with high concentrations of formaldehyde, this method lacks the emanation of noxious or irritating gases; gives the corpse important passive joint mobility without stiffness; and maintains color, flexibility and tissue plasticity at a level equivalent to that of a living body. Furthermore, it allows vascular repletion at the capillary level. All this makes for a great advantage over the formalin-fixed and fresh material. Its multiple uses are applicable in anatomy teaching and research; teaching for undergraduates (prosection and dissection) and for training in surgical techniques for graduates and specialists (laparoscopies, arthroscopies, endoscopies).", "title": "" }, { "docid": "e077a3c57b1df490d418a2b06cf14b2c", "text": "Inductive power transfer (IPT) is widely discussed for the automated opportunity charging of plug-in hybrid and electric public transport buses without moving mechanical components and reduced maintenance requirements. In this paper, the design of an on-board active rectifier and dc–dc converter for interfacing the receiver coil of a 50 kW/85 kHz IPT system is designed. Both conversion stages employ 1.2 kV SiC MOSFET devices for their low switching losses. For the dc–dc conversion, a modular, nonisolated buck+boost-type topology with coupled magnetic devices is used for increasing the power density. For the presented hardware prototype, a power density of 9.5 kW/dm3 (or 156 W/in3) is achieved, while the ac–dc efficiency from the IPT receiver coil to the vehicle battery is 98.6%. Comprehensive experimental results are presented throughout this paper to support the theoretical analysis.", "title": "" }, { "docid": "2449efefdf9a0858d8e8575b7e24ed16", "text": "Lacking realistic ground truth data, image denoising techniques are traditionally evaluated on images corrupted by synthesized i.i.d. Gaussian noise. We aim to obviate this unrealistic setting by developing a methodology for benchmarking denoising techniques on real photographs. We capture pairs of images with different ISO values and appropriately adjusted exposure times, where the nearly noise-free low-ISO image serves as reference. To derive the ground truth, careful post-processing is needed. We correct spatial misalignment, cope with inaccuracies in the exposure parameters through a linear intensity transform based on a novel heteroscedastic Tobit regression model, and remove residual low-frequency bias that stems, e.g., from minor illumination changes. We then capture a novel benchmark dataset, the Darmstadt Noise Dataset (DND), with consumer cameras of differing sensor sizes. 
One interesting finding is that various recent techniques that perform well on synthetic noise are clearly outperformed by BM3D on photographs with real noise. Our benchmark delineates realistic evaluation scenarios that deviate strongly from those commonly used in the scientific literature.", "title": "" }, { "docid": "da296c4266c241b3e8d330f5c654439f", "text": "Robotic process automation or intelligent automation (the combination of artificial intelligence and automation) is starting to change the way business is done in nearly every sector of the economy. Intelligent automation systems detect and produce vast amounts of information and can automate entire processes or workflows, learning and adapting as they go. Applications range from the routine to the revolutionary: from collecting, analysing, and making decisions about textual information to guiding autonomous vehicles and advanced robots. It is already helping companies transcend conventional performance trade-offs to achieve unprecedented levels of efficiency and quality. Until recently, robotics has found most of its applications in the primary sector, automating and removing the human element from the production chain. Replacing menial tasks was its first foray, and many organisations introduced robotics into their assembly line, warehouse, and cargo bay operations.", "title": "" }, { "docid": "96a38b8b6286169cdd98aa6778456e0c", "text": "Data mining is on the interface of Computer Science and Statistics, utilizing advances in both disciplines to make progress in extracting information from large databases. It is an emerging field that has attracted much attention in a very short period of time. This article highlights some statistical themes and lessons that are directly relevant to data mining and attempts to identify opportunities where close cooperation between the statistical and computational communities might reasonably provide synergy for further progress in data analysis.", "title": "" }, { "docid": "23e64634579fec3b4e68e0f964bedc2e", "text": "This paper provides guidance to some of the concepts surrounding recurrent neural networks. Contrary to feedforward networks, recurrent networks can be sensitive, and be adapted to past inputs. Backpropagation learning is described for feedforward networks, adapted to suit our (probabilistic) modeling needs, and extended to cover recurrent networks. The aim of this brief paper is to set the scene for applying and understanding recurrent neural networks.", "title": "" } ]
scidocsrr
bc5b69ea78fbccc8757f77e0a188ff0e
A Nonparametric Approach to Modeling Choice with Limited Data
[ { "docid": "84c362cb2d4a737d7ea62d85b9144722", "text": "This paper considers mixed, or random coeff icients, multinomial logit (MMNL) models for discrete response, and establishes the following results: Under mild regularity conditions, any discrete choice model derived from random utilit y maximization has choice probabiliti es that can be approximated as closely as one pleases by a MMNL model. Practical estimation of a parametric mixing family can be carried out by Maximum Simulated Likelihood Estimation or Method of Simulated Moments, and easily computed instruments are provided that make the latter procedure fairl y eff icient. The adequacy of a mixing specification can be tested simply as an omitted variable test with appropriately defined artificial variables. An application to a problem of demand for alternative vehicles shows that MMNL provides a flexible and computationally practical approach to discrete response analysis. Acknowledgments: Both authors are at the Department of Economics, University of Cali fornia, Berkeley CA 94720-3880. Correspondence should be directed to mcfadden@econ.berkeley.edu. We are indebted to the E. Morris Cox fund for research support, and to Moshe Ben-Akiva, David Brownstone, Denis Bolduc, Andre de Palma, and Paul Ruud for useful comments. This paper was first presented at the University of Paris X in June 1997.", "title": "" } ]
[ { "docid": "fdfea6d3a5160c591863351395929a99", "text": "Deep networks have recently enjoyed enormous success when applied to recognition and classification problems in computer vision [22, 33], but their use in graphics problems has been limited ([23, 7] are notable recent exceptions). In this work, we present a novel deep architecture that performs new view synthesis directly from pixels, trained from a large number of posed image sets. In contrast to traditional approaches, which consist of multiple complex stages of processing, each of which requires careful tuning and can fail in unexpected ways, our system is trained end-to-end. The pixels from neighboring views of a scene are presented to the network, which then directly produces the pixels of the unseen view. The benefits of our approach include generality (we only require posed image sets and can easily apply our method to different domains), and high quality results on traditionally difficult scenes. We believe this is due to the end-to-end nature of our system, which is able to plausibly generate pixels according to color, depth, and texture priors learnt automatically from the training data. We show view interpolation results on imagery from the KITTI dataset [12], from data from [1] as well as on Google Street View images. To our knowledge, our work is the first to apply deep learning to the problem of new view synthesis from sets of real-world, natural imagery.", "title": "" }, { "docid": "f0db74061a2befca317f9333a0712ab9", "text": "This paper tries to give a gentle introduction to deep learning in medical image processing, proceeding from theoretical foundations to applications. We first discuss general reasons for the popularity of deep learning, including several major breakthroughs in computer science. Next, we start reviewing the fundamental basics of the perceptron and neural networks, along with some fundamental theory that is often omitted. Doing so allows us to understand the reasons for the rise of deep learning in many application domains. Obviously medical image processing is one of these areas which has been largely affected by this rapid progress, in particular in image detection and recognition, image segmentation, image registration, and computer-aided diagnosis. There are also recent trends in physical simulation, modeling, and reconstruction that have led to astonishing results. Yet, some of these approaches neglect prior knowledge and hence bear the risk of producing implausible results. These apparent weaknesses highlight current limitations of deep ()learning. However, we also briefly discuss promising approaches that might be able to resolve these problems in the future.", "title": "" }, { "docid": "e56bc26cd567aff51de3cb47f9682149", "text": "Recent technological advances have expanded the breadth of available omic data, from whole-genome sequencing data, to extensive transcriptomic, methylomic and metabolomic data. A key goal of analyses of these data is the identification of effective models that predict phenotypic traits and outcomes, elucidating important biomarkers and generating important insights into the genetic underpinnings of the heritability of complex traits. There is still a need for powerful and advanced analysis strategies to fully harness the utility of these comprehensive high-throughput data, identifying true associations and reducing the number of false associations. 
In this Review, we explore the emerging approaches for data integration — including meta-dimensional and multi-staged analyses — which aim to deepen our understanding of the role of genetics and genomics in complex outcomes. With the use and further development of these approaches, an improved understanding of the relationship between genomic variation and human phenotypes may be revealed.", "title": "" }, { "docid": "9c715e50cf36e14312407ed722fe7a7d", "text": "Usual medical care often fails to meet the needs of chronically ill patients, even in managed, integrated delivery systems. The medical literature suggests strategies to improve outcomes in these patients. Effective interventions tend to fall into one of five areas: the use of evidence-based, planned care; reorganization of practice systems and provider roles; improved patient self-management support; increased access to expertise; and greater availability of clinical information. The challenge is to organize these components into an integrated system of chronic illness care. Whether this can be done most efficiently and effectively in primary care practice rather than requiring specialized systems of care remains unanswered.", "title": "" }, { "docid": "b492a0063354a81bd99ac3f81c3fb1ec", "text": "— Bangla automatic number plate recognition (ANPR) system using artificial neural network for number plate inscribing in Bangla is presented in this paper. This system splits into three major parts-number plate detection, plate character segmentation and Bangla character recognition. In number plate detection there arises many problems such as vehicle motion, complex background, distance changes etc., for this reason edge analysis method is applied. As Bangla number plate consists of two words and seven characters, detected number plates are segmented into individual words and characters by using horizontal and vertical projection analysis. After that a robust feature extraction method is employed to extract the information from each Bangla words and characters which is non-sensitive to the rotation, scaling and size variations. Finally character recognition system takes this information as an input to recognize Bangla characters and words. The Bangla character recognition is implemented using multilayer feed-forward network. According to the experimental result, (The abstract needs some exact figures of findings (like success rates of recognition) and how much the performance is better than previous one.) the performance of the proposed system on different vehicle images is better in case of severe image conditions.", "title": "" }, { "docid": "056f5179fa5c0cdea06d29d22a756086", "text": "Finding solution values for unknowns in Boolean equations was a principal reasoning mode in the Algebra of Logic of the 19th century. Schröder investigated it as Auflösungsproblem (solution problem). It is closely related to the modern notion of Boolean unification. Today it is commonly presented in an algebraic setting, but seems potentially useful also in knowledge representation based on predicate logic. We show that it can be modeled on the basis of first-order logic extended by secondorder quantification. A wealth of classical results transfers, foundations for algorithms unfold, and connections with second-order quantifier elimination and Craig interpolation show up. Although for first-order inputs the set of solutions is recursively enumerable, the development of constructive methods remains a challenge. 
We identify some cases that allow constructions, most of them based on Craig interpolation, and show a method to take vocabulary restrictions on solution components into account. Revision: June 26, 2017", "title": "" }, { "docid": "359d76f0b4f758c3a58e886e840c5361", "text": "Cover crops are important components of sustainable agricultural systems. They increase surface residue and aid in the reduction of soil erosion. They improve the structure and water-holding capacity of the soil and thus increase the effectiveness of applied N fertilizer. Legume cover crops such as hairy vetch and crimson clover fix nitrogen and contribute to the nitrogen requirements of subsequent crops. Cover crops can also suppress weeds, provide suitable habitat for beneficial predator insects, and act as non-host crops for nematodes and other pests in crop rotations. This paper reviews the agronomic and economic literature on using cover crops in sustainable food production and reports on past and present research on cover crops and sustainable agriculture at the Beltsville Agricultural Research Center, Maryland. Previous studies suggested that the profitability of cover crops is primarily the result of enhanced crop yields rather than reduced input costs. The experiments at the Beltsville Agricultural Research Center on fresh-market tomato production showed that tomatoes grown with hairy vetch mulch were higher yielding and more profitable than those grown with black polyethylene and no mulch system. Previous studies of cover crops in grain production indicated that legume cover crops such as hairy vetch and crimson clover are more profitable than grass cover crops such as rye or wheat because of the ability of legumes to contribute N to the following crop. A com-", "title": "" }, { "docid": "c4d0084aab61645fc26e099115e1995c", "text": "Digital transformation often includes establishing big data analytics capabilities and poses considerable challenges for traditional manufacturing organizations, such as car companies. Successfully introducing big data analytics requires substantial organizational transformation and new organizational structures and business processes. Based on the three-stage evolution of big data analytics capabilities at AUDI, the full article provides recommendations for how traditional manufacturing organizations can successfully introduce big data analytics and master the related organizational transformations. Stage I: Advancing. In Stage I, AUDI’s sales and marketing department initiated data analytics projects. Commitment within the organization for data analytics grew slowly, and the strategic importance of the area was increasingly recognized. During this first stage, the IT department played a passive role, responding to the initiators of data analytics projects. The company’s digital innovation hub, however, laid the technology foundation for big data analytics during the Advancing stage. Stage II: Enabling. In Stage II, analytics competencies were built up not only in the digital innovation hub but also in the IT department. The IT department enabled big data analytics through isolated technology activities, sometimes taking on or insourcing tasks previously carried out by external consultancies or the digital innovation hub. Analytics services were developed through a more advanced technology infrastructure as well as analytics methods. Stage III: Leveraging. 
In the current Stage III, AUDI is leveraging the analytics competencies of the digital innovation hub and the IT department to centrally provide analytics-as-a-service. The IT department is now fully responsible for all technology tasks and is evolving to become a consulting partner for the other big data analytics stakeholders (sales and marketing department and digital innovation hub). In particular, digital services are enabled by leveraging the most valuable data source (i.e., operational car data).", "title": "" }, { "docid": "0b4f44030a922ba2c970c263583e8465", "text": "BACKGROUND\nSmoking remains one of the few potentially preventable factors associated with low birthweight, preterm birth and perinatal death.\n\n\nOBJECTIVES\nTo assess the effects of smoking cessation programs implemented during pregnancy on the health of the fetus, infant, mother, and family.\n\n\nSEARCH STRATEGY\nWe searched the Cochrane Pregnancy and Childbirth Group trials register and the Cochrane Tobacco Addiction Group trials register (July 2003), MEDLINE (January 2002 to July 2003), EMBASE (January 2002 to July 2003), PsychLIT (January 2002 to July 2003), CINAHL (January 2002 to July 2003), and AUSTHEALTH (January 2002 to 2003). We contacted trial authors to locate additional unpublished data. We handsearched references of identified trials and recent obstetric journals.\n\n\nSELECTION CRITERIA\nRandomised and quasi-randomised trials of smoking cessation programs implemented during pregnancy.\n\n\nDATA COLLECTION AND ANALYSIS\nFour reviewers assessed trial quality and extracted data independently.\n\n\nMAIN RESULTS\nThis review included 64 trials. Fifty-one randomised controlled trials (20,931 women) and six cluster-randomised trials (over 7500 women) provided data on smoking cessation and/or perinatal outcomes. Despite substantial variation in the intensity of the intervention and the extent of reminders and reinforcement through pregnancy, there was an increase in the median intensity of both 'usual care' and interventions over time. There was a significant reduction in smoking in the intervention groups of the 48 trials included: (relative risk (RR) 0.94, 95% confidence interval (CI) 0.93 to 0.95), an absolute difference of six in 100 women continuing to smoke. The 36 trials with validated smoking cessation had a similar reduction (RR 0.94, 95% CI 0.92 to 0.95). Smoking cessation interventions reduced low birthweight (RR 0.81, 95% CI 0.70 to 0.94) and preterm birth (RR 0.84, 95% CI 0.72 to 0.98), and there was a 33 g (95% CI 11 g to 55 g) increase in mean birthweight. There were no statistically significant differences in very low birthweight, stillbirths, perinatal or neonatal mortality but these analyses had very limited power. One intervention strategy, rewards plus social support (two trials), resulted in a significantly greater smoking reduction than other strategies (RR 0.77, 95% CI 0.72 to 0.82). Five trials of smoking relapse prevention (over 800 women) showed no statistically significant reduction in relapse.\n\n\nREVIEWERS' CONCLUSIONS\nSmoking cessation programs in pregnancy reduce the proportion of women who continue to smoke, and reduce low birthweight and preterm birth. The pooled trials have inadequate power to detect reductions in perinatal mortality or very low birthweight.", "title": "" }, { "docid": "03cd67f6c96d37b6345b187382b79c44", "text": "Social media is a vital source of information during any major event, especially natural disasters. 
Data produced through social networking sites is seen as ubiquitous, rapid and accessible, and it is believed to empower average citizens to become more situationally aware during disasters and coordinate to help themselves. However, with the exponential increase in the volume of social media data, so comes the increase in data that are irrelevant to a disaster, thus, diminishing peoples’ ability to find the information that they need in order to organize relief efforts, find help, and potentially save lives. In this paper, we present an approach to identifying informative messages in social media streams during disaster events. Our approach is based on Convolutional Neural Networks and shows significant improvement in performance over models that use the “bag of words” and n-grams as features on several datasets of messages from flooding events.", "title": "" }, { "docid": "46f41dd784c02185e0ba2f3ee4b5c8eb", "text": "The purpose of this study was to examine the changes in temporomandibular joint (TMJ) morphology and clinical symptoms after intraoral vertical ramus osteotomy (IVRO) with and without a Le Fort I osteotomy. Of 50 Japanese patients with mandibular prognathism with mandibular and bimaxillary asymmetry, 25 underwent IVRO and 25 underwent IVRO in combination with a Le Fort I osteotomy. The TMJ symptoms and joint morphology, including disc tissue, were assessed preoperatively and postoperatively by magnetic resonance imaging and axial cephalogram. Improvement was seen in just 50% of joints with anterior disc displacement (ADD) that received IVRO and 52% of those that received IVRO with Le Fort I osteotomy. Fewer or no TMJ symptoms were reported postoperatively in 97% of the joints that received IVRO and 90% that received IVRO with Le Fort I osteotomy. Postoperatively, there were significant condylar position changes and horizontal changes in the condylar long axis on both sides in the two groups. There were no significant differences between improved ADD and unimproved ADD in condylar position change and the angle of the condylar long axis, although distinctive postoperative condylar sag was seen. These results suggest that IVRO with or without Le Fort I osteotomy can improve ADD and TMJ symptoms along with condylar position and angle, but it is difficult to predict the amount of improvement in ADD.", "title": "" }, { "docid": "3af338a01d1419189b7706375feec0c2", "text": "Like E. Paul Torrance, my colleagues and I have tried to understand the nature of creativity, to assess it, and to improve instruction by teaching for creativity as well as teaching students to think creatively. This article reviews our investment theory of creativity, propulsion theory of creative contributions, and some of the data we have collected with regard to creativity. It also describes the propulsion theory of creative contributions. Finally, it draws", "title": "" }, { "docid": "a4731b9d3bfa2813858ff9ea97668577", "text": "Both the Swenson and the Soave procedures have been adapted as transanal approaches. Our purpose is to compare the outcomes and complications between transanal Swenson and Soave procedures.This clinical analysis involved a retrospective series of 148 pediatric patients with HD from Dec, 2001, to Dec, 2015. Perioperative/operative characteristics, postoperative complications, and outcomes between the 2 groups were analyzed. Students' t-test and chi-squared analysis were performed.In total 148 patients (Soave 69, Swenson 79) were included in our study. Mean follow-up was 3.5 years. 
There are no significant differences in overall hospital stay and bowel function. We noted significant differences in mean operating time, blood loss, and overall complications in favor of the Swenson group when compared to the Soave group (P < 0.05). According to our results, although transanal pullthrough Swenson cannot reduce overall hospital stay and improve bowel function compared with the Soave procedure, it results in less blood loss, shorter operation time, and a lower complication rate.", "title": "" }, { "docid": "2a60990e13e7983edea29b131528222d", "text": "We propose Stereo Direct Sparse Odometry (Stereo DSO) as a novel method for highly accurate real-time visual odometry estimation of large-scale environments from stereo cameras. It jointly optimizes for all the model parameters within the active window, including the intrinsic/extrinsic camera parameters of all keyframes and the depth values of all selected pixels. In particular, we propose a novel approach to integrate constraints from static stereo into the bundle adjustment pipeline of temporal multi-view stereo. Real-time optimization is realized by sampling pixels uniformly from image regions with sufficient intensity gradient. Fixed-baseline stereo resolves scale drift. It also reduces the sensitivities to large optical flow and to rolling shutter effect which are known shortcomings of direct image alignment methods. Quantitative evaluation demonstrates that the proposed Stereo DSO outperforms existing state-of-the-art visual odometry methods both in terms of tracking accuracy and robustness. Moreover, our method delivers a more precise metric 3D reconstruction than previous dense/semi-dense direct approaches while providing a higher reconstruction density than feature-based methods.", "title": "" }, { "docid": "cc4c0a749c6a3f4ac92b9709f24f03f4", "text": "Modern GPUs with their several hundred cores and more accessible programming models are becoming attractive devices for compute-intensive applications. They are particularly well suited for applications, such as image processing, where the end result is intended to be displayed via the graphics card. One of the more versatile and powerful graphics techniques is ray tracing. However, tracing each ray of light in a scene is very computationally expensive and has traditionally been preprocessed on CPUs over hours, if not days. In this paper, Nvidia’s new OptiX ray tracing engine is used to show how the power of modern graphics cards, such as the Nvidia Quadro FX 5800, can be harnessed to ray trace several scenes that represent real-life applications in real-time speeds ranging from 20.63 to 67.15 fps. Near-perfect speedup is demonstrated on dual GPUs for scenes with complex geometries. The impact on ray tracing of the recently announced Nvidia Fermi processor is also discussed.", "title": "" }, { "docid": "508ad7d072a62433f3233d90286ef902", "text": "The NP-hard Colorful Components problem is, given a vertex-colored graph, to delete a minimum number of edges such that no connected component contains two vertices of the same color. It has applications in multiple sequence alignment and in multiple network alignment where the colors correspond to species. We initiate a systematic complexity-theoretic study of Colorful Components by presenting NP-hardness as well as fixed-parameter tractability results for different variants of Colorful Components. 
We also perform experiments with our algorithms and additionally develop an efficient and very accurate heuristic algorithm clearly outperforming a previous min-cut-based heuristic on multiple sequence alignment data.", "title": "" }, { "docid": "3c1db6405945425c61495dd578afd83f", "text": "This paper describes a novel driver-support system that helps to maintain the correct speed and headway (distance) with respect to lane curvature and other vehicles ahead. The system has been developed as part of the Integrating Project PReVENT under the European Framework Programme 6, which is named SAfe SPEed and safe distaNCE (SASPENCE). The application uses a detailed description of the situation ahead of the vehicle. Many sensors [radar, video camera, Global Positioning System (GPS) and accelerometers, digital maps, and vehicle-to-vehicle wireless local area network (WLAN) connections] are used, and state-of-the-art data fusion provides a model of the environment. The system then computes a feasible maneuver and compares it with the driver's behavior to detect possible mistakes. The warning strategies are based on this comparison. The system “talks” to the driver mainly via a haptic pedal or seat belt and “listens” to the driver mainly via the vehicle acceleration. This kind of operation, i.e., the comparison between what the system thinks is possible and what the driver appears to be doing, and the consequent dialog can be regarded as simple implementations of the rider-horse metaphor (H-metaphor). The system has been tested in several situations (driving simulator, hardware in the loop, and real road tests). Objective and subjective data have been collected, revealing good acceptance and effectiveness, particularly in awakening distracted drivers. The system intervenes only when a problem is actually detected in the headway and/or speed (approaching curves or objects) and has been shown to cause prompt reactions and significant speed correction before getting into really dangerous situations.", "title": "" }, { "docid": "370813b3114c8f8c2611b72876159efe", "text": "Sciatic nerve structure and nomenclature: epineurium to paraneurium is this a new paradigm? We read with interest the study by Perlas et al., (1) about the sciatic nerve block at the level of its division in the popliteal fossa. We have been developing this technique in our routine practice during the past 7 years and have no doubt about the efficacy and safety of this approach (2,3). However, we do not agree with the author's definition of the structure and limits of the nerve. Given the impact of publications from the principal author's research group on the regional anesthesia community, we are compelled to comment on proposed terminology that we feel may create confusion and contribute to the creation of a new paradigm in peripheral nerve blockade. The peripheral nerve is a well-defined anatomical entity with an unequivocal histological structure (Figure 1). The fascicle is the noble and functional unit of the nerves. Fascicles are constituted by a group of axons covered individually by the endoneurium and tightly packed within the perineurium. The epineurium comprises all the tissues that hold and surround the fascicles and defines the macroscopic external limit of the nerve. Epineurium includes loose connective and adipose tissue and epineurial vessels. 
Fascicles can be found as isolated units or in groups of fascicles supported and held together into a mixed collagen and fat tissue in different proportions (within the epineurial cover). The epineurium cover is the thin layer of connective tissue that encloses the whole structure and constitutes the anatomical limit of the nerve. It acts as a mechanical barrier (limiting the spread of injected local anesthetic), but not as a physical barrier (allowing the passive diffusion of local anesthetic along the concentration gradient). The paraneurium is the connective tissue that supports and connects the nerve with the surrounding structures (e.g., muscles, bone, joints, tendons, and vessels) and acts as a gliding layer. We agree that the limits of the epineurium of the sciatic nerve, like those of the brachial plexus, are more complex than in single nerves. Therefore, the sciatic nerve block deserves special consideration. If we accept that the sciatic nerve is an anatomical unit, the epineurium should include the groups of fascicles that will constitute the tibial and the common peroneal nerves. Similarly, the epineurium of the common peroneal nerve contains the fascicles that will be part of the lateral cutaneous, …", "title": "" }, { "docid": "3a942985eb615f459a670ada83ce3a41", "text": "A new method of realising RF barcodes is presented using arrays of identical microstrip dipoles capacitively tuned to be resonant at different frequencies within the desired licensed-free ISM bands. When interrogated, the reader detects each dipole's resonance frequency and with n resonant dipoles, potentially 2^n-1 items in the field can be tagged and identified. Results for RF barcode elements in the 5.8 GHz band are presented. It is shown that with accurate centre frequency prediction and by operating over multiple ISM and other license-exempt bands, a useful number of information bits can be realised. Further increase may be possible using ultra-wideband (UWB) technology. Low cost lithographic printing techniques based on using metal ink on low cost substrates could lead to an economical alternative to current RFID systems in many applications.", "title": "" }, { "docid": "bd47b468b1754ddd9fecf8620eb0b037", "text": "Common bean (Phaseolus vulgaris) is grown throughout the world and comprises roughly 50% of the grain legumes consumed worldwide. Despite this, genetic resources for common beans have been lacking. Next generation sequencing has facilitated our investigation of the gene expression profiles associated with biologically important traits in common bean. An increased understanding of gene expression in common bean will improve our understanding of gene expression patterns in other legume species. Combining recently developed genomic resources for Phaseolus vulgaris, including predicted gene calls, with RNA-Seq technology, we measured the gene expression patterns from 24 samples collected from seven tissues at developmentally important stages and from three nitrogen treatments. Gene expression patterns throughout the plant were analyzed to better understand changes due to nodulation, seed development, and nitrogen utilization. We have identified 11,010 genes differentially expressed with a fold change ≥ 2 and a P-value < 0.05 between different tissues at the same time point, 15,752 genes differentially expressed within a tissue due to changes in development, and 2,315 genes expressed only in a single tissue. 
These analyses identified 2,970 genes with expression patterns that appear to be directly dependent on the source of available nitrogen. Finally, we have assembled this data in a publicly available database, The Phaseolus vulgaris Gene Expression Atlas (Pv GEA), http://plantgrn.noble.org/PvGEA/ . Using the website, researchers can query gene expression profiles of their gene of interest, search for genes expressed in different tissues, or download the dataset in a tabular form. These data provide the basis for a gene expression atlas, which will facilitate functional genomic studies in common bean. Analysis of this dataset has identified genes important in regulating seed composition and has increased our understanding of nodulation and impact of the nitrogen source on assimilation and distribution throughout the plant.", "title": "" } ]
scidocsrr
ac602d247351be157d973e0a4757ad79
A Comparison Between the Firefly Algorithm and Particle Swarm Optimization
[ { "docid": "d90b68b84294d0a56d71b3c5b1a5eeb7", "text": "Nature-inspired algorithms are among the most powerful algorithms for optimization. This paper intends to provide a detailed description of a new Firefly Algorithm (FA) for multimodal optimization applications. We will compare the proposed firefly algorithm with other metaheuristic algorithms such as particle swarm optimization (PSO). Simulations and results indicate that the proposed firefly algorithm is superior to existing metaheuristic algorithms. Finally we will discuss its applications and implications for further research.", "title": "" } ]
[ { "docid": "759b85bd270afb908ce2b4f23e0f5269", "text": "In this paper we discuss λ-policy iteration, a method for exact and approximate dynamic programming. It is intermediate between the classical value iteration (VI) and policy iteration (PI) methods, and it is closely related to optimistic (also known as modified) PI, whereby each policy evaluation is done approximately, using a finite number of VI. We review the theory of the method and associated questions of bias and exploration arising in simulation-based cost function approximation. We then discuss various implementations, which offer advantages over well-established PI methods that use LSPE(λ), LSTD(λ), or TD(λ) for policy evaluation with cost function approximation. One of these implementations is based on a new simulation scheme, called geometric sampling, which uses multiple short trajectories rather than a single infinitely long trajectory.", "title": "" }, { "docid": "73d09f005f9335827493c3c47d02852b", "text": "Multiprotocol Label Switched Networks need highly intelligent controls to manage high volume traffic due to issues of traffic congestion and best path selection. The work demonstrated in this paper shows results from simulations for building optimal fuzzy based algorithm for traffic splitting and congestion avoidance. The design and implementation of Fuzzy based software defined networking is illustrated by introducing the Fuzzy Traffic Monitor in an ingress node. Finally, it displays improvements in the terms of mean delay (42.0%) and mean loss rate (2.4%) for Video Traffic. Then, the resu1t shows an improvement in the terms of mean delay (5.4%) and mean loss rate (3.4%) for Data Traffic and an improvement in the terms of mean delay(44.9%) and mean loss rate(4.1%) for Voice Traffic as compared to default MPLS implementation. Keywords—Multiprotocol Label Switched Networks; Fuzzy Traffic Monitor; Network Simulator; Ingress; Traffic Splitting; Fuzzy Logic Control System; Label setup System; Traffic Splitting System", "title": "" }, { "docid": "e1af677fc2a19ade2f315ffc6f660ca6", "text": "In enterprise and data center networks, the scalability of the data plane becomes increasingly challenging as forwarding tables and link speeds grow. Simply building switches with larger amounts of faster memory is not appealing, since high-speed memory is both expensive and power hungry. Implementing hash tables in SRAM is not appealing either because it requires significant overprovisioning to ensure that all forwarding table entries fit. Instead, we propose the BUFFALO architecture, which uses a small SRAM to store one Bloom filter of the addresses associated with each outgoing link. We provide a practical switch design leveraging flat addresses and shortest-path routing. BUFFALO gracefully handles false positives without reducing the packet-forwarding rate, while guaranteeing that packets reach their destinations with bounded stretch with high probability. We tune the sizes of Bloom filters to minimize false positives for a given memory size. We also handle routing changes and dynamically adjust Bloom filter sizes using counting Bloom filters in slow memory. 
Our extensive analysis, simulation, and prototype implementation in kernel-level Click show that BUFFALO significantly reduces memory cost, increases the scalability of the data plane, and improves packet-forwarding performance.", "title": "" }, { "docid": "5a57a638ad9d7adf6df86e1d834c752d", "text": "Autonomous vehicles operating in dynamic urban environments must account for the uncertainty arising from the behavior of other objects in the environment. For this purpose, we develop an integrated environment modeling and stochastic Model Predictive Control (MPC) framework. The trade-off between risk and conservativeness is managed by a risk factor which is a parameter in the control design process. The environment model consists of an Interacting Multiple Model Kalman Filter to estimate and predict the positions of target vehicles. The uncertain predictions are used to formulate a chance-constrained MPC problem. The overall goal is to develop a framework for safe autonomous navigation in the presence of uncertainty and study the effect of the risk parameter on controller performance. Simulations of an autonomous vehicle driving in the presence of moving vehicles show the effectiveness of the proposed framework.", "title": "" }, { "docid": "ed0f70e6e53666a6f5562cfb082a9a9a", "text": "Biometrics aims at reliable and robust identification of humans from their personal traits, mainly for security and authentication purposes, but also for identifying and tracking the users of smarter applications. Frequently considered modalities are fingerprint, face, iris, palmprint and voice, but there are many other possible biometrics, including gait, ear image, retina, DNA, and even behaviours. This chapter presents a survey of machine learning methods used for biometrics applications, and identifies relevant research issues. We focus on three areas of interest: offline methods for biometric template construction and recognition, information fusion methods for integrating multiple biometrics to obtain robust results, and methods for dealing with temporal information. By introducing exemplary and influential machine learning approaches in the context of specific biometrics applications, we hope to provide the reader with the means to create novel machine learning solutions to challenging biometrics problems.", "title": "" }, { "docid": "49b4dac5d9eea20c3f36fa0db99c02f4", "text": "CUBIC is a congestion control protocol for TCP (transmission control protocol) and the current default TCP algorithm in Linux. The protocol modifies the linear window growth function of existing TCP standards to be a cubic function in order to improve the scalability of TCP over fast and long distance networks. It also achieves more equitable bandwidth allocations among flows with different RTTs (round trip times) by making the window growth to be independent of RTT -- thus those flows grow their congestion window at the same rate. During steady state, CUBIC increases the window size aggressively when the window is far from the saturation point, and slowly when it is close to the saturation point. This feature allows CUBIC to be very scalable when the bandwidth and delay product of the network is large, and at the same time, be highly stable and also fair to standard TCP flows. The implementation of CUBIC in Linux has gone through several upgrades. 
This paper documents its design, implementation, performance and evolution as the default TCP algorithm of Linux.", "title": "" }, { "docid": "82dae1a1b6bcd1ca2af690253a6e650a", "text": "The task of automatic document summarization aims at generating short summaries for originally long documents. A good summary should cover the most important information of the original document or a cluster of documents, while being coherent, non-redundant and grammatically readable. Numerous approaches for automatic summarization have been developed to date. In this paper we give a self-contained, broad overview of recent progress made for document summarization within the last 5 years. Specifically, we emphasize on significant contributions made in recent years that represent the state-of-the-art of document summarization, including progress on modern sentence extraction approaches that improve concept coverage, information diversity and content coherence, as well as attempts from summarization frameworks that integrate sentence compression, and more abstractive systems that are able to produce completely new sentences. In addition, we review progress made for document summarization in domains, genres and applications that are different from traditional settings. We also point out some of the latest trends and highlight a few possible future directions.", "title": "" }, { "docid": "4261306ca632ada117bdb69af81dcb3f", "text": "Real-world deployments of wireless sensor networks (WSNs) require secure communication. It is important that a receiver is able to verify that sensor data was generated by trusted nodes. In some cases it may also be necessary to encrypt sensor data in transit. Recently, WSNs and traditional IP networks are more tightly integrated using IPv6 and 6LoWPAN. Available IPv6 protocol stacks can use IPsec to secure data exchange. Thus, it is desirable to extend 6LoWPAN such that IPsec communication with IPv6 nodes is possible. It is beneficial to use IPsec because the existing end-points on the Internet do not need to be modified to communicate securely with the WSN. Moreover, using IPsec, true end-to-end security is implemented and the need for a trustworthy gateway is removed. In this paper we provide End-to-End (E2E) secure communication between an IP enabled sensor nodes and a device on traditional Internet. This is the first compressed lightweight design, implementation, and evaluation of 6LoWPAN extension for IPsec on Contiki. Our extension supports both IPsec’s Authentication Header (AH) and Encapsulation Security Payload (ESP). Thus, communication endpoints are able to authenticate, encrypt and check the integrity of messages using standardized and established IPv6 mechanisms.", "title": "" }, { "docid": "b752f0f474b8f275f09d446818647564", "text": "n engl j med 377;15 nejm.org October 12, 2017 4. Aysola J, Tahirovic E, Troxel AB, et al. A randomized controlled trial of opt-in versus opt-out enrollment into a diabetes behavioral intervention. Am J Health Promot 2016 October 21 (Epub ahead of print). 5. Mehta SJ, Troxel AB, Marcus N, et al. Participation rates with opt-out enrollment in a remote monitoring intervention for patients with myocardial infarction. JAMA Cardiol 2016; 1: 847-8. 
DOI: 10.1056/NEJMp1707991", "title": "" }, { "docid": "9ca730e26162ecda57f223a3e413e289", "text": "The Wiskott-Aldrich syndrome (WAS) is an X-linked disorder characterized by a triad of diagnostic clinical elements: immunodeficiency, eczema, and hemorrhage caused by thrombocytopenia with small-sized platelets. The formal proof that hematopoietic cell transplantation (HCT) could be used to cure WAS revealed a requirement for both immunosuppression and myelosuppression that still underlies the standard approach to curative therapy today. The current short- and long-term toxicities of HCT are the main stumbling block for the ability to cure every patient with WAS and X-linked thrombocytopenia, and much remains to be done.", "title": "" }, { "docid": "f320e7f092040e72de062dc8203bbcfb", "text": "This research provides a security assessment of the Android framework-Google's software stack for mobile devices. The authors identify high-risk threats to the framework and suggest several security solutions for mitigating them.", "title": "" }, { "docid": "eaf6b4c216515c967ec7addea3916d0b", "text": "In an effort to provide high-quality preschool education, policymakers are increasingly requiring public preschool teachers to have at least a Bachelor's degree, preferably in early childhood education. Seven major studies of early care and education were used to predict classroom quality and children's academic outcomes from the educational attainment and major of teachers of 4-year-olds. The findings indicate largely null or contradictory associations, indicating that policies focused solely on increasing teachers' education will not suffice for improving classroom quality or maximizing children's academic gains. Instead, raising the effectiveness of early childhood education likely will require a broad range of professional development activities and supports targeted toward teachers' interactions with children.", "title": "" }, { "docid": "7350c0433fe1330803403e6aa03a2f26", "text": "An introduction is provided to Multi-Entity Bayesian Networks (MEBN), a logic system that integrates First Order Logic (FOL) with Bayesian probability theory. MEBN extends ordinary Bayesian networks to allow representation of graphical models with repeated sub-structures. Knowledge is encoded as a collection of Bayesian network fragments (MFrags) that can be instantiated and combined to form highly complex situation-specific Bayesian networks. A MEBN theory (MTheory) implicitly represents a joint probability distribution over possibly unbounded numbers of hypotheses, and uses Bayesian learning to refine a knowledge base as observations accrue. MEBN provides a logical foundation for the emerging collection of highly expressive probability-based languages. A running example illustrates the representation and reasoning power of the MEBN formalism.", "title": "" }, { "docid": "7cf2c2ce9edff28880bc399e642cee44", "text": "This paper provides new results and insights for tracking an extended target object modeled with an Elliptic Random Hypersurface Model (RHM). An Elliptic RHM specifies the relative squared Mahalanobis distance of a measurement source to the center of the target object by means of a one-dimensional random scaling factor. It is shown that uniformly distributed measurement sources on an ellipse lead to a uniformly distributed squared scaling factor. Furthermore, a Bayesian inference mechanisms tailored to elliptic shapes is introduced, which is also suitable for scenarios with high measurement noise. 
Closed-form expressions for the measurement update in case of Gaussian and uniformly distributed squared scaling factors are derived.", "title": "" }, { "docid": "0141a93f93a7cf3c8ee8fd705b0a9657", "text": "We systematically explore regularizing neural networks by penalizing low entropy output distributions. We show that penalizing low entropy output distributions, which has been shown to improve exploration in reinforcement learning, acts as a strong regularizer in supervised learning. Furthermore, we connect a maximum entropy based confidence penalty to label smoothing through the direction of the KL divergence. We exhaustively evaluate the proposed confidence penalty and label smoothing on 6 common benchmarks: image classification (MNIST and Cifar-10), language modeling (Penn Treebank), machine translation (WMT’14 English-to-German), and speech recognition (TIMIT and WSJ). We find that both label smoothing and the confidence penalty improve state-of-the-art models across benchmarks without modifying existing hyperparameters, suggesting the wide applicability of these regularizers.", "title": "" }, { "docid": "01a21dde4e7e14ed258cb05025ee4efc", "text": "Computerized and, more recently, Internet-based treatments for depression have been developed and tested in controlled trials. The aim of this meta-analysis was to summarize the effects of these treatments and investigate characteristics of studies that may be related to the effects. In particular, the authors were interested in the role of personal support when completing a computerized treatment. Following a literature search and coding, the authors included 12 studies, with a total of 2446 participants. Ten of the 12 studies were delivered via the Internet. The mean effect size of the 15 comparisons between Internet-based and other computerized psychological treatments vs. control groups at posttest was d = 0.41 (95% confidence interval [CI]: 0.29-0.54). However, this estimate was moderated by a significant difference between supported (d = 0.61; 95% CI: 0.45-0.77) and unsupported (d = 0.25; 95% CI: 0.14-0.35) treatments. The authors conclude that although more studies are needed, Internet and other computerized treatments hold promise as potentially evidence-based treatments of depression.", "title": "" }, { "docid": "02c8093183af96808a71b93ee3103996", "text": "The medical field stands to see significant benefits from the recent advances in deep learning. Knowing the uncertainty in the decision made by any machine learning algorithm is of utmost importance for medical practitioners. This study demonstrates the utility of using Bayesian LSTMs for classification of medical time series. Four medical time series datasets are used to show the accuracy improvement Bayesian LSTMs provide over standard LSTMs. Moreover, we show cherry-picked examples of confident and uncertain classifications of the medical time series. With simple modifications of the common practice for deep learning, significant improvements can be made for the medical practitioner and patient.", "title": "" }, { "docid": "7a77d8d381ec543033626be54119358a", "text": "The advent of continuous glucose monitoring (CGM) is a significant stride forward in our ability to better understand the glycemic status of our patients. Current clinical practice employs two forms of CGM: professional (retrospective or \"masked\") and personal (real-time) to evaluate and/or monitor glycemic control. 
Most studies using professional and personal CGM have been done in those with type 1 diabetes (T1D). However, this technology is agnostic to the type of diabetes and can also be used in those with type 2 diabetes (T2D). The value of professional CGM in T2D for physicians, patients, and researchers is derived from its ability to: (1) to discover previously unknown hyper- and hypoglycemia (silent and symptomatic); (2) measure glycemic control directly rather than through the surrogate metric of hemoglobin A1C (HbA1C) permitting the observation of a wide variety of metrics that include glycemic variability, the percent of time within, below and above target glucose levels, the severity of hypo- and hyperglycemia throughout the day and night; (3) provide actionable information for healthcare providers derived by the CGM report; (4) better manage patients on hemodialysis; and (5) effectively and efficiently analyze glycemic effects of new interventions whether they be pharmaceuticals (duration of action, pharmacodynamics, safety, and efficacy), devices, or psycho-educational. Personal CGM has also been successfully used in a small number of studies as a behavior modification tool in those with T2D. This comprehensive review describes the differences between professional and personal CGM and the evidence for the use of each form of CGM in T2D. Finally, the opinions of key professional societies on the use of CGM in T2D are presented.", "title": "" }, { "docid": "575edaeaaad1746f3afe17fa97faad39", "text": "The effects of stress on human behavior and performance have been well recognized. However, little has been investigated on the impact of psychological stress on pilots and astronauts. Living a life of constantly altered sleeping hours, pilots or astronauts are experiencing sleep stage fragments or long working hours with little or no sleep. The relationship between sleep and stress is reciprocal. Studies have shown that even in low-stress conditions, sleep-deprived participants demonstrate anger and anxiety a lot more frequently than participants with normal sleep quality and hours. In this paper, we introduce a new psychological health-monitoring framework. The prognostics sensors include Actiwatch for sleep monitoring, BioPatch from Zephyr to collect electrocardiogram (ECG) for heart rate and heart rate variability (HRV) for emotion changing monitoring. In this paper, a new Sleep-HRV-Emotion analytical model is proposed to assess the risk of emotion changes in real time. Prognostics analysis and prediction of psychological health is performed based on the proposed new model. Based on the output of the prognostics analysis, several stress intervention strategies are suggested both for the general public as well as for pilots and astronauts in particular. These interventions include the time to rest/sleep, hours of sleep, ultrasound pulses, and Yoga breathing. As such, these intervention strategies help recover, preserve, or improve pilot and astronaut performance, especially during long-duration space flights, and, ultimately, human settlement on other planetary bodies.", "title": "" } ]
scidocsrr
5f6bbcd47bd8f7dcf2e60bb16a67241a
Musical anhedonia: selective loss of emotional experience in listening to music.
[ { "docid": "08331361929f3634bc705221ec25287c", "text": "The present study used pleasant and unpleasant music to evoke emotion and functional magnetic resonance imaging (fMRI) to determine neural correlates of emotion processing. Unpleasant (permanently dissonant) music contrasted with pleasant (consonant) music showed activations of amygdala, hippocampus, parahippocampal gyrus, and temporal poles. These structures have previously been implicated in the emotional processing of stimuli with (negative) emotional valence; the present data show that a cerebral network comprising these structures can be activated during the perception of auditory (musical) information. Pleasant (contrasted to unpleasant) music showed activations of the inferior frontal gyrus (IFG, inferior Brodmann's area (BA) 44, BA 45, and BA 46), the anterior superior insula, the ventral striatum, Heschl's gyrus, and the Rolandic operculum. IFG activations appear to reflect processes of music-syntactic analysis and working memory operations. Activations of Rolandic opercular areas possibly reflect the activation of mirror-function mechanisms during the perception of the pleasant tunes. Rolandic operculum, anterior superior insula, and ventral striatum may form a motor-related circuitry that serves the formation of (premotor) representations for vocal sound production during the perception of pleasant auditory information. In all of the mentioned structures, except the hippocampus, activations increased over time during the presentation of the musical stimuli, indicating that the effects of emotion processing have temporal dynamics; the temporal dynamics of emotion have so far mainly been neglected in the functional imaging literature.", "title": "" } ]
[ { "docid": "4ae6afb7039936b2e6bcfc030fdb9cea", "text": "Apart from being used as a means of entertainment, computer games have been adopted for a long time as a valuable tool for learning. Computer games can offer many learning benefits to students since they can consume their attention and increase their motivation and engagement which can then lead to stimulate learning. However, most of the research to date on educational computer games, in particular learning versions of existing computer games, focused only on learner with typical development. Rather less is known about designing educational games for learners with special needs. The current research presents the results of a pilot study. The principal aim of this pilot study is to examine the interest of learners with hearing impairments in using an educational game for learning the sign language notation system SignWriting. The results found indicated that, overall, the application is useful, enjoyable and easy to use: the game can stimulate the students’ interest in learning such notations.", "title": "" }, { "docid": "cddd8adea2d507d937db4052627136fd", "text": "For the reception of Satellite Digital Audio Radio Services (SDARS) and Global Positioning Systems (GPS) transmitted via satellite an invisible antenna combination embedded in the roof of a car is presented. Without changing the surface of the vehicle the antenna combination can be completely embedded in a metal cavity and covered by a thick dielectric part of the roof. The measurement results show a high efficiency and a large bandwidth which exceeds the necessary bandwidth significantly for both services. The antenna combination offers a radiation pattern which is tailored to the reception of SDARS signals transmitted via highly-elliptical-orbit (HEO) satellites, geostationary earth orbit (GEO) satellites and terrestrial repeaters and for GPS signals transmitted via medium earth orbit (MEO) satellites. Although the antennas are mounted in such a small mounting volume, the antennas are decoupled optimally.", "title": "" }, { "docid": "fdbad1d98044bf6494bfd211e6116db8", "text": "This work addresses the problem of underwater archaeological surveys from the point of view of knowledge. We propose an approach based on underwater photogrammetry guided by a representation of knowledge used, as structured by ontologies. Survey data feed into to ontologies and photogrammetry in order to produce graphical results. This paper focuses on the use of ontologies during the exploitation of 3D results. JAVA software dedicated to photogram‐ metry and archaeological survey has been mapped onto an OWL formalism. The use of procedural attachment in a dual representation (JAVA OWL) of the involved concepts allows us to access computational facilities directly from OWL. As SWRL The use of rules illustrates very well such ‘double formalism’ as well as the use of computational capabilities of ‘rules logical expression’. We present an application that is able to read the ontology populated with a photo‐ grammetric survey data. Once the ontology is read, it is possible to produce a 3D representation of the individuals and observing graphically the results of logical spatial queries on the ontology. 
This work is done on a very important underwater archaeological site in Malta named Xlendi, probably the most ancient shipwreck of the central Mediterranean Sea.", "title": "" }, { "docid": "bbb08c98a2265c53ba590e0872e91e1d", "text": "Reinforcement learning (RL) is one of the most general approaches to learning control. Its applicability to complex motor systems, however, has been largely impossible so far due to the computational difficulties that reinforcement learning encounters in high dimensional continuous state-action spaces. In this paper, we derive a novel approach to RL for parameterized control policies based on the framework of stochastic optimal control with path integrals. While solidly grounded in optimal control theory and estimation theory, the update equations for learning are surprisingly simple and have no danger of numerical instabilities as neither matrix inversions nor gradient learning rates are required. Empirical evaluations demonstrate significant performance improvements over gradient-based policy learning and scalability to high-dimensional control problems. Finally, a learning experiment on a robot dog illustrates the functionality of our algorithm in a real-world scenario. We believe that our new algorithm, Policy Improvement with Path Integrals (PI2), offers currently one of the most efficient, numerically robust, and easy to implement algorithms for RL in robotics.", "title": "" }, { "docid": "91cb22c1dc8e0f92df0848f665985a18", "text": "Social tags, serving as a textual source of simple but useful semantic metadata to reflect the user preference or describe the web objects, has been widely used in many applications. However, social tags have several unique characteristics, i.e., sparseness and data coupling (i.e., non-IIDness), which makes existing text analysis methods such as LDA not directly applicable. In this paper, we propose a new generative algorithm for social tag analysis named joint latent Dirichlet allocation, which models the generation of tags based on both the users and the objects, and thus accounts for the coupling relationships among social tags. The model introduces two latent factors that jointly influence tag generation: the user's latent interest factor and the object's latent topic factor, formulated as user-topic distribution matrix and object-topic distribution matrix, respectively. A Gibbs sampling approach is adopted to simultaneously infer the above two matrices as well as a topic-word distribution matrix. Experimental results on four social tagging datasets have shown that our model is able to capture more reasonable topics and achieves better performance than five state-of-the-art topic models in terms of the widely used point-wise mutual information metric. In addition, we analyze the learnt topics showing that our model recovers more themes from social tags while LDA may lead the topic vanishing problems, and demonstrate its advantages in the social recommendation by evaluating the retrieval results with mean reciprocal rank metric. Finally, we explore the joint procedure of our model in depth to show the non-IID characteristic of social tagging process.", "title": "" }, { "docid": "4b408cc1c15e6099c16fe0a94923f86e", "text": "Speaker diarization is the task of determining “who spoke when?” in an audio or video recording that contains an unknown amount of speech and also an unknown number of speakers. 
Initially, it was proposed as a research topic related to automatic speech recognition, where speaker diarization serves as an upstream processing step. Over recent years, however, speaker diarization has become an important key technology for many tasks, such as navigation, retrieval, or higher level inference on audio data. Accordingly, many important improvements in accuracy and robustness have been reported in journals and conferences in the area. The application domains, from broadcast news, to lectures and meetings, vary greatly and pose different problems, such as having access to multiple microphones and multimodal information or overlapping speech. The most recent review of existing technology dates back to 2006 and focuses on the broadcast news domain. In this paper, we review the current state-of-the-art, focusing on research developed since 2006 that relates predominantly to speaker diarization for conference meetings. Finally, we present an analysis of speaker diarization performance as reported through the NIST Rich Transcription evaluations on meeting data and identify important areas for future research.", "title": "" }, { "docid": "1196ab65ddfcedb8775835f2e176576f", "text": "Faster R-CNN achieves state-of-the-art performance on generic object detection. However, a simple application of this method to a large vehicle dataset performs unimpressively. In this paper, we take a closer look at this approach as it applies to vehicle detection. We conduct a wide range of experiments and provide a comprehensive analysis of the underlying structure of this model. We show that through suitable parameter tuning and algorithmic modification, we can significantly improve the performance of Faster R-CNN on vehicle detection and achieve competitive results on the KITTI vehicle dataset. We believe our studies are instructive for other researchers investigating the application of Faster R-CNN to their problems and datasets.", "title": "" }, { "docid": "01b9bf49c88ae37de79b91edeae20437", "text": "While online, some people self-disclose or act out more frequently or intensely than they would in person. This article explores six factors that interact with each other in creating this online disinhibition effect: dissociative anonymity, invisibility, asynchronicity, solipsistic introjection, dissociative imagination, and minimization of authority. Personality variables also will influence the extent of this disinhibition. Rather than thinking of disinhibition as the revealing of an underlying \"true self,\" we can conceptualize it as a shift to a constellation within self-structure, involving clusters of affect and cognition that differ from the in-person constellation.", "title": "" }, { "docid": "09e98de8c53d4695ec7054c4d6451bce", "text": "This paper presents an intelligent traffic management system using RFID technology. The system is capable of providing practically important traffic data which would aid in reducing the travel time for the users. Also, it can be used for other purposes like tracing of stolen cars, vehicles that evade traffic signals/tickets, toll collection or vehicle taxes etc. The system consists of a passive tag, an RFID reader, a microcontroller, a GPRS module, a high-speed server with a database system and a user module. Using RFID technology, this system collects the required data and calculates average speed of vehicles on each road of a city under consideration. 
It then transmits the acquired data i.e., average speed calculated at various junctions to the central computation server which calculates the time taken by a vehicle to travel in a particular road. Through Dijkstra's algorithm, the central server computes the fastest route to all the nodes (junctions) considering each node as the initial point in the city. Therefore, the system creates a map of shortest time paths of the whole city. This data is accessed by users through an interface module placed in their vehicles.", "title": "" }, { "docid": "00cabf8e41382d8a1b206da952b8633a", "text": "Autonomous vehicle operations in outdoor environments challenge robotic perception. Construction, mining, agriculture, and planetary exploration environments are examples in which the presence of dust, fog, rain, changing illumination due to low sun angles, and lack of contrast can dramatically degrade conventional stereo and laser sensing. Nonetheless, environment perception can still succeed under compromised visibility through the use of a millimeter-wave radar. Radar also allows for multiple object detection within a single beam, whereas other range sensors are limited to one target return per emission. However, radar has shortcomings as well, such as a large footprint, specularity effects, and limited range resolution, all of which may result in poor environment survey or difficulty in interpretation. This paper presents a novel method for ground segmentation using a millimeter-wave radar mounted on a ground vehicle. Issues relevant to short-range perception in an outdoor environment are described along with field experiments and a quantitative comparison to laser data. The ability to classify the ground is successfully demonstrated in clear and low-visibility conditions, and significant improvement in range accuracy is shown. Finally, conclusions are drawn on the utility of millimeter-wave radar as a robotic sensor for persistent and accurate perception in natural scenarios. C © 2011 Wiley Periodicals, Inc.", "title": "" }, { "docid": "60a3538ec6a64af6f8fd447ed0fb79f5", "text": "Several Pinned Photodiode (PPD) CMOS Image Sensors (CIS) are designed, manufactured, characterized and exposed biased to ionizing radiation up to 10 kGy(SiO2 ). In addition to the usually reported dark current increase and quantum efficiency drop at short wavelengths, several original radiation effects are shown: an increase of the pinning voltage, a decrease of the buried photodiode full well capacity, a large change in charge transfer efficiency, the creation of a large number of Total Ionizing Dose (TID) induced Dark Current Random Telegraph Signal (DC-RTS) centers active in the photodiode (even when the Transfer Gate (TG) is accumulated) and the complete depletion of the Pre-Metal Dielectric (PMD) interface at the highest TID leading to a large dark current and the loss of control of the TG on the dark current. The proposed mechanisms at the origin of these degradations are discussed. It is also demonstrated that biasing (i.e., operating) the PPD CIS during irradiation does not enhance the degradations compared to sensors grounded during irradiation.", "title": "" }, { "docid": "bde1d85da7f1ac9c9c30b0fed448aac6", "text": "We survey temporal description logics that are based on standard temporal logics such as LTL and CTL. 
In particular, we concentrate on the computational complexity of the satisfiability problem and algorithms for deciding it.", "title": "" }, { "docid": "ebd4901b9352f98f879c27f50e999ef1", "text": "This paper describes a probabilistic approach to global localization within an in-door environment with minimum infrastructure requirements. Global localization is a flavor of localization in which the device is unaware of its initial position and has to determine the same from scratch. Localization is performed based on the received signal strength indication (RSSI) as the only sensor reading, which is provided by most off-the-shelf wireless network interface cards. Location and orientation estimates are computed using Bayesian filtering on a sample set derived using Monte-Carlo sampling. Research leading to the proposed method is outlined along with results and conclusions from simulations and real life experiments.", "title": "" }, { "docid": "1ad65bf27c4c4037d85a97c0cead8c41", "text": "This study explores the issue of effectiveness within virtual teams — groups of people who work together although they are often dispersed across space, time, and/or organizational boundaries. Due to the recent trend towards corporate restructuring, which can, in part, be attributed to an increase in corporate layoffs, mergers and acquisitions, competition, and globalization, virtual teams have become critical for companies to survive. Globalization of the marketplace alone, for that matter, makes such distributed work groups the primary operating units needed to achieve a competitive advantage in this ever-changing business environment. In an effort to determine the factors that contribute to/inhibit the success of a virtual team, a survey was distributed to a total of eight companies in the high technology, agriculture, and professional services industries. Data was then collected from 67 individuals who comprised a total of 12 virtual teams from these companies. Results indicated that several factors were positively correlated to the effectiveness of the participating teams. The teams’ processes and team members’ relations presented the strongest relationships to team performance and team member satisfaction, while the selection procedures and executive leadership styles also exhibited moderate associations to these measures of effectiveness. Analysis of predictor variables such as the design process, other internal group dynamics, and additional external support mechanisms, however, depicted weaker relations. Although the connections between the teams’ tools and technologies and communication patterns and the teams’ effectiveness measures did not prove significant, content analysis of the participants’ narrative responses to questions regarding the greatest challenges to virtual teams suggested otherwise. Beyond the traditional strategies used to enhance a team’s effectiveness, further efforts directed towards the specific technology and communication-related issues that concern dispersed team members are needed to supplement the set of best practices identified in the current study. # 2001 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "f447d9aadcaa4fb56f951838f84eb6af", "text": "A systematic method for developing isolated buck-boost (IBB) converters is proposed in this paper, and single-stage power conversion, soft-switching operation, and high-efficiency performance can be achieved with the proposed family of converters. 
On the basis of a nonisolated two-switch buck-boost converter, the proposed IBB converters are generated by replacing the dc buck-cell and boost-cell in the non-IBB converter with the ac buck-cell and boost-cell, respectively. Furthermore, a family of semiactive rectifiers (SARs) is proposed to serve as the secondary rectification circuit for the IBB converters, which helps to extend the converter voltage gain and reduce the voltage stresses on the devices in the rectification circuit. Hence, the efficiency is improved by employing a transformer with a smaller turns ratio and reduced parasitic parameters, by using low-voltage rating MOSFETs and diodes with better switching and conduction performances. A full-bridge IBB converter is proposed and analyzed in detail as an example. The phase-shift modulation strategy is applied to the full-bridge IBB converter to achieve IBB conversion. Moreover, soft-switching performance of all active switches and diodes can be achieved over a wide load and voltage range by the proposed converter and control strategy. A 380-V-output prototype is fabricated to verify the effectiveness of the proposed family of IBB converters, the SARs, and the control strategies.", "title": "" }, { "docid": "bb2c01181664baaf20012e321b5e1f9f", "text": "Systems able to suggest items that a user may be interested in are usually named as Recommender Systems. The new emergent field of Recommender Systems has undoubtedly gained much interest in the research community. Although Recommender Systems work well in suggesting books, movies and items of general interest, many users express today a feeling that the existing systems don’t actually identify them as individual personalities. This dissatisfaction turned the research society towards the development of new approaches on Recommender Systems, more user-centric. A methodology originated from Decision Theory is exploited herein, aiming to address to the lack of personalization in Recommender Systems by integrating the user in the recommendation process.", "title": "" }, { "docid": "9994825fcf1d5a5e252937af66255c8d", "text": "Common-sense physical reasoning is an essential ingredient for any intelligent agent operating in the real-world. For example, it can be used to simulate the environment, or to infer the state of parts of the world that are currently unobserved. In order to match real-world conditions this causal knowledge must be learned without access to supervised data. To solve this problem, we present a novel method that incorporates prior knowledge about the compositional nature of human perception to factor interactions between object-pairs and to learn them efficiently. It learns to discover objects and to model physical interactions between them from raw visual images in a purely unsupervised fashion. On videos of bouncing balls we show the superior modelling capabilities of our method compared to other unsupervised neural approaches, that do not incorporate such prior knowledge. We show its ability to handle occlusion and that it can extrapolate learned knowledge to environments with different numbers of objects.", "title": "" }, { "docid": "65060deb3fafc21de3db4b9946c6df06", "text": "In this paper we describe the Wireless Power-Controlled Outlet Module (WPCOM) with a scalable mechanism for home power management which we have developed. 
The WPCOM integrates the multiple AC power sockets and a simple low-power microcontroller into a power outlet to switch the power of the sockets ON/OFF and to measure the power consumption of plugged electric home appliances. Our WPCOM consists of six scalable modules, that is, the Essential Control Module, the Bluetooth Module, the GSM Module, the Ethernet Module, the SD Card Module and the Power Measuring Module, which together provide an indoor wireless, and an outdoor remote control and monitor of electric home appliances. We have designed a PDA control software and remote control software which support the Graphic User Interface, thus allowing the user to easily monitor the electric home appliances through the PDA and the Internet individually. In addition, we use a Short Message Service to achieve control and monitoring through a GSM cellular mobile phone for remote use anytime and anywhere.", "title": "" }, { "docid": "21afffc79652f8e6c0f5cdcd74a03672", "text": "It’s useful to automatically transform an image from its original form to some synthetic form (style, partial contents, etc.), while keeping the original structure or semantics. We define this requirement as the ”image-to-image translation” problem, and propose a general approach to achieve it, based on deep convolutional and conditional generative adversarial networks (GANs), which has gained a phenomenal success to learn mapping images from noise input since 2014. In this work, we develop a two step (unsupervised) learning method to translate images between different domains by using unlabeled images without specifying any correspondence between them, so that to avoid the cost of acquiring labeled data. Compared with prior works, we demonstrated the capacity of generality in our model, by which variance of translations can be conduct by a single type of model. Such capability is desirable in applications like bidirectional translation", "title": "" }, { "docid": "84b018fa45e06755746309014854bb9a", "text": "For years, ontologies have been known in computer science as consensual models of domains of discourse, usually implemented as formal definitions of the relevant conceptual entities. Researchers have written much about the potential benefits of using them, and most of us regard ontologies as central building blocks of the semantic Web and other semantic systems. Unfortunately, the number and quality of actual, \"non-toy\" ontologies available on the Web today is remarkably low. This implies that the semantic Web community has yet to build practically useful ontologies for a lot of relevant domains in order to make the semantic Web a reality. Theoretically minded advocates often assume that the lack of ontologies is because the \"stupid business people haven't realized ontologies' enormous benefits.\" As a liberal market economist, the author assumes that humans can generally figure out what's best for their well-being, at least in the long run, and that they act accordingly. In other words, the fact that people haven't yet created as many useful ontologies as the ontology research community would like might indicate either unresolved technical limitations or the existence of sound rationales for why individuals refrain from building them - or both. Indeed, several social and technical difficulties exist that put a brake on developing and eventually constrain the space of possible ontologies", "title": "" } ]
scidocsrr
509d38ceda71f68928cfcc16c6e5e604
Protected area needs in a changing climate
[ { "docid": "a28be57b2eb045a525184b67afb14bb2", "text": "Climate change has already triggered species distribution shifts in many parts of the world. Increasing impacts are expected for the future, yet few studies have aimed for a general understanding of the regional basis for species vulnerability. We projected late 21st century distributions for 1,350 European plants species under seven climate change scenarios. Application of the International Union for Conservation of Nature and Natural Resources Red List criteria to our projections shows that many European plant species could become severely threatened. More than half of the species we studied could be vulnerable or threatened by 2080. Expected species loss and turnover per pixel proved to be highly variable across scenarios (27-42% and 45-63% respectively, averaged over Europe) and across regions (2.5-86% and 17-86%, averaged over scenarios). Modeled species loss and turnover were found to depend strongly on the degree of change in just two climate variables describing temperature and moisture conditions. Despite the coarse scale of the analysis, species from mountains could be seen to be disproportionably sensitive to climate change (approximately 60% species loss). The boreal region was projected to lose few species, although gaining many others from immigration. The greatest changes are expected in the transition between the Mediterranean and Euro-Siberian regions. We found that risks of extinction for European plants may be large, even in moderate scenarios of climate change and despite inter-model variability.", "title": "" } ]
[ { "docid": "795a4d9f2dc10563dfee28c3b3cd0f08", "text": "A wide-band probe fed patch antenna with low cross polarization and symmetrical broadside radiation pattern is proposed and studied. By employing a novel meandering probe feed and locating a patch about 0.1/spl lambda//sub 0/ above a ground plane, a patch antenna with 30% impedance bandwidth (SWR<2) and 9 dBi gain is designed. The far field radiation pattern of the antenna is stable across the operating bandwidth. Parametric studies and design guidelines of the proposed feeding structure are provided.", "title": "" }, { "docid": "72c79b86a91f7c8453cd6075314a6b4d", "text": "This talk aims to introduce LATEX users to XSL-FO. It does not attempt to give an exhaustive view of XSL-FO, but allows a LATEX user to get started. We show the common and different points between these two approaches of word processing.", "title": "" }, { "docid": "888de1004e212e1271758ac35ff9807d", "text": "We present the design and implementation of iVoLVER, a tool that allows users to create visualizations without textual programming. iVoLVER is designed to enable flexible acquisition of many types of data (text, colors, shapes, quantities, dates) from multiple source types (bitmap charts, webpages, photographs, SVGs, CSV files) and, within the same canvas, supports transformation of that data through simple widgets to construct interactive animated visuals. Aside from the tool, which is web-based and designed for pen and touch, we contribute the design of the interactive visual language and widgets for extraction, transformation, and representation of data. We demonstrate the flexibility and expressive power of the tool through a set of scenarios, and discuss some of the challenges encountered and how the tool fits within the current infovis tool landscape.", "title": "" }, { "docid": "718e31eabfd386768353f9b75d9714eb", "text": "The mathematical structure of Sudoku puzzles is akin to hard constraint satisfaction problems lying at the basis of many applications, including protein folding and the ground-state problem of glassy spin systems. Via an exact mapping of Sudoku into a deterministic, continuous-time dynamical system, here we show that the difficulty of Sudoku translates into transient chaotic behavior exhibited by this system. We also show that the escape rate κ, an invariant of transient chaos, provides a scalar measure of the puzzle's hardness that correlates well with human difficulty ratings. Accordingly, η = -log₁₀κ can be used to define a \"Richter\"-type scale for puzzle hardness, with easy puzzles having 0 < η ≤ 1, medium ones 1 < η ≤ 2, hard with 2 < η ≤ 3 and ultra-hard with η > 3. To our best knowledge, there are no known puzzles with η > 4.", "title": "" }, { "docid": "b2817d85893a624574381eee4f8648db", "text": "A coupled-fed antenna design capable of covering eight-band WWAN/LTE operation in a smartphone and suitable to integrate with a USB connector is presented. The antenna comprises an asymmetric T-shaped monopole as a coupling feed and a radiator as well, and a coupled-fed loop strip shorted to the ground plane. The antenna generates a wide lower band to cover (824-960 MHz) for GSM850/900 operation and a very wide upper band of larger than 1 GHz to cover the GPS/GSM1800/1900/UMTS/LTE2300/2500 operation (1565-2690 MHz). The proposed antenna provides wideband operation and exhibits great flexible behavior. 
The antenna is capable of providing eight-band operation for nine different sizes of PCBs, and enhance impedance matching only by varying a single element length, L. Details of proposed antenna, parameters and performance are presented and discussed in this paper.", "title": "" }, { "docid": "d197875ea8637bf36d2746a2a1861c23", "text": "There are billions of Internet of things (IoT) devices connecting to the Internet and the number is increasing. As a still ongoing technology, IoT can be used in different fields, such as agriculture, healthcare, manufacturing, energy, retailing and logistics. IoT has been changing our world and the way we live and think. However, IoT has no uniform architecture and there are different kinds of attacks on the different layers of IoT, such as unauthorized access to tags, tag cloning, sybil attack, sinkhole attack, denial of service attack, malicious code injection, and man in middle attack. IoT devices are more vulnerable to attacks because it is simple and some security measures can not be implemented. We analyze the privacy and security challenges in the IoT and survey on the corresponding solutions to enhance the security of IoT architecture and protocol. We should focus more on the security and privacy on IoT and help to promote the development of IoT.", "title": "" }, { "docid": "3d12dea4ae76c5af54578262996fe0bb", "text": "We introduce a two-layer undirected graphical model, calle d a “Replicated Softmax”, that can be used to model and automatically extract low -dimensional latent semantic representations from a large unstructured collec ti n of documents. We present efficient learning and inference algorithms for thi s model, and show how a Monte-Carlo based method, Annealed Importance Sampling, c an be used to produce an accurate estimate of the log-probability the model a ssigns to test data. This allows us to demonstrate that the proposed model is able to g neralize much better compared to Latent Dirichlet Allocation in terms of b th the log-probability of held-out documents and the retrieval accuracy.", "title": "" }, { "docid": "a58930da8179d71616b8b6ef01ed1569", "text": "Collecting sensor data results in large temporal data sets which need to be visualized, analyzed, and presented. One-dimensional time-series charts are used, but these present problems when screen resolution is small in comparison to the data. This can result in severe over-plotting, giving rise for the requirement to provide effective rendering and methods to allow interaction with the detailed data. Common solutions can be categorized as multi-scale representations, frequency based, and lens based interaction techniques. In this paper, we comparatively evaluate existing methods, such as Stack Zoom [15] and ChronoLenses [38], giving a graphical overview of each and classifying their ability to explore and interact with data. We propose new visualizations and other extensions to the existing approaches. We undertake and report an empirical study and a field study using these techniques.", "title": "" }, { "docid": "73adcdf18b86ab3598731d75ac655f2c", "text": "Many individuals exhibit unconscious body movements called mannerisms while speaking. These repeated changes often distract the audience when not relevant to the verbal context. We present an intelligent interface that can automatically extract human gestures using Microsoft Kinect to make speakers aware of their mannerisms. 
We use a sparsity-based algorithm, Shift Invariant Sparse Coding, to automatically extract the patterns of body movements. These patterns are displayed in an interface with subtle question and answer-based feedback scheme that draws attention to the speaker's body language. Our formal evaluation with 27 participants shows that the users became aware of their body language after using the system. In addition, when independent observers annotated the accuracy of the algorithm for every extracted pattern, we find that the patterns extracted by our algorithm is significantly (p<0.001) more accurate than just random selection. This represents a strong evidence that the algorithm is able to extract human-interpretable body movement patterns. An interactive demo of AutoManner is available at http://tinyurl.com/AutoManner.", "title": "" }, { "docid": "154c40c2fab63ad15ded9b341ff60469", "text": "ICU mortality risk prediction may help clinicians take effective interventions to improve patient outcome. Existing machine learning approaches often face challenges in integrating a comprehensive panel of physiologic variables and presenting to clinicians interpretable models. We aim to improve both accuracy and interpretability of prediction models by introducing Subgraph Augmented Non-negative Matrix Factorization (SANMF) on ICU physiologic time series. SANMF converts time series into a graph representation and applies frequent subgraph mining to automatically extract temporal trends. We then apply non-negative matrix factorization to group trends in a way that approximates patient pathophysiologic states. Trend groups are then used as features in training a logistic regression model for mortality risk prediction, and are also ranked according to their contribution to mortality risk. We evaluated SANMF against four empirical models on the task of predicting mortality or survival 30 days after discharge from ICU using the observed physiologic measurements between 12 and 24 hours after admission. SANMF outperforms all comparison models, and in particular, demonstrates an improvement in AUC (0.848 vs. 0.827, p<0.002) compared to a state-of-the-art machine learning method that uses manual feature engineering. Feature analysis was performed to illuminate insights and benefits of subgraph groups in mortality risk prediction.", "title": "" }, { "docid": "bfa38fded95303834d487cb27d228ad7", "text": "Apparel classification encompasses the identification of an outfit in an image. The area has its applications in social media advertising, e-commerce and criminal law. In our work, we introduce a new method for shopping apparels online. This paper describes our approach to classify images using Convolutional Neural Networks. We concentrate mainly on two aspects of apparel classification: (1) Multiclass classification of apparel type and (2) Similar Apparel retrieval based on the query image. This shopping technique relieves the burden of storing a lot of information related to the images and traditional ways of filtering search results can be replaced by image filters", "title": "" }, { "docid": "73bf620a97b2eadeb2398dd718b85fe8", "text": "The Semeval task 5 was an opportunity for experimenting with the key term extraction module of GROBID, a system for extracting and generating bibliographical information from technical and scientific documents. The tool first uses GROBID’s facilities for analyzing the structure of scientific articles, resulting in a first set of structural features. 
A second set of features captures content properties based on phraseness, informativeness and keywordness measures. Two knowledge bases, GRISP and Wikipedia, are then exploited for producing a last set of lexical/semantic features. Bagged decision trees appeared to be the most efficient machine learning algorithm for generating a list of ranked key term candidates. Finally a post ranking was realized based on statistics of cousage of keywords in HAL, a large Open Access publication repository.", "title": "" }, { "docid": "80ff93b5f2e0ff3cff04c314e28159fc", "text": "In the past 30 years there has been a growing body of research using different methods (behavioural, electrophysiological, neuropsychological, TMS and imaging studies) asking whether processing words from different grammatical classes (especially nouns and verbs) engage different neural systems. To date, however, each line of investigation has provided conflicting results. Here we present a review of this literature, showing that once we take into account the confounding in most studies between semantic distinctions (objects vs. actions) and grammatical distinction (nouns vs. verbs), and the conflation between studies concerned with mechanisms of single word processing and those studies concerned with sentence integration, the emerging picture is relatively clear-cut: clear neural separability is observed between the processing of object words (nouns) and action words (typically verbs), grammatical class effects emerge or become stronger for tasks and languages imposing greater processing demands. These findings indicate that grammatical class per se is not an organisational principle of knowledge in the brain; rather, all the findings we review are compatible with two general principles described by typological linguistics as underlying grammatical class membership across languages: semantic/pragmatic, and distributional cues in language that distinguish nouns from verbs. These two general principles are incorporated within an emergentist view which takes these constraints into account.", "title": "" }, { "docid": "f8b0dcd771e7e7cf50a05cf7221f4535", "text": "Studies on monocyte and macrophage biology and differentiation have revealed the pleiotropic activities of these cells. Macrophages are tissue sentinels that maintain tissue integrity by eliminating/repairing damaged cells and matrices. In this M2-like mode, they can also promote tumor growth. Conversely, M1-like macrophages are key effector cells for the elimination of pathogens, virally infected, and cancer cells. Macrophage differentiation from monocytes occurs in the tissue in concomitance with the acquisition of a functional phenotype that depends on microenvironmental signals, thereby accounting for the many and apparently opposed macrophage functions. Many questions arise. When monocytes differentiate into macrophages in a tissue (concomitantly adopting a specific functional program, M1 or M2), do they all die during the inflammatory reaction, or do some of them survive? Do those that survive become quiescent tissue macrophages, able to react as naïve cells to a new challenge? Or, do monocyte-derived tissue macrophages conserve a \"memory\" of their past inflammatory activation? 
This review will address some of these important questions under the general framework of the role of monocytes and macrophages in the initiation, development, resolution, and chronicization of inflammation.", "title": "" }, { "docid": "f71b1df36ee89cdb30a1dd29afc532ea", "text": "Finite state machines are a standard tool to model event-based control logic, and dynamic programming is a staple of optimal decision-making. We combine these approaches in the context of radar resource management for Naval surface warfare. There is a friendly (Blue) force in the open sea, equipped with one multi-function radar and multiple ships. The enemy (Red) force consists of missiles that target the Blue force's radar. The mission of the Blue force is to foil the enemy's threat by careful allocation of radar resources. Dynamically composed finite state machines are used to formalize the model of the battle space and dynamic programming is applied to our dynamic state machine model to generate an optimal policy. To achieve this in near-real-time and a changing environment, we use approximate dynamic programming methods. Example scenario illustrating the model and simulation results are presented.", "title": "" }, { "docid": "8bdd02547be77f4c825c9aed8016ddf8", "text": "Global terrestrial ecosystems absorbed carbon at a rate of 1–4 Pg yr-1 during the 1980s and 1990s, offsetting 10–60 per cent of the fossil-fuel emissions. The regional patterns and causes of terrestrial carbon sources and sinks, however, remain uncertain. With increasing scientific and political interest in regional aspects of the global carbon cycle, there is a strong impetus to better understand the carbon balance of China. This is not only because China is the world’s most populous country and the largest emitter of fossil-fuel CO2 into the atmosphere, but also because it has experienced regionally distinct land-use histories and climate trends, which together control the carbon budget of its ecosystems. Here we analyse the current terrestrial carbon balance of China and its driving mechanisms during the 1980s and 1990s using three different methods: biomass and soil carbon inventories extrapolated by satellite greenness measurements, ecosystem models and atmospheric inversions. The three methods produce similar estimates of a net carbon sink in the range of 0.19–0.26 Pg carbon (PgC) per year, which is smaller than that in the conterminous United States but comparable to that in geographic Europe. We find that northeast China is a net source of CO2 to the atmosphere owing to overharvesting and degradation of forests. By contrast, southern China accounts for more than 65 per cent of the carbon sink, which can be attributed to regional climate change, large-scale plantation programmes active since the 1980s and shrub recovery. Shrub recovery is identified as the most uncertain factor contributing to the carbon sink. Our data and model results together indicate that China’s terrestrial ecosystems absorbed 28–37 per cent of its cumulated fossil carbon emissions during the 1980s and 1990s.", "title": "" }, { "docid": "232bf10d578c823b0cd98a3641ace44a", "text": "The effect of economic globalization on the number of transnational terrorist incidents within countries is analyzed statistically, using a sample of 112 countries from 1975 to 1997. 
Results show that trade, foreign direct investment (FDI), and portfolio investment have no direct positive effect on transnational terrorist incidents within countries and that economic developments of a country and its top trading partners reduce the number of terrorist incidents inside the country. To the extent that trade and FDI promote economic development, they have an indirect negative effect on transnational terrorism.", "title": "" }, { "docid": "66fd7de53986e8c4a7ed08ed88f0b45b", "text": "BACKGROUND\nConcerns regarding the risk of estrogen replacement have resulted in a significant increase in the use of soy products by menopausal women who, despite the lack of evidence of the efficacy of such products, seek alternatives to menopausal hormone therapy. Our goal was to determine the efficacy of soy isoflavone tablets in preventing bone loss and menopausal symptoms.\n\n\nMETHODS\nThe study design was a single-center, randomized, placebo-controlled, double-blind clinical trial conducted from July 1, 2004, through March 31, 2009. Women aged 45 to 60 years within 5 years of menopause and with a bone mineral density T score of -2.0 or higher in the lumbar spine or total hip were randomly assigned, in equal proportions, to receive daily soy isoflavone tablets, 200 mg, or placebo. The primary outcome was changes in bone mineral density in the lumbar spine, total hip, and femoral neck at the 2-year follow-up. Secondary outcomes included changes in menopausal symptoms, vaginal cytologic characteristics, N -telopeptide of type I bone collagen, lipids, and thyroid function.\n\n\nRESULTS\nAfter 2 years, no significant differences were found between the participants receiving soy tablets (n = 122) and those receiving placebo (n = 126) regarding changes in bone mineral density in the spine (-2.0% and -2.3%, respectively), the total hip (-1.2% and -1.4%, respectively), or the femoral neck (-2.2% and -2.1%, respectively). A significantly larger proportion of participants in the soy group experienced hot flashes and constipation compared with the control group. No significant differences were found between groups in other outcomes.\n\n\nCONCLUSIONS\nIn this population, the daily administration of tablets containing 200 mg of soy isoflavones for 2 years did not prevent bone loss or menopausal symptoms.\n\n\nTRIAL REGISTRATION\nclinicaltrials.gov Identifier: NCT00076050.", "title": "" }, { "docid": "a63db4f5e588e23e4832eae581fc1c4b", "text": "Driver drowsiness is a major cause of mortality in traffic accidents worldwide. Electroencephalographic (EEG) signal, which reflects the brain activities, is more directly related to drowsiness. Thus, many Brain-Machine-Interface (BMI) systems have been proposed to detect driver drowsiness. However, detecting driver drowsiness at its early stage poses a major practical hurdle when using existing BMI systems. This study proposes a context-aware BMI system aimed to detect driver drowsiness at its early stage by enriching the EEG data with the intensity of head-movements. The proposed system is carefully designed for low-power consumption with on-chip feature extraction and low energy Bluetooth connection. Also, the proposed system is implemented using JAVA programming language as a mobile application for on-line analysis. In total, 266 datasets obtained from six subjects who participated in a one-hour monotonous driving simulation experiment were used to evaluate this system. 
According to a video-based reference, the proposed system obtained an overall detection accuracy of 82.71% for classifying alert and slightly drowsy events by using EEG data alone and 96.24% by using the hybrid data of head-movement and EEG. These results indicate that the combination of EEG data and head-movement contextual information constitutes a robust solution for the early detection of driver drowsiness.", "title": "" }, { "docid": "dba13fea4538f23ea1208087d3e81d6b", "text": "This paper investigates the effectiveness of using MeSH® in PubMed through its automatic query expansion process: Automatic Term Mapping (ATM). We run Boolean searches based on a collection of 55 topics and about 160,000 MEDLINE® citations used in the 2006 and 2007 TREC Genomics Tracks. For each topic, we first automatically construct a query by selecting keywords from the question. Next, each query is expanded by ATM, which assigns different search tags to terms in the query. Three search tags: [MeSH Terms], [Text Words], and [All Fields] are chosen to be studied after expansion because they all make use of the MeSH field of indexed MEDLINE citations. Furthermore, we characterize the two different mechanisms by which the MeSH field is used. Retrieval results using MeSH after expansion are compared to those solely based on the words in MEDLINE title and abstracts. The aggregate retrieval performance is assessed using both F-measure and mean rank precision. Experimental results suggest that query expansion using MeSH in PubMed can generally improve retrieval performance, but the improvement may not affect end PubMed users in realistic situations.", "title": "" } ]
scidocsrr
0f10327bfb8a54d1f87bcbc48c4b3125
A semiotic analysis of the genetic information system
[ { "docid": "0be3178ff2f412952934a49084ee8edc", "text": "This article introduces the physics of information in the context of molecular biology and genomics. Entropy and information, the two central concepts of Shannon’s theory of information and communication, are often confused with each other but play transparent roles when applied to statistical ensembles (i.e., identically prepared sets) of symbolic sequences. Such an approach can distinguish between entropy and information in genes, predict the secondary structure of ribozymes, and detect the covariation between residues in folded proteins. We also review applications to molecular sequence and structure analysis, and introduce new tools in the characterization of resistance mutations, and in drug design. In a curious twist of history, the dawn of the age of genomics has both seen the rise of the science of bioinformatics as a tool to cope with the enormous amounts of data being generated daily, and the decline of the theory of information as applied to molecular biology. Hailed as a harbinger of a “new movement” (Quastler 1953) along with Cybernetics, the principles of information theory were thought to be applicable to the higher functions of living organisms, and able to analyze such functions as metabolism, growth, and differentiation (Quastler 1953). Today, the metaphors and the jargon of information theory are still widely used (Maynard Smith 1999a, 1999b), as opposed to the mathematical formalism, which is too often considered to be inapplicable to biological information. Clearly, looking back it appears that too much hope was laid upon this theory’s relevance for biology. However, there was well-founded optimism that information theory ought to be able to address the complex issues associated with the storage of information in the genetic code, only to be repeatedly questioned and rebuked (see, e.g., Vincent 1994, Sarkar 1996). In this article, I outline the concepts of entropy and information (as defined by Shannon) in the context of molecular biology. We shall see that not only are these terms well-defined and useful, they also coincide precisely with what we intuitively mean when we speak about information stored in genes, for example. I then present examples of applications of the theory to measuring the information content of biomolecules, the identification of polymorphisms, RNA and protein secondary structure prediction, the prediction and analysis of molecular interactions, and drug design. 1 Entropy and Information Entropy and information are often used in conflicting manners in the literature. A precise understanding, both mathematical and intuitive, of the notion of information (and its relationship to entropy) is crucial for applications in molecular biology. Therefore, let us begin by outlining Shannon’s original entropy concept (Shannon, 1948). 1.1 Shannon’s Uncertainty Measure Entropy in Shannon’s theory (defined mathematically below) is a measure of uncertainty about the identity of objects in an ensemble. Thus, while “en-", "title": "" } ]
[ { "docid": "d9870dc31895226f60537b3e8591f9fd", "text": "This paper reports on the design of a low phase noise 76.8 MHz AlN-on-silicon reference oscillator using SiO2 as temperature compensation material. The paper presents profound theoretical optimization of all the important parameters for AlN-on-silicon width extensional mode resonators, filling into the knowledge gap targeting the tens of megahertz frequency range for this type of resonators. Low loading CMOS cross coupled series resonance oscillator is used to reach the-state-of-the-art LTE phase noise specifications. Phase noise of 123 dBc/Hz at 1 kHz, and 162 dBc/Hz at 1 MHz offset is achieved. The oscillator's integrated root mean square RMS jitter is 106 fs (10 kHz to 20 MHz), consuming 850 μA, with startup time of 250 μs, and a figure-of-merit FOM of 216 dB. This work offers a platform for high performance MEMS reference oscillators; where, it shows the applicability of replacing bulky quartz with MEMS resonators in cellular platforms. & 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "96b2bebeea8fd724609501e753fcf324", "text": "From failures of intelligence analysis to misguided beliefs about vaccinations, biased judgment and decision making contributes to problems in policy, business, medicine, law, education, and private life. Early attempts to reduce decision biases with training met with little success, leading scientists and policy makers to focus on debiasing by using incentives and changes in the presentation and elicitation of decisions. We report the results of two longitudinal experiments that found medium to large effects of one-shot debiasing training interventions. Participants received a single training intervention, played a computer game or watched an instructional video, which addressed biases critical to intelligence analysis (in Experiment 1: bias blind spot, confirmation bias, and fundamental attribution error; in Experiment 2: anchoring, representativeness, and social projection). Both kinds of interventions produced medium to large debiasing effects immediately (games ≥ −31.94% and videos ≥ −18.60%) that persisted at least 2 months later (games ≥ −23.57% and videos ≥ −19.20%). Games that provided personalized feedback and practice produced larger effects than did videos. Debiasing effects were domain general: bias reduction occurred across problems in different contexts, and problem formats that were taught and not taught in the interventions. The results suggest that a single training intervention can improve decision making. We suggest its use alongside improved incentives, information presentation, and nudges to reduce costly errors associated with biased judgments and decisions.", "title": "" }, { "docid": "1bd75e455b57b14c2a275e50aff0d2db", "text": "Keratosis pilaris is a common skin disorder comprising less common variants and rare subtypes, including keratosis pilaris rubra, erythromelanosis follicularis faciei et colli, and the spectrum of keratosis pilaris atrophicans. Data, and critical analysis of existing data, are lacking, so the etiologies, pathogeneses, disease associations, and treatments of these clinical entities are poorly understood. 
The present article aims to fill this knowledge gap by reviewing literature in the PubMed, EMBASE, and CINAHL databases and providing a comprehensive, analytical summary of the clinical characteristics and pathophysiology of keratosis pilaris and its subtypes through the lens of disease associations, genetics, and pharmacologic etiologies. Histopathologic, genomic, and epidemiologic evidence points to keratosis pilaris as a primary disorder of the pilosebaceous unit as a result of inherited mutations or acquired disruptions in various biomolecular pathways. Recent data highlight aberrant Ras signaling as an important contributor to the pathophysiology of keratosis pilaris and its subtypes. We also evaluate data on treatments for keratosis pilaris and its subtypes, including topical, systemic, and energy-based therapies. The effectiveness of various types of lasers in treating keratosis pilaris and its subtypes deserves wider recognition.", "title": "" }, { "docid": "1c8c532c86db01056ffff2aac49fa248", "text": "In many classification problems, the input is represented as a set of features, e.g., the bag-of-words (BoW) representation of documents. Support vector machines (SVMs) are widely used tools for such classification problems. The performance of the SVMs is generally determined by whether kernel values between data points can be defined properly. However, SVMs for BoW representations have a major weakness in that the co-occurrence of different but semantically similar words cannot be reflected in the kernel calculation. To overcome the weakness, we propose a kernel-based discriminative classifier for BoW data, which we call the latent support measure machine (latent SMM). With the latent SMM, a latent vector is associated with each vocabulary term, and each document is represented as a distribution of the latent vectors for words appearing in the document. To represent the distributions efficiently, we use the kernel embeddings of distributions that hold high order moment information about distributions. Then the latent SMM finds a separating hyperplane that maximizes the margins between distributions of different classes while estimating latent vectors for words to improve the classification performance. In the experiments, we show that the latent SMM achieves state-of-the-art accuracy for BoW text classification, is robust with respect to its own hyper-parameters, and is useful to visualize words.", "title": "" }, { "docid": "33df3da22e9a24767c68e022bb31bbe5", "text": "The credit card industry has been growing rapidly recently, and thus huge numbers of consumers’ credit data are collected by the credit department of the bank. The credit scoring manager often evaluates the consumer’s credit with intuitive experience. However, with the support of the credit classification model, the manager can accurately evaluate the applicant’s credit score. Support Vector Machine (SVM) classification is currently an active research area and successfully solves classification problems in many domains. This study used three strategies to construct the hybrid SVM-based credit scoring models to evaluate the applicant’s credit score from the applicant’s input features. Two credit datasets in UCI database are selected as the experimental data to demonstrate the accuracy of the SVM classifier. Compared with neural networks, genetic programming, and decision tree classifiers, the SVM classifier achieved an identical classificatory accuracy with relatively few input features. 
Additionally, combining genetic algorithms with SVM classifier, the proposed hybrid GA-SVM strategy can simultaneously perform feature selection task and model parameters optimization. Experimental results show that SVM is a promising addition to the existing data mining methods. 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "7a076d150ecc4382c20a6ce08f3a0699", "text": "Cyber-physical system (CPS) is a new trend in the Internet-of-Things related research works, where physical systems act as the sensors to collect real-world information and communicate them to the computation modules (i.e. cyber layer), which further analyze and notify the findings to the corresponding physical systems through a feedback loop. Contemporary researchers recommend integrating cloud technologies in the CPS cyber layer to ensure the scalability of storage, computation, and cross domain communication capabilities. Though there exist a few descriptive models of the cloud-based CPS architecture, it is important to analytically describe the key CPS properties: computation, control, and communication. In this paper, we present a digital twin architecture reference model for the cloud-based CPS, C2PS, where we analytically describe the key properties of the C2PS. The model helps in identifying various degrees of basic and hybrid computation-interaction modes in this paradigm. We have designed C2PS smart interaction controller using a Bayesian belief network, so that the system dynamically considers current contexts. The composition of fuzzy rule base with the Bayes network further enables the system with reconfiguration capability. We also describe analytically, how C2PS subsystem communications can generate even more complex system-of-systems. Later, we present a telematics-based prototype driving assistance application for the vehicular domain of C2PS, VCPS, to demonstrate the efficacy of the architecture reference model.", "title": "" }, { "docid": "f6c874435978db83361f62bfe70a6681", "text": "“Microbiology Topics” discusses various topics in microbiology of practical use in validation and compliance. We intend this column to be a useful resource for daily work applications. Reader comments, questions, and suggestions are needed to help us fulfill our objective for this column. Please send your comments and suggestions to column coordinator Scott Sutton at scott. sutton@microbiol.org or journal managing editor Susan Haigney at shaigney@advanstar.com.", "title": "" }, { "docid": "1f3e2c432a5f2f1a6ffcf892c6a06eab", "text": "In this letter, we study the Ramanujan Sums (RS) transform by means of matrix multiplication. The RS are orthogonal in nature and therefore offer excellent energy conservation capability. The 1-D and 2-D forward RS transforms are easy to calculate, but their inverse transforms are not defined in the literature for non-even function <formula formulatype=\"inline\"><tex Notation=\"TeX\">$ ({\\rm mod}~ {\\rm M}) $</tex></formula>. We solved this problem by using matrix multiplication in this letter.", "title": "" }, { "docid": "6e267600a085f150fa357fb778cbebd8", "text": "The amount of data available in the internet is increasing at a very high speed. Text summarization has helped in making a better use of the information available online. Various methods were adopted to automate text summarization. However there is no existing system for summarizing Malayalam documents. 
In this paper we have investigated on developing efficient and effective methods to summarize Malayalam documents. This paper explains a statistical sentence scoring technique and a semantic graph based technique for text summarization.", "title": "" }, { "docid": "a9433e55cd58416bbe2a7b8bd0d78302", "text": "The southern Alps–Ligurian basin junction is one of the most seismically active zone of the western Europe. A constant microseismicity and moderate size events (3.5 < M < 5) are regularly recorded. The last reported historical event took place in February 1887 and reached an estimated magnitude between 6 and 6.5, causing human losses and extensive damages (intensity X, Medvedev–Sponheuer–Karnik). Such an event, occurring nowadays, could have critical consequences given the high density of population living on the French and Italian Riviera. We study the case of an offshore Mw 6.3 earthquake located at the place where two moderate size events (Mw 4.5) occurred recently and where a morphotectonic feature has been detected by a bathymetric survey. We used a stochastic empirical Green’s functions (EGFs) summation method to produce a population of realistic accelerograms on rock and soil sites in the city of Nice. The ground motion simulations are calibrated on a rock site with a set of ground motion prediction equations (GMPEs) in order to estimate a reasonable stress-drop ratio between the February 25th, 2001, Mw 4.5, event taken as an EGF and the target earthquake. Our results show that the combination of the GMPEs and EGF techniques is an interesting tool for site-specific strong ground motion estimation.", "title": "" }, { "docid": "513aec195a654a3c89de60ce9e4a52c9", "text": "Adversarial examples and poisoning attacks have become indisputable threats to the security of modern AI systems based on deep neural networks (DNNs). The Adversarial Robustness Toolbox (ART) is a Python library designed to support researchers and developers in creating novel defence techniques, as well as in deploying practical defences of real-world AI systems. Researchers can use ART to benchmark novel defences against the state-of-the-art. For developers, the library provides interfaces which support the composition of comprehensive defence systems using individual methods as building blocks. The Adversarial Robustness Toolbox supports machine learning models (and deep neural networks (DNNs) specifically) implemented in any of the most popular deep learning frameworks (TensorFlow, Keras, PyTorch and MXNet). Currently, the library is primarily intended to improve the adversarial robustness of visual recognition systems, however, future releases that will comprise adaptations to other data modes (such as speech, text or time series) are envisioned. The ART source code is released (https://github.com/IBM/adversarial-robustness-toolbox) under an MIT license. The release includes code examples and extensive documentation (http://adversarial-robustness-toolbox.readthedocs.io) to help researchers and developers get quickly started.
", "title": "" }, { "docid": "521bab3f363637e0b8d8d8a830816c9b", "text": "We address the task of Named Entity Disambiguation (NED) for noisy text. We present WikilinksNED, a large-scale NED dataset of text fragments from the web, which is significantly noisier and more challenging than existing news-based datasets. To capture the limited and noisy local context surrounding each mention, we design a neural model and train it with a novel method for sampling informative negative examples. We also describe a new way of initializing word and entity embeddings that significantly improves performance. Our model significantly outperforms existing state-of-the-art methods on WikilinksNED while achieving comparable performance on a smaller newswire dataset.", "title": "" }, { "docid": "37ed4c0703266525a7d62ca98dd65e0f", "text": "Social cognition in humans is distinguished by psychological processes that allow us to make inferences about what is going on inside other people-their intentions, feelings, and thoughts. Some of these processes likely account for aspects of human social behavior that are unique, such as our culture and civilization. Most schemes divide social information processing into those processes that are relatively automatic and driven by the stimuli, versus those that are more deliberative and controlled, and sensitive to context and strategy. These distinctions are reflected in the neural structures that underlie social cognition, where there is a recent wealth of data primarily from functional neuroimaging. Here I provide a broad survey of the key abilities, processes, and ways in which to relate these to data from cognitive neuroscience.", "title": "" }, { "docid": "27a28b74cd2c42c19fcb31c7e3c4ac67", "text": "The backpropagation of error algorithm (BP) is impossible to implement in a real brain. The recent success of deep networks in machine learning and AI, however, has inspired proposals for understanding how the brain might learn across multiple layers, and hence how it might approximate BP. As of yet, none of these proposals have been rigorously evaluated on tasks where BP-guided deep learning has proved critical, or in architectures more structured than simple fully-connected networks. Here we present results on scaling up biologically motivated models of deep learning on datasets which need deep networks with appropriate architectures to achieve good performance. We present results on the MNIST, CIFAR-10, and ImageNet datasets and explore variants of target-propagation (TP) and feedback alignment (FA) algorithms, and explore performance in both fully- and locally-connected architectures. We also introduce weight-transport-free variants of difference target propagation (DTP) modified to remove backpropagation from the penultimate layer. Many of these algorithms perform well for MNIST, but for CIFAR and ImageNet we find that TP and FA variants perform significantly worse than BP, especially for networks composed of locally connected units, opening questions about whether new architectures and algorithms are required to scale these approaches.
Our results and implementation details help establish baselines for biologically motivated deep learning schemes going forward.", "title": "" }, { "docid": "f099eeead6741665f061fcfe736c5c9f", "text": "For many applications, in particular in natural science, the task is to determine hidden system parameters from a set of measurements. Often, the forward process from parameterto measurement-space is well-defined, whereas the inverse problem is ambiguous: multiple parameter sets can result in the same measurement. To fully characterize this ambiguity, the full posterior parameter distribution, conditioned on an observed measurement, has to be determined. We argue that a particular class of neural networks is well suited for this task – so-called Invertible Neural Networks (INNs). Unlike classical neural networks, which attempt to solve the ambiguous inverse problem directly, INNs focus on learning the forward process, using additional latent output variables to capture the information otherwise lost. Due to invertibility, a model of the corresponding inverse process is learned implicitly. Given a specific measurement and the distribution of the latent variables, the inverse pass of the INN provides the full posterior over parameter space. We prove theoretically and verify experimentally, on artificial data and real-world problems from medicine and astrophysics, that INNs are a powerful analysis tool to find multi-modalities in parameter space, uncover parameter correlations, and identify unrecoverable parameters.", "title": "" }, { "docid": "4f2dfce9c09c62a314143353fb3e3bb5", "text": "Same-sex marriage, barely on the political radar a decade ago, is a reality in America. How will it affect the well-being of children? Some observers worry that legalizing same-sex marriage would send the message that same-sex parenting and opposite-sex parenting are interchangeable, when in fact they may lead to different outcomes for children. To evaluate that concern, William Meezan and Jonathan Rauch review the growing body of research on how same-sex parenting affects children. After considering the methodological problems inherent in studying small, hard-to-locate populations--problems that have bedeviled this literature-the authors find that the children who have been studied are doing about as well as children normally do. What the research does not yet show is whether the children studied are typical of the general population of children raised by gay and lesbian couples. A second important question is how same-sex marriage might affect children who are already being raised by same-sex couples. Meezan and Rauch observe that marriage confers on children three types of benefits that seem likely to carry over to children in same-sex families. First, marriage may increase children's material well-being through such benefits as family leave from work and spousal health insurance eligibility. It may also help ensure financial continuity, should a spouse die or be disabled. Second, same-sex marriage may benefit children by increasing the durability and stability of their parents' relationship. Finally, marriage may bring increased social acceptance of and support for same-sex families, although those benefits might not materialize in communities that meet same-sex marriage with rejection or hostility. The authors note that the best way to ascertain the costs and benefits of the effects of same-sex marriage on children is to compare it with the alternatives. 
Massachusetts is marrying same-sex couples, Vermont and Connecticut are offering civil unions, and several states offer partner-benefit programs. Studying the effect of these various forms of unions on children could inform the debate over gay marriage to the benefit of all sides of the argument.", "title": "" }, { "docid": "6b693af5ed67feab686a9a92e4329c94", "text": "Physicians and nurses express their judgments and observations towards a patient’s health status in clinical narratives. Thus, their judgments are explicitly or implicitly included in patient records. To get impressions on the current health situation of a patient or on changes in the status, analysis and retrieval of this subjective content is crucial. In this paper, we approach this question as sentiment analysis problem and analyze the feasibility of assessing these judgments in clinical text by means of general sentiment analysis methods. Specifically, the word usage in clinical narratives and in a general text corpus is compared. The linguistic characteristics of judgments in clinical narratives are collected. Besides, the requirements for sentiment analysis and retrieval from clinical narratives are derived.", "title": "" }, { "docid": "fcd0c523e74717c572c288a90c588259", "text": "From analyzing 100 assessments of coping, the authors critiqued strategies and identified best practices for constructing category systems. From current systems, a list of 400 ways of coping was compiled. For constructing lower order categories, the authors concluded that confirmatory factor analysis should replace the 2 most common strategies (exploratory factor analysis and rational sorting). For higher order categories, they recommend that the 3 most common distinctions (problem- vs. emotion-focused, approach vs. avoidance, and cognitive vs. behavioral) no longer be used. Instead, the authors recommend hierarchical systems of action types (e.g., proximity seeking, accommodation). From analysis of 6 such systems, 13 potential core families of coping were identified. Future steps involve deciding how to organize these families, using their functional homogeneity and distinctiveness, and especially their links to adaptive processes.", "title": "" }, { "docid": "c0bf378bd6c763b83249163733c21f07", "text": "Although videos appear to be very high-dimensional in terms of duration × frame-rate × resolution, temporal smoothness constraints ensure that the intrinsic dimensionality for videos is much lower. In this paper, we use this idea for investigating Domain Adaptation (DA) in videos, an area that remains under-explored. An approach that has worked well for the image DA is based on the subspace modeling of the source and target domains, which works under the assumption that the two domains share a latent subspace where the domain shift can be reduced or eliminated. In this paper, first we extend three subspace based image DA techniques for human action recognition and then combine it with our proposed Eclectic Domain Mixing (EDM) approach to improve the effectiveness of the DA. Further, we use discrepancy measures such as Symmetrized KL Divergence and Target Density Around Source for empirical study of the proposed EDM approach. While, this work mainly focuses on Domain Adaptation in videos, for completeness of the study, we comprehensively evaluate our approach using both object and action datasets. 
In this paper, we have achieved consistent improvements over chosen baselines and obtained some state-of-the-art results for the datasets.", "title": "" } ]
scidocsrr
3be7c032174e7c0804da33209d27ac8d
No Dutch Book can be built against the TBM even though update is not obtained by Bayes rule of conditioning
[ { "docid": "7cf625ce06d335d7758c868514b4c635", "text": "Jeffrey's rule of conditioning has been proposed in order to revise a probability measure by another probability function. We generalize it within the framework of the models based on belief functions. We show that several forms of Jeffrey's conditionings can be defined that correspond to the geometrical rule of conditioning and to Dempster's rule of conditioning, respectively. 1. Jeffrey's rule in probability theory. In probability theory conditioning on an event . is classically obtained by the application of Bayes' rule. Let (Q, � , P) be a probability space where P(A) is the probability of the event Ae � where� is a Boolean algebra defined on a finite2 set n. P(A) quantified the degree of belief or the objective probability, depending on the interpretation given to the probability measure, that a particular arbitrary element m of n which is not a priori located in any of the sets of� belongs to a particular set Ae�. Suppose it is known that m belongs to Be� and P(B)>O. The probability measure P must be updated into PB that quantifies the same event as previously but after taking in due consideration the know ledge that me B. PB is obtained by Bayes' rule of conditioning: This rule can be obtained by requiring that: 81: VBE�. PB(B) = 1 82: VBe�, VX,Ye� such that X.Y�B. and PJ3(X) _ P(X) PB(Y)P(Y) PB(Y) = 0 ifP(Y)>O", "title": "" } ]
[ { "docid": "ee6612fa13482f7e3bbc7241b9e22297", "text": "The MOND limit is shown to follow from a requirement of space-time scale invariance of the equations of motion for nonrelativistic, purely gravitational systems; i.e., invariance of the equations of motion under (t, r) → (λt, λr) in the limit a 0 → ∞. It is suggested that this should replace the definition of the MOND limit based on the asymptotic behavior of a Newtonian-MOND interpolating function. In this way, the salient, deep-MOND results–asymptotically flat rotation curves, the mass-rotational-speed relation (baryonic Tully-Fisher relation), the Faber-Jackson relation, etc.–follow from a symmetry principle. For example, asymptotic flatness of rotation curves reflects the fact that radii change under scaling, while velocities do not. I then comment on the interpretation of the deep-MOND limit as one of \" zero mass \" : Rest masses, whose presence obstructs scaling symmetry, become negligible compared to the \" phantom \" , dynamical masses–those that some would attribute to dark matter. Unlike the former masses, the latter transform in a way that is consistent with the symmetry. Finally, I discuss the putative MOND-cosmology connection, in particular the possibility that MOND-especially the deep-MOND limit– is related to the asymptotic de Sitter geometry of our universe. I point out, in this connection, the possible relevance of a (classical) de Sitter-conformal-field-theory (dS/CFT) correspondence.", "title": "" }, { "docid": "7974d0299ffcca73bb425fb72f463429", "text": "The development of human gut microbiota begins as soon as the neonate leaves the protective environment of the uterus (or maybe in-utero) and is exposed to innumerable microorganisms from the mother as well as the surrounding environment. Concurrently, the host responses to these microbes during early life manifest during the development of an otherwise hitherto immature immune system. The human gut microbiome, which comprises an extremely diverse and complex community of microorganisms inhabiting the intestinal tract, keeps on fluctuating during different stages of life. While these deviations are largely natural, inevitable and benign, recent studies show that unsolicited perturbations in gut microbiota configuration could have strong impact on several features of host health and disease. Our microbiota undergoes the most prominent deviations during infancy and old age and, interestingly, our immune health is also in its weakest and most unstable state during these two critical stages of life, indicating that our microbiota and health develop and age hand-in-hand. However, the mechanisms underlying these interactions are only now beginning to be revealed. The present review summarizes the evidences related to the age-associated changes in intestinal microbiota and vice-versa, mechanisms involved in this bi-directional relationship, and the prospective for development of microbiota-based interventions such as probiotics for healthy aging.", "title": "" }, { "docid": "8e8f3d504bdeb2b6c4b86999df3ece67", "text": "Software released in binary form frequently uses third-party packages without respecting their licensing terms. For instance, many consumer devices have firmware containing the Linux kernel, without the suppliers following the requirements of the GNU General Public License. Such license violations are often accidental, e.g., when vendors receive binary code from their suppliers with no indication of its provenance. 
To help find such violations, we have developed the Binary Analysis Tool (BAT), a system for code clone detection in binaries. Given a binary, such as a firmware image, it attempts to detect cloning of code from repositories of packages in source and binary form. We evaluate and compare the effectiveness of three of BAT's clone detection techniques: scanning for string literals, detecting similarity through data compression, and detecting similarity by computing binary deltas.", "title": "" }, { "docid": "767e133857d336e73d04e0ae5e924283", "text": "OVERVIEW 376 THEORETICAL PERSPECTIVES ON MEDIA USE AND EFFECTS 377 Social Cognitive Theory 377 Parasocial Relationships and Parasocial Interactions 377 Cognitive Approaches 378 The Cultivation Hypothesis 378 Uses and Gratification Theory 379 Arousal Theory 379 Psychoanalytic Theory 379 Behaviorism and Classical Conditioning 379 Summary 380 THE HISTORY AND EVOLUTION OF MEDIA PLATFORMS 380 THE ECOLOGY OF THE DIGITALWORLD 381 Media Access 381 Defining Media Exposure 382 Measuring Media Use and Exposure 383 THE DISAPPEARANCE OF QUIET ENVIRONMENTS 386 Media, Imaginative Play, Creativity, and Daydreaming 386 Media and Sleep Patterns 388 Media and Concentration 389 THE SOCIAL NATURE OF MEDIA ENVIRONMENTS: ELECTRONIC FRIENDS AND COMMUNICATIONS 389 Prosocial Media: “It’s a Beautiful Day in the Neighborhood” 390 Parasocial Relationships With Media Characters 392 Social Media: Being and Staying Connected 393 THE MEAN AND SCARY WORLD: MEDIA VIOLENCE AND SCARY CONTENT 394 Media Violence 394 Children’s Fright Reactions to Scary Media Content 396 MEDIA, GENDER, AND SEXUALITY 397 Gender-Stereotyped Content 397 Influences of Media on Gender-Related Processing and Outcomes 398 Sexual Content 400 Influences of Sexual Content on Children 400 FROM OUTDOOR TO INDOOR ENVIRONMENTS: THE OBESITY EPIDEMIC 401 The Content of Food and Beverage Advertisements 402 Energy Intake: Media Influences on Children’s Diets and Health Outcomes 402 Media-Related Caloric Expenditure 403 Summary 403 RISKY MEDIA ENVIRONMENTS: ALCOHOL, TOBACCO, AND ILLEGAL DRUGS 404 The Content: Exposure to Risky Behaviors 404 Influences of Exposure to Alcohol, Tobacco, and Illegal Drugs on Children 404 MEDIA POLICY 404 Early Media Exposure 405 The V-Chip 405 Media Violence 405 Regulating Sexual Content 405 The Commercialization of Childhood 406 Driving Hazards 407 The Children’s Television Act 407 CONCLUSIONS 407 REFERENCES 408", "title": "" }, { "docid": "bb95c0246cbd1238ad4759f488763c37", "text": "The massive scale of future wireless networks will cause computational bottlenecks in performance optimization. In this paper, we study the problem of connecting mobile traffic to Cloud RAN (C-RAN) stations. To balance station load, we steer the traffic by designing device association rules. The baseline association rule connects each device to the station with the strongest signal, which does not account for interference or traffic hot spots, and leads to load imbalances and performance deterioration. Instead, we can formulate an optimization problem to decide centrally the best association rule at each time instance. However, in practice this optimization has such high dimensions, that even linear programming solvers fail to solve. To address the challenge of massive connectivity, we propose an approach based on the theory of optimal transport, which studies the economical transfer of probability between two distributions. 
Our proposed methodology can further inspire scalable algorithms for massive optimization problems in wireless networks.", "title": "" }, { "docid": "f2274a04e0a54fb5a46e2be99863d9ac", "text": "I find that dialysis providers in the United States exercise market power by reducing the clinical quality, or dose, of dialysis treatment. This market power stems from two sources. The first is a spatial dimension—patients face high travel costs and traveling farther for quality is undesirable. The second source is congestion—technological constraints may require dialysis capacity to be rationed among patients. Both of these sources of market power should be considered when developing policies aimed at improving quality or access in this industry. To this end, I develop and estimate an entry game with quality competition where providers choose both capacity and quality. Increasing the Medicare reimbursement rate for dialysis or subsidizing entry result in increased entry and improved quality for patients. However, these policies are extremely costly because providers are able to capture 84 to 97 percent of the additional surplus, leaving very little pass-through to consumers. Policies targeting the sources of market power provide a cost effective way of improving quality by enhancing competition and forcing providers to give up producer surplus. For example, I find that a program subsidizing patient travel costs $373 million, increases consumer surplus by $440 million, and reduces the mortality rate by 3 percent.", "title": "" }, { "docid": "59bd3e5db7291e43a8439e63d957aa31", "text": "Semi-supervised classifier design that simultaneously utilizes both labeled and unlabeled samples is a major research issue in machine learning. Existing semisupervised learning methods belong to either generative or discriminative approaches. This paper focuses on probabilistic semi-supervised classifier design and presents a hybrid approach to take advantage of the generative and discriminative approaches. Our formulation considers a generative model trained on labeled samples and a newly introduced bias correction model. Both models belong to the same model family. The proposed hybrid model is constructed by combining both generative and bias correction models based on the maximum entropy principle. The parameters of the bias correction model are estimated by using training data, and combination weights are estimated so that labeled samples are correctly classified. We use naive Bayes models as the generative models to apply the hybrid approach to text classification problems. In our experimental results on three text data sets, we confirmed that the proposed method significantly outperformed pure generative and discriminative methods when the classification performances of the both methods were comparable.", "title": "" }, { "docid": "681360f20a662f439afaaa022079f7c0", "text": "We present a multi-PC/camera system that can perform 3D reconstruction and ellipsoids fitting of moving humans in real time. The system consists of five cameras.
Each camera is connected to a PC which locally extracts the silhouettes of the moving person in the image captured by the camera. The five silhouette images are then sent, via local network, to a host computer to perform 3D voxel-based reconstruction by an algorithm called SPOT. Ellipsoids are then used to fit the reconstructed data. By using a simple and user-friendly interface, the user can display and observe, in real time and from any view-point, the 3D models of the moving human body. With a rate of higher than 15 frames per second, the system is able to capture nonintrusively sequence of human motions.", "title": "" }, { "docid": "d0da33c18339070575bf1244e93c81fe", "text": "The web provides an unprecedented opportunity to evaluate ideas quickly using controlled experiments, also called randomized experiments (single factor or factorial designs), A/B tests (and their generalizations), split tests, Control/Treatment tests, and parallel flights. Controlled experiments embody the best scientific design for establishing a causal relationship between changes and their influence on user-observable behavior. We provide a practical guide to conducting online experiments, where end-users can help guide the development of features. Our experience indicates that significant learning and return-on-investment (ROI) are seen when development teams listen to their customers, not to the Highest Paid Person's Opinion (HiPPO). We provide several examples of controlled experiments with surprising results. We review the important ingredients of running controlled experiments, and discuss their limitations (both technical and organizational). We focus on several areas that are critical to experimentation, including statistical power, sample size, and techniques for variance reduction. We describe common architectures for experimentation systems and analyze their advantages and disadvantages. We evaluate randomization and hashing techniques, which we show are not as simple in practice as is often assumed. Controlled experiments typically generate large amounts of data, which can be analyzed using data mining techniques to gain deeper understanding of the factors influencing the outcome of interest, leading to new hypotheses and creating a virtuous cycle of improvements. Organizations that embrace controlled experiments with clear evaluation criteria can evolve their systems with automated optimizations and real-time analyses. Based on our extensive practical experience with multiple systems and organizations, we share key lessons that will help practitioners in running trustworthy controlled experiments.", "title": "" }, { "docid": "a0c42d2b0ffd4a784c016663dfb6bb4e", "text": "This paper presents a system framework taking the advantages of the WSN for the real-time monitoring on the water quality in aquaculture. We design the structure of the wireless sensor network to collect and continuously transmit data to the monitoring software. Then we accomplish the configuration model in the software that enhances the reuse and facility of the monitoring project. Moreover, the monitoring software developed to represent the monitoring hardware and data visualization, and analyze the data with expert knowledge to implement the auto control. The monitoring system has been realization of the digital, intelligent, and effectively ensures the quality of aquaculture water.
Practical deployment results are to show the system reliability and real-time characteristics, and to display good effect on environmental monitoring of water quality.", "title": "" }, { "docid": "423f246065662358b1590e8f59a2cc55", "text": "Caused by the rising interest in traffic surveillance for simulations and decision management many publications concentrate on automatic vehicle detection or tracking. Quantities and velocities of different car classes form the data basis for almost every traffic model. Especially during mass events or disasters a wide-area traffic monitoring on demand is needed which can only be provided by airborne systems. This means a massive amount of image information to be handled. In this paper we present a combination of vehicle detection and tracking which is adapted to the special restrictions given on image size and flow but nevertheless yields reliable information about the traffic situation. Combining a set of modified edge filters it is possible to detect cars of different sizes and orientations with minimum computing effort, if some a priori information about the street network is used. The found vehicles are tracked between two consecutive images by an algorithm using Singular Value Decomposition. Concerning their distance and correlation the features are assigned pairwise with respect to their global positioning among each other. Choosing only the best correlating assignments it is possible to compute reliable values for the average velocities.", "title": "" }, { "docid": "28a481f51a7d673d1acb396d8b9c25fb", "text": "This study investigated the combination of mothers' and fathers' parenting styles (affection, behavioral control, and psychological control) that would be most influential in predicting their children's internal and external problem behaviors. A total of 196 children (aged 5-6 years) were followed up six times from kindergarten to the second grade to measure their problem behaviors. Mothers and fathers filled in a questionnaire measuring their parenting styles once every year. The results showed that a high level of psychological control exercised by mothers combined with high affection predicted increases in the levels of both internal and external problem behaviors among children. Behavioral control exercised by mothers decreased children's external problem behavior but only when combined with a low level of psychological control.", "title": "" }, { "docid": "da63f023a1fd1f646deb5b2908e8634f", "text": "This paper presents a new algorithm for smoothing 3D binary images in a topology preserving way. Our algorithm is a reduction operator: some border points that are considered as extremities are removed. The proposed method is composed of two parallel reduction operators. We are to apply our smoothing algorithm as an iterationby-iteration pruning for reducing the noise sensitivity of 3D parallel surface-thinning algorithms. An efficient implementation of our algorithm is sketched and its topological correctness for (26,6) pictures is proved.", "title": "" }, { "docid": "53633432216e383297e401753332b00a", "text": "Frequency tagging of sensory inputs (presenting stimuli that fluctuate periodically at rates to which the cortex can phase lock) has been used to study attentional modulation of neural responses to inputs in different sensory modalities. For visual inputs, the visual steady-state response (VSSR) at the frequency modulating an attended object is enhanced, while the VSSR to a distracting object is suppressed. 
In contrast, the effect of attention on the auditory steady-state response (ASSR) is inconsistent across studies. However, most auditory studies analyzed results at the sensor level or used only a small number of equivalent current dipoles to fit cortical responses. In addition, most studies of auditory spatial attention used dichotic stimuli (independent signals at the ears) rather than more natural, binaural stimuli. Here, we asked whether these methodological choices help explain discrepant results. Listeners attended to one of two competing speech streams, one simulated from the left and one from the right, that were modulated at different frequencies. Using distributed source modeling of magnetoencephalography results, we estimate how spatially directed attention modulates the ASSR in neural regions across the whole brain. Attention enhances the ASSR power at the frequency of the attended stream in contralateral auditory cortex. The attended-stream modulation frequency also drives phase-locked responses in the left (but not right) precentral sulcus (lPCS), a region implicated in control of eye gaze and visual spatial attention. Importantly, this region shows no phase locking to the distracting stream. Results suggest that the lPCS in engaged in an attention-specific manner. Modeling results that take account of the geometry and phases of the cortical sources phase locked to the two streams (including hemispheric asymmetry of lPCS activity) help to explain why past ASSR studies of auditory spatial attention yield seemingly contradictory results.", "title": "" }, { "docid": "c273620e05cc5131e8c6d58b700a0aab", "text": "Differential evolution has been shown to be an effective methodology for solving optimization problems over continuous space. In this paper, we propose an eigenvector-based crossover operator. The proposed operator utilizes eigenvectors of covariance matrix of individual solutions, which makes the crossover rotationally invariant. More specifically, the donor vectors during crossover are modified, by projecting each donor vector onto the eigenvector basis that provides an alternative coordinate system. The proposed operator can be applied to any crossover strategy with minimal changes. The experimental results show that the proposed operator significantly improves DE performance on a set of 54 test functions in CEC 2011, BBOB 2012, and CEC 2013 benchmark sets.", "title": "" }, { "docid": "08b5bff9f96619083c16607090311345", "text": "This demo presents a prototype mobile app that provides out-of-the-box personalised content recommendations to its users by leveraging and combining the user's location, their Facebook and/or Twitter feed and their in-app actions to automatically infer their interests. We build individual models for each user and each location. At retrieval time we construct the user's personalised feed by mixing different sources of content-based recommendations with content directly from their Facebook/Twitter feeds, locally trending articles and content propagated through their in-app social network. Both explicit and implicit feedback signals from the users' interactions with their recommendations are used to update their interests models and to learn their preferences over the different content sources.", "title": "" }, { "docid": "e67f95384ce816124648cdc33cd7091c", "text": "A high-efficiency push-pull power amplifier has been designed and measured across a bandwidth of 250MHz to 3.1GHz. 
The output power was 46dBm with a drain efficiency of above 45% between 700MHz and 2GHz, with a minimum output power of 43dBm across the entire band. In addition, a minimum of 60% drain efficiency and 11dB transducer gain was measured between 350MHz and 1GHz. The design was realized using a coaxial cable transmission line balun, which provides a broadband 2∶1 impedance transformation ratio and reduces the need for bandwidth-limiting conventional matching. The combination of output power, bandwidth and efficiency are believed to be the best reported to date at these frequencies.", "title": "" }, { "docid": "142c5598f0a8b95b5d4f3e5656a857a9", "text": "Flavanols from chocolate appear to increase nitric oxide bioavailability, protect vascular endothelium, and decrease cardiovascular disease (CVD) risk factors. We sought to test the effect of flavanol-rich dark chocolate (FRDC) on endothelial function, insulin sensitivity, beta-cell function, and blood pressure (BP) in hypertensive patients with impaired glucose tolerance (IGT). After a run-in phase, 19 hypertensives with IGT (11 males, 8 females; 44.8 +/- 8.0 y) were randomized to receive isocalorically either FRDC or flavanol-free white chocolate (FFWC) at 100 g/d for 15 d. After a wash-out period, patients were switched to the other treatment. Clinical and 24-h ambulatory BP was determined by sphygmometry and oscillometry, respectively, flow-mediated dilation (FMD), oral glucose tolerance test, serum cholesterol and C-reactive protein, and plasma homocysteine were evaluated after each treatment phase. FRDC but not FFWC ingestion decreased insulin resistance (homeostasis model assessment of insulin resistance; P < 0.0001) and increased insulin sensitivity (quantitative insulin sensitivity check index, insulin sensitivity index (ISI), ISI(0); P < 0.05) and beta-cell function (corrected insulin response CIR(120); P = 0.035). Systolic (S) and diastolic (D) BP decreased (P < 0.0001) after FRDC (SBP, -3.82 +/- 2.40 mm Hg; DBP, -3.92 +/- 1.98 mm Hg; 24-h SBP, -4.52 +/- 3.94 mm Hg; 24-h DBP, -4.17 +/- 3.29 mm Hg) but not after FFWC. Further, FRDC increased FMD (P < 0.0001) and decreased total cholesterol (-6.5%; P < 0.0001), and LDL cholesterol (-7.5%; P < 0.0001). Changes in insulin sensitivity (Delta ISI - Delta FMD: r = 0.510, P = 0.001; Delta QUICKI - Delta FMD: r = 0.502, P = 0.001) and beta-cell function (Delta CIR(120) - Delta FMD: r = 0.400, P = 0.012) were directly correlated with increases in FMD and inversely correlated with decreases in BP (Delta ISI - Delta 24-h SBP: r = -0.368, P = 0.022; Delta ISI - Delta 24-h DBP r = -0.384, P = 0.017). Thus, FRDC ameliorated insulin sensitivity and beta-cell function, decreased BP, and increased FMD in IGT hypertensive patients. These findings suggest flavanol-rich, low-energy cocoa food products may have a positive impact on CVD risk factors.", "title": "" }, { "docid": "2364fc795ff8e449a557eda4b498b42d", "text": "With the increasing utilization and popularity of the cloud infrastructure, more and more data are moved to the cloud storage systems. This makes the availability of cloud storage services critically important, particularly given the fact that outages of cloud storage services have indeed happened from time to time. Thus, solely depending on a single cloud storage provider for storage services can risk violating the service-level agreement (SLA) due to the weakening of service availability. 
This has led to the notion of Cloud-of-Clouds, where data redundancy is introduced to distribute data among multiple independent cloud storage providers, to address the problem. The key in the effectiveness of the Cloud-of-Clouds approaches lies in how the data redundancy is incorporated and distributed among the clouds. However, the existing Cloud-of-Clouds approaches utilize either replication or erasure codes to redundantly distribute data across multiple clouds, thus incurring either high space or high performance overheads. In this paper, we propose a hybrid redundant data distribution approach, called HyRD, to improve the cloud storage availability in Cloud-of-Clouds by exploiting the workload characteristics and the diversity of cloud providers. In HyRD, large files are distributed in multiple cost-efficient cloud storage providers with erasure-coded data redundancy while small files and file system metadata are replicated on multiple high-performance cloud storage providers. The experiments conducted on our lightweight prototype implementation of HyRD show that HyRD improves the cost efficiency by 33.4 and 20.4 percent, and reduces the access latency by 58.7 and 34.8 percent than the DuraCloud and RACS schemes, respectively.", "title": "" }, { "docid": "09c5bfd9c7fcd78f15db76e8894751de", "text": "Recently, active suspension is gaining popularity in commercial automobiles. To develop the control methodologies for active suspension control, a quarter-car test bed was built employing a direct-drive tubular linear brushless permanent-magnet motor (LBPMM) as a force-generating component. Two accelerometers and a linear variable differential transformer (LVDT) are used in this quarter-car test bed. Three pulse-width-modulation (PWM) amplifiers supply the currents in three phases. Simulated road disturbance is generated by a rotating cam. Modified lead-lag control, linear-quadratic (LQ) servo control with a Kalman filter, fuzzy control methodologies were implemented for active-suspension control. In the case of fuzzy control, an asymmetric membership function was introduced to eliminate the DC offset in sensor data and to reduce the discrepancy in the models. This controller could attenuate road disturbance by up to 77% in the sprung mass velocity and 69% in acceleration. The velocity and the acceleration data of the sprung mass are presented to compare the controllers' performance in the ride comfort of a vehicle. Both simulation and experimental results are presented to demonstrate the effectiveness of these control methodologies.", "title": "" } ]
scidocsrr
ba3a1bfaa8b3054d3bd5821ac2870b8c
Relaxed online SVMs for spam filtering
[ { "docid": "5dc901d0d82147a7098a63c0d1617649", "text": "Naive Bayes is very popular in commercial and open-source anti-spam e-mail filters. There are, however, several forms of Naive Bayes, something the anti-spam literature does not always acknowledge. We discuss five different versions of Naive Bayes, and compare them on six new, non-encoded datasets, that contain ham messages of particular Enron users and fresh spam messages. The new datasets, which we make publicly available, are more realistic than previous comparable benchmarks, because they maintain the temporal order of the messages in the two categories, and they emulate the varying proportion of spam and ham messages that users receive over time. We adopt an experimental procedure that emulates the incremental training of personalized spam filters, and we plot roc curves that allow us to compare the different versions of nb over the entire tradeoff between true positives and true negatives.", "title": "" } ]
[ { "docid": "eac2100a0fa189aecc148b70e113a0b0", "text": "Zolt ́n Dörnyei Language Teaching / Volume 31 / Issue 03 / July 1998, pp 117 ­ 135 DOI: 10.1017/S026144480001315X, Published online: 12 June 2009 Link to this article: http://journals.cambridge.org/abstract_S026144480001315X How to cite this article: Zolt ́n Dörnyei (1998). Motivation in second and foreign language learning. Language Teaching, 31, pp 117­135 doi:10.1017/S026144480001315X Request Permissions : Click here", "title": "" }, { "docid": "8117b4daeac4cca15a4be1ee84b0e65f", "text": "Multi-Attribute Trade-Off Analysis (MATA) provides decision-makers with an analytical tool to identify Pareto Superior options for solving a problem with conflicting objectives or attributes. This technique is ideally suited to electric distribution systems, where decision-makers must choose investments that will ensure reliable service at reasonable cost. This paper describes the application of MATA to an electric distribution system facing dramatic growth, the Abu Dhabi Distribution Company (ADDC) in the United Arab Emirates. ADDC has a range of distribution system design options from which to choose in order to meet this growth. The distribution system design options have different levels of service quality (i.e., reliability) and service cost. Management can use MATA to calculate, summarize and compare the service quality and service cost attributes of the various design options. The Pareto frontier diagrams present management with clear, simple pictures of the trade-offs between service cost and service quality.", "title": "" }, { "docid": "e3d9d30900b899bcbf54cbd1b5479713", "text": "A new test method has been implemented for testing the EMC performance of small components like small connectors and IC's, mainly used in mobile applications. The test method is based on the EMC-stripline method. Both emission and immunity can be tested up to 6GHz, based on good RF matching conditions and with high field strengths.", "title": "" }, { "docid": "41a1d736a48c9f18a5d7f48400179850", "text": "DNA complexes, like the double crossover, are used as building blocks for the assembly of higher-order structures. Currently, the number of experimentally proven reliable complexes is small. We have begun work on expanding the collection of such complexes. Here we report on our design concepts and initial experiments. In particular, we present experimental evidence of two new complexes: quadruple crossovers and triangles. In principle, quadruple crossovers can be extended to three-dimensional, spacefilling lego brick complexes, while triangles are capable of hexagonally tiling the plane.", "title": "" }, { "docid": "700eb7f86bc3b815cddb460ba1e0c92b", "text": "Information centric network (ICN) is progressively becoming the revolutionary paradigm to the traditional Internet with improving data (content) distribution on the Internet along with global unique names. Some ICN-based architecture, such as named data network (NDN) and content centric network (CCN) has recently been developed to deal with prominent advantages to implement the basic idea of ICN. To improve the Internet services, its architecture design is shifting from host-centric (end-to-end) communication to receive-driven content retrieval. A prominent advantage of this novel architecture is that networks are equipped with transparent in-network caching to accelerate the content dissemination and improve the utilization of network resources. 
The gigantic increase of global network traffic poses new challenges to CCN caching technologies. It requires extensive flexibility for consumers to get information. One of the most imperative commonalities of CCN design is ubiquitous caching. It is broadly accepted that the in-network caching would improve the performance. ICN cache receives on several new characteristics: cache is ubiquitous, cache is transparent to application, and content to be cached is more significant. This paper presents a complete survey of state-of-art CCN-based probabilistic caching schemes aiming to address the caching issues, with certain focus on minimizing cache redundancy and improving the accessibility of cached content.", "title": "" }, { "docid": "c77042cb1a8255ac99ebfbc74979c3c6", "text": "Machine translation systems require semantic knowledge and grammatical understanding. Neural machine translation (NMT) systems often assume this information is captured by an attention mechanism and a decoder that ensures fluency. Recent work has shown that incorporating explicit syntax alleviates the burden of modeling both types of knowledge. However, requiring parses is expensive and does not explore the question of what syntax a model needs during translation. To address both of these issues we introduce a model that simultaneously translates while inducing dependency trees. In this way, we leverage the benefits of structure while investigating what syntax NMT must induce to maximize performance. We show that our dependency trees are 1. language pair dependent and 2. improve translation quality.", "title": "" }, { "docid": "091d9afe87fa944548b9f11386112d6e", "text": "In a cognitive radio network, the secondary users are allowed to utilize the frequency bands of primary users when these bands are not currently being used. To support this spectrum reuse functionality, the secondary users are required to sense the radio frequency environment, and once the primary users are found to be active, the secondary users are required to vacate the channel within a certain amount of time. Therefore, spectrum sensing is of significant importance in cognitive radio networks. There are two parameters associated with spectrum sensing: probability of detection and probability of false alarm. The higher the probability of detection, the better the primary users are protected. However, from the secondary users' perspective, the lower the probability of false alarm, the more chances the channel can be reused when it is available, thus the higher the achievable throughput for the secondary network. In this paper, we study the problem of designing the sensing duration to maximize the achievable throughput for the secondary network under the constraint that the primary users are sufficiently protected. We formulate the sensing-throughput tradeoff problem mathematically, and use energy detection sensing scheme to prove that the formulated problem indeed has one optimal sensing time which yields the highest throughput for the secondary network. Cooperative sensing using multiple mini-slots or multiple secondary users are also studied using the methodology proposed in this paper. Computer simulations have shown that for a 6 MHz channel, when the frame duration is 100 ms, and the signal-to-noise ratio of primary user at the secondary receiver is -20 dB, the optimal sensing time achieving the highest throughput while maintaining 90% detection probability is 14.2 ms. 
This optimal sensing time decreases when distributed spectrum sensing is applied.", "title": "" }, { "docid": "e461ba6f2a569fd93094a7ad8643cbb7", "text": "Sequence Generator Projection Constraint Conjunction 1 scheme(612,34,18,34,1) id alldifferent*18 2 scheme(612,34,18,2,2) id alldifferent*153 3 scheme(612,34,18,1,18) id alldifferent*34 4 scheme(612,34,18,1,18) absolute value symmetric alldifferent([1..18])*34 5 scheme(612,34,18,17,1) absolute value alldifferent*36 6 repart(612,34,18,34,9) id sum ctr(0)*306 7 repart(612,34,18,34,9) id twin*1 8 repart(612,34,18,34,9) id elements([i,-i ])*1 9 first(9,[1,3,5,7,9,11,13,15,17]) id strictly increasing*1 10 vector(612) id global cardinality([-18.. -1-17,0-0,1..18-17])*1 11 repart(612,34,18,34,9) id sum powers5 ctr(0)*306 12 repart(612,34,18,34,9) id sum cubes ctr(0)*306 13 repart(612,34,18,34,3) sign global cardinality([-1-3,0-0,1-3])*102 14 scheme(612,34,18,34,1) sign global cardinality([-1-17,0-0,1-17])*18 15 repart(612,34,18,17,9) sign global cardinality([-1-2,0-0,1-2])*153 16 repart(612,34,18,2,9) sign global cardinality([-1-17,0-0,1-17])*18 17 scheme(612,34,18,1,18) sign global cardinality([-1-9,0-0,1-9])*34 18 repart(612,34,18,34,9) sign sum ctr(0)*306 19 repart(612,34,18,34,9) sign twin*1 20 repart(612,34,18,34,9) absolute value twin*1 21 repart(612,34,18,34,9) sign elements([i,-i ])*1 22 scheme(612,34,18,34,1) sign among seq(3,[-1])*18 23 repart(612,34,18,34,9) absolute value elements([i,i ])*1 24 first(9,[1,3,5,7,9,11,13,15,17]) absolute value strictly increasing*1 25 first(6,[1,4,7,10,13,16]) absolute value strictly increasing*1 26 scheme(612,34,18,34,1) absolute value nvalue(17)*18 Selected Example Results", "title": "" }, { "docid": "ce8f000fa9a9ec51b8b2b63e98cec5fb", "text": "The Berlin Brain-Computer Interface (BBCI) project develops a noninvasive BCI system whose key features are 1) the use of well-established motor competences as control paradigms, 2) high-dimensional features from 128-channel electroencephalogram (EEG), and 3) advanced machine learning techniques. As reported earlier, our experiments demonstrate that very high information transfer rates can be achieved using the readiness potential (RP) when predicting the laterality of upcoming left- versus right-hand movements in healthy subjects. A more recent study showed that the RP similarly accompanies phantom movements in arm amputees, but the signal strength decreases with longer loss of the limb. In a complementary approach, oscillatory features are used to discriminate imagined movements (left hand versus right hand versus foot). In a recent feedback study with six healthy subjects with no or very little experience with BCI control, three subjects achieved an information transfer rate above 35 bits per minute (bpm), and further two subjects above 24 and 15 bpm, while one subject could not achieve any BCI control. These results are encouraging for an EEG-based BCI system in untrained subjects that is independent of peripheral nervous system activity and does not rely on evoked potentials even when compared to results with very well-trained subjects operating other BCI systems.", "title": "" }, { "docid": "e3bb16dfbe54599c83743e5d7f1facc6", "text": "Testosterone-dependent secondary sexual characteristics in males may signal immunological competence and are sexually selected for in several species,. In humans, oestrogen-dependent characteristics of the female body correlate with health and reproductive fitness and are found attractive. 
Enhancing the sexual dimorphism of human faces should raise attractiveness by enhancing sex-hormone-related cues to youth and fertility in females,, and to dominance and immunocompetence in males,,. Here we report the results of asking subjects to choose the most attractive faces from continua that enhanced or diminished differences between the average shape of female and male faces. As predicted, subjects preferred feminized to average shapes of a female face. This preference applied across UK and Japanese populations but was stronger for within-population judgements, which indicates that attractiveness cues are learned. Subjects preferred feminized to average or masculinized shapes of a male face. Enhancing masculine facial characteristics increased both perceived dominance and negative attributions (for example, coldness or dishonesty) relevant to relationships and paternal investment. These results indicate a selection pressure that limits sexual dimorphism and encourages neoteny in humans.", "title": "" }, { "docid": "775ffaac9501a46c246f96174b906700", "text": "We report on our experiences of introducing an instant messaging and group chat application into geographically distributed workgroups. We describe a number of issues we encountered, including privacy concerns, individual versus group training, and focusing on teams or individuals. The perception of the tool's utility was a complex issue, depending both on users' views of the importance of informal communication, and their perceptions of the nature of cross-site communication issues. Finally, we conclude with a discussion of critical mass, which is related to the features each user actually uses. More generally, we encountered a dilemma that imposes serious challenges for user-centered design of groupware systems", "title": "" }, { "docid": "cb98fd6c850d9b3d9a2bac638b9f632d", "text": "Artificial immune systems are a collection of algorithms inspired by the human immune system. Over the past 15 years, extensive research has been performed regarding the application of artificial immune systems to computer security. However, existing immune-inspired techniques have not performed as well as expected when applied to the detection of intruders in computer systems. In this thesis the development of the Dendritic Cell Algorithm is described. This is a novel immune-inspired algorithm based on the function of the dendritic cells of the human immune system. In nature, dendritic cells function as natural anomaly detection agents, instructing the immune system to respond if stress or damage is detected. Dendritic cells are a crucial cell in the detection and combination of ‘signals’ which provide the immune system with a sense of context. The Dendritic Cell Algorithm is based on an abstract model of dendritic cell behaviour, with the abstraction process performed in close collaboration with immunologists. This algorithm consists of components based on the key properties of dendritic cell behaviour, which involves data fusion and correlation components. In this algorithm, four categories of input signal are used. The resultant algorithm is formally described in this thesis and is validated on a standard machine learning dataset. The validation process shows that the Dendritic Cell Algorithm can be applied to static datasets and suggests that the algorithm is suitable for the analysis of time-dependent data. Further analysis and evaluation of the Dendritic Cell Algorithm is performed. 
This is assessed through the algorithm’s application to the detection of anomalous port scans. The results of this investigation show that the Dendritic Cell Algorithm can be applied to detection problems in real-time. This analysis also shows that detection with this algorithm produces high rates of false positives and high rates of true positives, in addition to being robust against modification to system parameters. The limitations of the Dendritic Cell Algorithm are also evaluated and presented, including loss of sensitivity and the generation of false positives under certain circumstances. It is shown that the Dendritic Cell Algorithm can perform well as an anomaly detection algorithm and can be applied to real-world, realtime data.", "title": "" }, { "docid": "13b372770bcd13729eaa6c8916ab6f9a", "text": "Finger vein recognition has drawn increasing attention from biometrics community due to its security and convenience. In this paper, a novel discriminative binary codes (DBC) learning method is proposed for finger vein recognition. First of all, subject relation graph is built to capture correlations among subjects. Based on the relation graph, binary templates are transformed to describe vein characteristics of subjects. To ensure that templates are discriminative and representative, graph transform is formulated into an optimization problem, in which the distance between templates from different subjects is maximized and templates provide maximum information about subjects. At last, supervised information for training instances is provided by the obtained binary templates, and SVMs are trained as the code learner for each bit. Compared with existing binary codes for finger vein recognition, DBC are more discriminative and shorter. In addition, they are generated with considering the relationships among subjects which may be useful to improve performance. Experimental results on PolyU database and MLA database demonstrate the effectiveness and efficiency of DBC for finger vein recognition and retrieval.", "title": "" }, { "docid": "96471eda3162fa5bdac40220646e7697", "text": "A key step in mass spectrometry (MS)-based proteomics is the identification of peptides in sequence databases by their fragmentation spectra. Here we describe Andromeda, a novel peptide search engine using a probabilistic scoring model. On proteome data, Andromeda performs as well as Mascot, a widely used commercial search engine, as judged by sensitivity and specificity analysis based on target decoy searches. Furthermore, it can handle data with arbitrarily high fragment mass accuracy, is able to assign and score complex patterns of post-translational modifications, such as highly phosphorylated peptides, and accommodates extremely large databases. The algorithms of Andromeda are provided. Andromeda can function independently or as an integrated search engine of the widely used MaxQuant computational proteomics platform and both are freely available at www.maxquant.org. The combination enables analysis of large data sets in a simple analysis workflow on a desktop computer. For searching individual spectra Andromeda is also accessible via a web server. 
We demonstrate the flexibility of the system by implementing the capability to identify cofragmented peptides, significantly improving the total number of identified peptides.", "title": "" }, { "docid": "17806963c91f6d6981f1dcebf3880927", "text": "The ability to assess the reputation of a member in a web community is a need addressed in many different ways according to the many different stages in which the nature of communities has evolved over time. In the case of reputation of goods/services suppliers, the solutions available to prevent feedback abuse are generally reliable but centralized under the control of a few big Internet companies. In this paper we show how a decentralized and distributed feedback management system can be built on top of the Bitcoin blockchain.", "title": "" }, { "docid": "6804ac61348ebd0cf9daca2396259d56", "text": "The hybrid electric bus (HEB) presents an emerging solution to exhaust gas emissions in urban transport. This paper proposes a multiport bidirectional switched reluctance motor (SRM) drive for a solar-assisted HEB (SHEB) powertrain, which not only improves the motoring performance, but also achieves flexible charging functions. To extend the driving range and achieve self-charging ability, photovoltaic (PV) panels are installed on the bus to decrease the reliance on fuels, batteries, and charging stations. A bidirectional front-end circuit with a PV-fed circuit is designed to integrate electrical components into one converter. Six driving and five charging modes are achieved. The dc voltage is boosted by the battery in generator control unit (GCU) driving mode and by the charge capacitor in battery driving mode, where the torque capability is improved. Usually, an extra converter is needed to achieve battery charging. In this paper, the battery can be directly charged by the demagnetization current in GCU or PV driving mode, and can be quickly charged by the PV panels and GCU/AC grids at SHEB standstill conditions, by utilizing the traction motor windings and integrated converter circuit, without external charging converters. Experiments on a three-phase 12/8 SRM confirm the effectiveness of the proposed drive and control scheme.", "title": "" }, { "docid": "b6bf6c87040bc4996315fee62acb911b", "text": "The influence of the sleep patterns of 2,259 students, aged 11 to 14 years, on trajectories of depressive symptoms, self-esteem, and grades was longitudinally examined using latent growth cross-domain models. Consistent with previous research, sleep decreased over time. Students who obtained less sleep in sixth grade exhibited lower initial self-esteem and grades and higher initial levels of depressive symptoms. Similarly, students who obtained less sleep over time reported heightened levels of depressive symptoms and decreased self-esteem. Sex of the student played a strong role as a predictor of hours of sleep, self-esteem, and grades. This study underscores the role of sleep in predicting adolescents' psychosocial outcomes and highlights the importance of using idiographic methodologies in the study of developmental processes.", "title": "" }, { "docid": "25b250495fd4989ce1a365d5ddaa526e", "text": "Supervised automation of selected subtasks in Robot-Assisted Minimally Invasive Surgery (RMIS) has potential to reduce surgeon fatigue, operating time, and facilitate tele-surgery. Tumor resection is a multi-step multilateral surgical procedure to localize, expose, and debride (remove) a subcutaneous tumor, then seal the resulting wound with surgical adhesive. 
We developed a finite state machine using the novel devices to autonomously perform the tumor resection. The first device is an interchangeable instrument mount which uses the jaws and wrist of a standard RMIS gripping tool to securely hold and manipulate a variety of end-effectors. The second device is a fluid injection system that can facilitate precision delivery of material such as chemotherapy, stem cells, and surgical adhesives to specific targets using a single-use needle attached via the interchangeable instrument mount. Fluid flow through the needle is controlled via an externally mounted automated lead screw. Initial experiments suggest that an automated Intuitive Surgical dVRK system which uses these devices combined with a palpation probe and sensing model described in a previous paper can successfully complete the entire procedure in five of ten trials. We also show that the most common failure phase, debridement, can be improved with visual feedback. Design details and video are available at: http://berkeleyautomation.github.io/surgical-tools.", "title": "" }, { "docid": "24abb3d2f2ced31c37acfd0624dcec4e", "text": "A new type of dependency, which includes the well-known functional dependencies as a special case, is defined for relational databases. By using this concept, a new (“fourth”) normal form for relation schemata is defined. This fourth normal form is strictly stronger than Codd's “improved third normal form” (or “Boyce-Codd normal form”). It is shown that every relation schema can be decomposed into a family of relation schemata in fourth normal form without loss of information (that is, the original relation can be obtained from the new relations by taking joins).", "title": "" } ]
scidocsrr
b39750117a119c36c25aae7d13e87597
Large-scale JPEG steganalysis using hybrid deep-learning framework
[ { "docid": "f0b522d7f3a0eeb6cb951356407cf15a", "text": "Today, the most accurate steganalysis methods for digital media are built as supervised classifiers on feature vectors extracted from the media. The tool of choice for the machine learning seems to be the support vector machine (SVM). In this paper, we propose an alternative and well-known machine learning tool-ensemble classifiers implemented as random forests-and argue that they are ideally suited for steganalysis. Ensemble classifiers scale much more favorably w.r.t. the number of training examples and the feature dimensionality with performance comparable to the much more complex SVMs. The significantly lower training complexity opens up the possibility for the steganalyst to work with rich (high-dimensional) cover models and train on larger training sets-two key elements that appear necessary to reliably detect modern steganographic algorithms. Ensemble classification is portrayed here as a powerful developer tool that allows fast construction of steganography detectors with markedly improved detection accuracy across a wide range of embedding methods. The power of the proposed framework is demonstrated on three steganographic methods that hide messages in JPEG images.", "title": "" } ]
[ { "docid": "b6ffc55e4e6b8e44201a5568cd9ca372", "text": "Unlike digitally savvy banking and retail industries, oil and natural gas businesses are latecomers to digitization. With large capital investments in complex industrial operations, firms in latecomer industries are seeking ways to cut costs and become responsive to market demands. Executives in oil and natural gas companies are facing unprecedented pressure to cut costs due to market turbulence and need advice on how to undertake digitization; organizational changes needed; who should lead the digitization effort; and the role of the chief information officer (CIO) in executing a digital strategy.", "title": "" }, { "docid": "9864bce09ff74218fb817aab62e70081", "text": "Nowadays, sentiment analysis methods become more and more popular especially with the proliferation of social media platform users number. In the same context, this paper presents a sentiment analysis approach which can faithfully translate the sentimental orientation of Arabic Twitter posts, based on a novel data representation and machine learning techniques. The proposed approach applied a wide range of features: lexical, surface-form, syntactic, etc. We also made use of lexicon features inferred from two Arabic sentiment words lexicons. To build our supervised sentiment analysis system, we use several standard classification methods (Support Vector Machines, K-Nearest Neighbour, Naïve Bayes, Decision Trees, Random Forest) known by their effectiveness over such classification issues.\n In our study, Support Vector Machines classifier outperforms other supervised algorithms in Arabic Twitter sentiment analysis. Via an ablation experiments, we show the positive impact of lexicon based features on providing higher prediction performance.", "title": "" }, { "docid": "ef1064ba6dcd464fd048aab9f70c4bdd", "text": "The problem of reproducing high dynamic range images on devices with restricted dynamic range has gained a lot of interest in the computer graphics community. There exist various approaches to this issue, which span several research areas including computer graphics, image processing, color science, physiology, neurology, psychology, et c. These approaches assume a thorough knowledge of both the objective and subjective attributes of an image. However, no comprehensive overview and analysis of such attributes has been published so far. In this paper, we present an overview of image quality attributes of different tone mapping methods. Furthermore, we propose a scheme of relationships between these attributes, leading to the definition of an overall image qua lity measure. We present results of subjective psychophysic al tests that we have performed to prove the proposed relationship scheme. We also present the evaluation of existing tone mapping methods with regard to these attributes. Our effort is not just useful to get into the tone mapping field or when implementing a tone mapping operator, but it also sets the stage for well-founded quality comparisons between tone mapping operators. By providing good definitions of the different attributes, user-driven or fully a utomatic comparisons are made possible at all.", "title": "" }, { "docid": "2b18aa800c4251e8cd8fbe39614eda4a", "text": "We consider the problem of finding small distance-preserving subgraphs of undirected, unweighted interval graphs with k terminal vertices. We prove the following results. 1. Finding an optimal distance-preserving subgraph is NP-hard for general graphs. 2. 
Every interval graph admits a subgraph with O(k) branching vertices that approximates pairwise terminal distances up to an additive term of +1. 3. There exists an interval graph Gint for which the +1 approximation is necessary to obtain the O(k) upper bound on the number of branching vertices. In particular, any distance-preserving subgraph of Gint has Ω(k log k) branching vertices. 4. Every interval graph admits a distance-preserving subgraph with O(k log k) branching vertices, i.e. the Ω(k log k) lower bound for interval graphs is tight. 5. There exists an interval graph such that every optimal distance-preserving subgraph of it has O(k) branching vertices and Ω(k log k) branching edges, thereby providing a separation between branching vertices and branching edges. The O(k) bound for distance-approximating subgraphs follows from a näıve analysis of shortest paths in interval graphs. Gint is constructed using bit-reversal permutation matrices. The O(k log k) bound for distance-preserving subgraphs uses a divide-and-conquer approach. Finally, the separation between branching vertices and branching edges employs Hansel’s lemma [Han64] for graph covering.", "title": "" }, { "docid": "b75eca4c07d5f04b73c4c8e447cbc878", "text": "For a conventional offline Buck-Boost LED driver, significant low frequency ripple current is produced when a high power factor has been achieved. In this paper, an innovative LED driver technology based on the Buck-Boost topology has been proposed. The featured configuration has greatly reduced the low frequency ripple current without compromising power factor performance. High efficiency and low component cost features have also been retained from conventional Buck-Boost LED driver. A 10W, 50V-0.2A experimental prototype has been constructed to verify the performance of the proposed technology.", "title": "" }, { "docid": "46884062bbf3153edec5d4943433c216", "text": "We address the key question of how object part representations can be found from the internal states of CNNs that are trained for high-level tasks, such as object classification. This work provides a new unsupervised method to learn semantic parts and gives new understanding of the internal representations of CNNs. Our technique is based on the hypothesis that semantic parts are represented by populations of neurons rather than by single filters. We propose a clustering technique to extract part representations, which we call Visual Concepts. We show that visual concepts are semantically coherent in that they represent semantic parts, and visually coherent in that corresponding image patches appear very similar. Also, visual concepts provide full spatial coverage of the parts of an object, rather than a few sparse parts as is typically found in keypoint annotations. Furthermore, We treat each visual concept as part detector and evaluate it for keypoint detection using the PASCAL3D+ dataset, and for part detection using our newly annotated ImageNetPart dataset. The experiments demonstrate that visual concepts can be used to detect parts. We also show that some visual concepts respond to several semantic parts, provided these parts are visually similar. 
Note that our ImageNetPart dataset gives rich part annotations which cover the whole object, making it useful for other part-related applications.", "title": "" }, { "docid": "dbb087a999a784669d2189e1c9cd92c4", "text": "The home automation industry is growing rapidly; this is fuelled by the need to provide supporting systems for the elderly and the disabled, especially those who live alone. Coupled with this, the world population is confirmed to be getting older. Home automation systems must comply with household standards and convenience of usage. This paper details the overall design of a wireless home automation system (WHAS) which has been built and implemented. The automation centers on recognition of voice commands and uses low-power RF ZigBee wireless communication modules which are relatively cheap. The home automation system is intended to control all lights and electrical appliances in a home or office using voice commands. The system has been tested and verified. The verification tests included a voice recognition response test and an indoor ZigBee communication test. The tests involved a mix of 10 male and female subjects with different Indian languages. 7 different voice commands were sent by each person. Thus the test involved sending a total of 70 commands, and 80.05% of these commands were recognized correctly. Keywords: Home automation, ZigBee transceivers, voice streaming, HM 2007, voice recognition.", "title": "" }, { "docid": "276e3670984416d145b426f78c529ed8", "text": "State estimators in power systems are currently used to, for example, detect faulty equipment and to route power flows. It is believed that state estimators will also play an increasingly important role in future smart power grids, as a tool to optimally and more dynamically route power flows. Therefore security of the estimator becomes an important issue. The estimators are currently located in control centers, and large numbers of measurements are sent over unencrypted communication channels to the centers. We here study stealthy false-data attacks against these estimators. We define a security measure tailored to quantify how hard attacks are to perform, and describe an efficient algorithm to compute it. Since there are so many measurement devices in these systems, it is not reasonable to assume that all devices can be made encrypted overnight in the future. Therefore we propose two algorithms to place encrypted devices in the system so as to maximize their utility in terms of increased system security. We illustrate the effectiveness of our algorithms on two IEEE benchmark power networks under two attack and protection cost models.", "title": "" }, { "docid": "652ae28212819fbfdb12fa8f44a7b0c6", "text": "We show how interval analysis can be used to compute the minimum value of a twice continuously differentiable function of one variable over a closed interval. When both the first and second derivatives of the function have a finite number of isolated zeros, our method never fails to find the global minimum.", "title": "" }, { "docid": "d880535f198a1f0a26b18572f674b829", "text": "Human Activity Recognition (HAR) aims to identify the actions performed by humans using signals collected from various sensors embedded in mobile devices. In recent years, deep learning techniques have further improved HAR performance on several benchmark datasets. 
In this paper, we propose one-dimensional Convolutional Neural Network (1D CNN) for HAR that employs a divide and conquer-based classifier learning coupled with test data sharpening. Our approach leverages a two-stage learning of multiple 1D CNN models; we first build a binary classifier for recognizing abstract activities, and then build two multi-class 1D CNN models for recognizing individual activities. We then introduce test data sharpening during prediction phase to further improve the activity recognition accuracy. While there have been numerous researches exploring the benefits of activity signal denoising for HAR, few researches have examined the effect of test data sharpening for HAR. We evaluate the effectiveness of our approach on two popular HAR benchmark datasets, and show that our approach outperforms both the two-stage 1D CNN-only method and other state of the art approaches.", "title": "" }, { "docid": "2a56585a288405b9adc7d0844980b8bf", "text": "In this paper we propose the first exact solution to the problem of estimating the 3D room layout from a single image. This problem is typically formulated as inference in a Markov random field, where potentials count image features (e.g ., geometric context, orientation maps, lines in accordance with vanishing points) in each face of the layout. We present a novel branch and bound approach which splits the label space in terms of candidate sets of 3D layouts, and efficiently bounds the potentials in these sets by restricting the contribution of each individual face. We employ integral geometry in order to evaluate these bounds in constant time, and as a consequence, we not only obtain the exact solution, but also in less time than approximate inference tools such as message-passing. We demonstrate the effectiveness of our approach in two benchmarks and show that our bounds are tight, and only a few evaluations are necessary.", "title": "" }, { "docid": "2ffb20d66a0d5cb64442c2707b3155c6", "text": "A botnet is a network of compromised hosts that is under the control of a single, malicious entity, often called the botmaster. We present a system that aims to detect bot-infected machines, independent of any prior information about the command and control channels or propagation vectors, and without requiring multiple infections for correlation. Our system relies on detection models that target the characteristic fact that every bot receives commands from the botmaster to which it responds in a specific way. These detection models are generated automatically from network traffic traces recorded from actual bot instances. We have implemented the proposed approach and demonstrate that it can extract effective detection models for a variety of different bot families. These models are precise in describing the activity of bots and raise very few false positives.", "title": "" }, { "docid": "b1f98cbb045f8c15f53d284c9fa9d881", "text": "If the pace of increase in life expectancy in developed countries over the past two centuries continues through the 21st century, most babies born since 2000 in France, Germany, Italy, the UK, the USA, Canada, Japan, and other countries with long life expectancies will celebrate their 100th birthdays. Although trends differ between countries, populations of nearly all such countries are ageing as a result of low fertility, low immigration, and long lives. A key question is: are increases in life expectancy accompanied by a concurrent postponement of functional limitations and disability? 
The answer is still open, but research suggests that ageing processes are modifiable and that people are living longer without severe disability. This finding, together with technological and medical development and redistribution of work, will be important for our chances to meet the challenges of ageing populations.", "title": "" }, { "docid": "f05225e7e7c35eaafef59487d16a67c9", "text": "Although constructivism is a concept that has been embraced recently, a great number of sociologists, psychologists, applied linguists, and teachers have provided varied definitions of this concept. Also, many philosophers and educationalists such as Piaget, Vygotsky, and Perkins suggest that constructivism and social constructivism try to solve the problems of traditional teaching and learning. This research review presents the meaning and the origin of constructivism, and then discusses the role of learning, teaching, learner, and teacher in the first part from a constructivist perspective. In the second part, the paper discusses the same issues, as presented in the first part, from a social constructivist perspective. The purpose of this research review is to make EFL teachers and EFL students more familiar with the importance and guidance of both constructivism and social constructivism perspectives.", "title": "" }, { "docid": "54fb0028468f5e766d2c005dabe504c5", "text": "English. Several unsupervised methods for hypernym detection have been investigated in distributional semantics. Here we present a new approach based on a smoothed version of the distributional inclusion hypothesis. The new method is able to improve hypernym detection after testing on the BLESS dataset. Italian. Building on the unsupervised methods available in the literature, we address the task of hypernym recognition in the distributional space. We introduce a new directional measure, based on an extension of the distributional inclusion hypothesis, which improves hypernym recognition when tested on the BLESS dataset.", "title": "" }, { "docid": "7177503e5a6dffcaab46009673af5eed", "text": "This paper describes a heart attack self-test application for a mobile phone that allows potential victims, without the intervention of a medical specialist, to quickly assess whether they are having a heart attack. Heart attacks can occur anytime and anywhere. Using pervasive technology such as a mobile phone and a small wearable ECG sensor it is possible to collect the user's symptoms and to detect the onset of a heart attack by analysing the ECG recordings. If the application assesses that the user is at risk, it will urge the user to call the emergency services immediately. If the user has a cardiac arrest the application will automatically determine the current location of the user and alert the ambulance services and others to the person's location.", "title": "" }, { "docid": "cc6895789b42f7ae779c2236cde4636a", "text": "Modern day social media search and recommender systems require complex query formulation that incorporates both user context and their explicit search queries. Users expect these systems to be fast and provide relevant results to their query and context. With millions of documents to choose from, these systems utilize a multi-pass scoring function to narrow the results and provide the most relevant ones to users. Candidate selection is required to sift through all the documents in the index and select a relevant few to be ranked by subsequent scoring functions. 
It becomes crucial to narrow down the document set while maintaining relevant ones in resulting set. In this tutorial we survey various candidate selection techniques and deep dive into case studies on a large scale social media platform. In the later half we provide hands-on tutorial where we explore building these candidate selection models on a real world dataset and see how to balance the tradeoff between relevance and latency.", "title": "" }, { "docid": "f3aa019816ae399c3fe834ffce3db53e", "text": "This paper presents a method to incorporate 3D line segments in vision based SLAM. A landmark initialization method that relies on the Plucker coordinates to represent a 3D line is introduced: a Gaussian sum approximates the feature initial state and is updated as new observations are gathered by the camera. Once initialized, the landmarks state is estimated along an EKF-based SLAM approach: constraints associated with the Plucker representation are considered during the update step of the Kalman filter. The whole SLAM algorithm is validated in simulation runs and results obtained with real data are presented.", "title": "" }, { "docid": "19f4de5f01f212bf146087d4695ce15e", "text": "Reliable feature correspondence between frames is a critical step in visual odometry (VO) and visual simultaneous localization and mapping (V-SLAM) algorithms. In comparison with existing VO and V-SLAM algorithms, semi-direct visual odometry (SVO) has two main advantages that lead to stateof-the-art frame rate camera motion estimation: direct pixel correspondence and efficient implementation of probabilistic mapping method. This paper improves the SVO mapping by initializing the mean and the variance of the depth at a feature location according to the depth prediction from a singleimage depth prediction network. By significantly reducing the depth uncertainty of the initialized map point (i.e., small variance centred about the depth prediction), the benefits are twofold: reliable feature correspondence between views and fast convergence to the true depth in order to create new map points. We evaluate our method with two outdoor datasets: KITTI dataset and Oxford Robotcar dataset. The experimental results indicate that the improved SVO mapping results in increased robustness and camera tracking accuracy.", "title": "" }, { "docid": "300cd3e2d8e21f0c8dcf5ecba72cf283", "text": "Accurate and reliable traffic forecasting for complicated transportation networks is of vital importance to modern transportation management. The complicated spatial dependencies of roadway links and the dynamic temporal patterns of traffic states make it particularly challenging. To address these challenges, we propose a new capsule network (CapsNet) to extract the spatial features of traffic networks and utilize a nested LSTM (NLSTM) structure to capture the hierarchical temporal dependencies in traffic sequence data. A framework for network-level traffic forecasting is also proposed by sequentially connecting CapsNet and NLSTM. On the basis of literature review, our study is the first to adopt CapsNet and NLSTM in the field of traffic forecasting. An experiment on a Beijing transportation network with 278 links shows that the proposed framework with the capability of capturing complicated spatiotemporal traffic patterns outperforms multiple state-of-the-art traffic forecasting baseline models. 
The superiority and feasibility of CapsNet and NLSTM are also demonstrated, respectively, by visualizing and quantitatively evaluating the experimental results.", "title": "" } ]
scidocsrr
abe66f029600b23d6f9401a51417505d
The Feature Selection and Intrusion Detection Problems
[ { "docid": "2568f7528049b4ffc3d9a8b4f340262b", "text": "We introduce a new form of linear genetic programming (GP). Two methods of acceleration of our GP approach are discussed: 1) an efficient algorithm that eliminates intron code and 2) a demetic approach to virtually parallelize the system on a single processor. Acceleration of runtime is especially important when operating with complex data sets, because they are occuring in real-world applications. We compare GP performance on medical classification problems from a benchmark database with results obtained by neural networks. Our results show that GP performs comparable in classification and generalization.", "title": "" } ]
[ { "docid": "7c9cd59a4bb14f678c57ad438f1add12", "text": "This paper proposes a new ensemble method built upon a deep neural network architecture. We use a set of meteorological models for rain forecast as base predictors. Each meteorological model is provided to a channel of the network and, through a convolution operator, the prediction models are weighted and combined. As a result, the predicted value produced by the ensemble depends on both the spatial neighborhood and the temporal pattern. We conduct some computational experiments in order to compare our approach to other ensemble methods widely used for daily rainfall prediction. The results show that our architecture based on ConvLSTM networks is a strong candidate to solve the problem of combining predictions in a spatiotemporal context.", "title": "" }, { "docid": "bba15d88edc2574dcb3b12a78c3b2d57", "text": "Gaussian Processes (GPs) are widely used tools in statistics, machine learning, robotics, computer vision, and scientific computation. However, despite their popularity, they can be difficult to apply; all but the simplest classification or regression applications require specification and inference over complex covariance functions that do not admit simple analytical posteriors. This paper shows how to embed Gaussian processes in any higherorder probabilistic programming language, using an idiom based on memoization, and demonstrates its utility by implementing and extending classic and state-of-the-art GP applications. The interface to Gaussian processes, called gpmem, takes an arbitrary real-valued computational process as input and returns a statistical emulator that automatically improve as the original process is invoked and its input-output behavior is recorded. The flexibility of gpmem is illustrated via three applications: (i) Robust GP regression with hierarchical hyper-parameter learning, (ii) discovering symbolic expressions from time-series data by fully Bayesian structure learning over kernels generated by a stochastic grammar, and (iii) a bandit formulation of Bayesian optimization with automatic inference and action selection. All applications share a single 50-line Python library and require fewer than 20 lines of probabilistic code each.", "title": "" }, { "docid": "7ce1646e0fe1bd83f9feb5ec20233c93", "text": "An emerging class of theories concerning the functional structure of the brain takes the reuse of neural circuitry for various cognitive purposes to be a central organizational principle. According to these theories, it is quite common for neural circuits established for one purpose to be exapted (exploited, recycled, redeployed) during evolution or normal development, and be put to different uses, often without losing their original functions. Neural reuse theories thus differ from the usual understanding of the role of neural plasticity (which is, after all, a kind of reuse) in brain organization along the following lines: According to neural reuse, circuits can continue to acquire new uses after an initial or original function is established; the acquisition of new uses need not involve unusual circumstances such as injury or loss of established function; and the acquisition of a new use need not involve (much) local change to circuit structure (e.g., it might involve only the establishment of functional connections to new neural partners). 
Thus, neural reuse theories offer a distinct perspective on several topics of general interest, such as: the evolution and development of the brain, including (for instance) the evolutionary-developmental pathway supporting primate tool use and human language; the degree of modularity in brain organization; the degree of localization of cognitive function; and the cortical parcellation problem and the prospects (and proper methods to employ) for function to structure mapping. The idea also has some practical implications in the areas of rehabilitative medicine and machine interface design.", "title": "" }, { "docid": "4dfb1fab364811cdd9cd7baa8c9ae0f3", "text": "Understanding the mechanisms of evolution of brain pathways for complex behaviours is still in its infancy. Making further advances requires a deeper understanding of brain homologies, novelties and analogies. It also requires an understanding of how adaptive genetic modifications lead to restructuring of the brain. Recent advances in genomic and molecular biology techniques applied to brain research have provided exciting insights into how complex behaviours are shaped by selection of novel brain pathways and functions of the nervous system. Here, we review and further develop some insights to a new hypothesis on one mechanism that may contribute to nervous system evolution, in particular by brain pathway duplication. Like gene duplication, we propose that whole brain pathways can duplicate and the duplicated pathway diverge to take on new functions. We suggest that one mechanism of brain pathway duplication could be through gene duplication, although other mechanisms are possible. We focus on brain pathways for vocal learning and spoken language in song-learning birds and humans as example systems. This view presents a new framework for future research in our understanding of brain evolution and novel behavioural traits.", "title": "" }, { "docid": "4073da56cc874ea71f5e8f9c1c376cf8", "text": "AIM\nThis article reports the results of a study evaluating a preferred music listening intervention for reducing anxiety in older adults with dementia in nursing homes.\n\n\nBACKGROUND\nAnxiety can have a significant negative impact on older adults' functional status, quality of life and health care resources. However, anxiety is often under-diagnosed and inappropriately treated in those with dementia. Little is known about the use of a preferred music listening intervention for managing anxiety in those with dementia.\n\n\nDESIGN\nA quasi-experimental pretest and posttest design was used.\n\n\nMETHODS\nThis study aimed to evaluate the effectiveness of a preferred music listening intervention on anxiety in older adults with dementia in nursing home. Twenty-nine participants in the experimental group received a 30-minute music listening intervention based on personal preferences delivered by trained nursing staff in mid-afternoon, twice a week for six weeks. Meanwhile, 23 participants in the control group only received usual standard care with no music. Anxiety was measured by Rating Anxiety in Dementia at baseline and week six. 
Analysis of covariance (ancova) was used to determine the effectiveness of a preferred music listening intervention on anxiety at six weeks while controlling for pretest anxiety, age and marital status.\n\n\nRESULTS\nancova results indicated that older adults who received the preferred music listening had a significantly lower anxiety score at six weeks compared with those who received the usual standard care with no music (F = 12.15, p = 0.001).\n\n\nCONCLUSIONS\nPreferred music listening had a positive impact by reducing the level of anxiety in older adults with dementia.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nNursing staff can learn how to implement preferred music intervention to provide appropriate care tailored to the individual needs of older adults with dementia. Preferred music listening is an inexpensive and viable intervention to promote mental health of those with dementia.", "title": "" }, { "docid": "f0bbe4e6d61a808588153c6b5fc843aa", "text": "The development of Information and Communications Technologies (ICT) has affected various fields including the automotive industry. Therefore, vehicle network protocols such as Controller Area Network (CAN), Local Interconnect Network (LIN), and FlexRay have been introduced. Although CAN is the most widely used for vehicle network protocol, its security issue is not properly addressed. In this paper, we propose a security gateway, an improved version of existing CAN gateways, to protect CAN from spoofing and DoS attacks. We analyze sequence of messages based on the driver’s behavior to resist against spoofing attack and utilize a temporary ID and SipHash algorithm to resist against DoS attack. For the verification of our proposed method, OMNeT++ is used. The suggested method shows high detection rate and low increase of traffic. Also, analysis of frame drop rate during DoS attack shows that our suggested method can defend DoS attack.", "title": "" }, { "docid": "094d027465ac59fda9ae67d62e83782f", "text": "In this paper, frequency domain techniques are used to derive the tracking properties of the recursive least squares (RLS) algorithm applied to an adaptive antenna array in a mobile fading environment, expanding the use of such frequency domain approaches for nonstationary RLS tracking to the interference canceling problem that characterizes the use of antenna arrays in mobile wireless communications. The analysis focuses on the effect of the exponential weighting of the correlation estimation filter and its effect on the estimations of the time variant autocorrelation matrix and cross-correlation vector. Specifically, the case of a flat Rayleigh fading desired signal applied to an array in the presence of static interferers is considered with an AR2 fading process approximating the Jakes’ fading model. The result is a mean square error (MSE) performance metric parameterized by the fading bandwidth and the RLS exponential weighting factor, allowing optimal parameter selection. The analytic results are verified and demonstrated with a simulation example.", "title": "" }, { "docid": "b14502732b07cfc3153cd419b01084e5", "text": "Functional logic programming and probabilistic programming have demonstrated the broad benefits of combining laziness (non-strict evaluation with sharing of the results) with non-determinism. 
Yet these benefits are seldom enjoyed in functional programming, because the existing features for non-strictness, sharing, and non-determinism in functional languages are tricky to combine.\n We present a practical way to write purely functional lazy non-deterministic programs that are efficient and perspicuous. We achieve this goal by embedding the programs into existing languages (such as Haskell, SML, and OCaml) with high-quality implementations, by making choices lazily and representing data with non-deterministic components, by working with custom monadic data types and search strategies, and by providing equational laws for the programmer to reason about their code.", "title": "" }, { "docid": "b677a4762ceb4ec6f9f1fc418a701982", "text": "NoSQL databases are the new breed of databases developed to overcome the drawbacks of RDBMS. The goal of NoSQL is to provide scalability, availability and meet other requirements of cloud computing. The common motivation of NoSQL design is to meet scalability and fail over. In most of the NoSQL database systems, data is partitioned and replicated across multiple nodes. Inherently, most of them use either Google's MapReduce or Hadoop Distributed File System or Hadoop MapReduce for data collection. Cassandra, HBase and MongoDB are mostly used and they can be termed as the representative of NoSQL world. This tutorial discusses the features of NoSQL databases in the light of CAP theorem.", "title": "" }, { "docid": "d440e08b7f2868459fbb31b94c15db5b", "text": "Recently, the necessity of hybrid-microgrid system has been proved as a modern power structure. This paper studies a power management system (PMS) in a hybrid network to control the power-flow procedure between DC and AC buses. The proposed architecture for PMS is designed to eliminate the power disturbances and manage the automatic connection among multiple sources. In this paper, PMS benefits from a 3-phase proportional resonance (PR) control ability to accurately adjust the inverter operation. Also, a Photo-Voltaic (PV) unit and a distributed generator (DG) are considered to supply the load demand power. Compared to the previous studies, the applied scheme has sufficient capability of quickly supplying the load in different scenarios with no network failures. The validity of implemented method is verified through the simulation results.", "title": "" }, { "docid": "0c70966c4dbe41458f7ec9692c566c1f", "text": "By 2012 the U.S. military had increased its investment in research and production of unmanned aerial vehicles (UAVs) from $2.3 billion in 2008 to $4.2 billion [1]. Currently UAVs are used for a wide range of missions such as border surveillance, reconnaissance, transportation and armed attacks. UAVs are presumed to provide their services at any time, be reliable, automated and autonomous. Based on these presumptions, governmental and military leaders expect UAVs to improve national security through surveillance or combat missions. To fulfill their missions, UAVs need to collect and process data. Therefore, UAVs may store a wide range of information from troop movements to environmental data and strategic operations. The amount and kind of information enclosed make UAVs an extremely interesting target for espionage and endangers UAVs of theft, manipulation and attacks. Events such as the loss of an RQ-170 Sentinel to Iranian military forces on 4th December 2011 [2] or the “keylogging” virus that infected an U.S. 
UAV fleet at Creech Air Force Base in Nevada in September 2011 [3] show that past efforts to identify risks and harden UAVs are insufficient. Due to the increasing governmental and military reliance on UAVs to protect national security, the necessity of a methodical and reliable analysis of the technical vulnerabilities becomes apparent. We investigated recent attacks and developed a scheme for the risk assessment of UAVs based on the provided services and communication infrastructures. We provide a first approach to a UAV-specific risk assessment and take into account the factors exposure, communication systems, storage media, sensor systems, and fault handling mechanisms. We used this approach to assess the risk of some currently used UAVs: the “MQ-9 Reaper” and the “AR Drone”. A risk analysis of the “RQ-170 Sentinel” is discussed.", "title": "" }, { "docid": "7a3573bfb32dc1e081d43fe9eb35a23b", "text": "Collections of relational paraphrases have been automatically constructed from large text corpora, as a WordNet counterpart for the realm of binary predicates and their surface forms. However, these resources fall short in their coverage of hypernymy links (subsumptions) among the synsets of phrases. This paper closes this gap by computing a high-quality alignment between the relational phrases of the Patty taxonomy, one of the largest collections of this kind, and the verb senses of WordNet. To this end, we devise judicious features and develop a graph-based alignment algorithm by adapting and extending the SimRank random-walk method. The resulting taxonomy of relational phrases and verb senses, coined HARPY, contains 20,812 synsets organized into a Directed Acyclic Graph (DAG) with 616,792 hypernymy links. Our empirical assessment indicates that the alignment links between Patty and WordNet have high accuracy, with a Mean Reciprocal Rank (MRR) score of 0.7 and a Normalized Discounted Cumulative Gain (NDCG) score of 0.73. As an additional extrinsic value, HARPY provides fine-grained lexical types for the arguments of verb senses in WordNet.", "title": "" }, { "docid": "9e592238813d2bb28629f3dddaba109d", "text": "Traveling-wave array design techniques are applied to microstrip comb-line antennas in the millimeter-wave band. The simple design procedure is demonstrated. To neglect the effect of reflection waves in the design, a radiating element with a reflection-canceling slit and a stub-integrated radiating element are proposed. Matching performance is also improved.", "title": "" }, { "docid": "3e6aac2e0ff6099aabeee97dc1292531", "text": "Although ordinary least-squares (OLS) regression is one of the most familiar statistical tools, far less has been written, especially in the pedagogical literature, on regression through the origin (RTO). Indeed, the subject is surprisingly controversial. The present note highlights situations in which RTO is appropriate, discusses the implementation and evaluation of such models and compares RTO functions among three popular statistical packages. Some examples gleaned from past Teaching Statistics articles are used as illustrations. For expository convenience, OLS and RTO refer here to linear regressions obtained by least-squares methods with and without a constant term, respectively.", "title": "" }, { "docid": "4cb0d0d6f1823f108a3fc32e0c407605", "text": "This paper describes a novel method to approximate the instantaneous frequency of non-stationary signals through an application of the fractional Fourier transform (FRFT). 
FRFT enables us to build a compact and accurate chirp dictionary for each windowed signal; thus the proposed approach offers improved computational efficiency and good performance when compared with the chirp atom method.", "title": "" }, { "docid": "5a805b6f9e821b7505bccc7b70fdd557", "text": "There are many factors that influence translators while translating a text. Amongst these factors is the notion of ideology transmission through the translated texts. This paper is located within the framework of Descriptive Translation Studies (DTS) and Critical Discourse Analysis (CDA). It investigates the notion of ideology with particular use of critical discourse analysis. The purpose is to highlight the relationship between language and ideology in translated texts. It also aims at discovering whether the translator's socio-cultural and ideological constraints influence the production of his/her translations. As a mixed research method study, the corpus consists of two different Arabic translated versions of the English book “Media Control” by Noam Chomsky. The micro-level contains the qualitative stage, where a detailed description and comparison (contrastive and comparative analysis) is provided. The micro-level analysis includes the lexical items along with the grammatical items (passive vs. active, nominalisation vs. de-nominalisation, moralisation, and omission vs. addition). In order to have more reliable and objective data, computed frequencies of the ideological significance occurrences, along with percentages and the Chi-square formula, were used throughout the data analysis stage, which forms the quantitative part of the current study. The main objective of the mentioned data analysis methodologies is to find out the dissimilarity between the proportions of the information obtained from the target texts (TTs) and their equivalents in the source text (ST). The findings indicate that there are significant differences between the two TTs in relation to the word choices, including the lexical items and the other syntactic structures, compared with the ST. These significant differences indicate some ideological transmission through the translation process of the two TTs. Therefore, and to some extent, it can be stated that the differences were also influenced by the translators' socio-cultural and ideological constraints.", "title": "" }, { "docid": "6c730f32b02ca58f66e98f9fc5181484", "text": "When analyzing a visualized network, users need to explore different sections of the network to gain insight. However, effective exploration of large networks is often a challenge. While various tools are available for users to explore the global and local features of a network, these tools usually require significant interaction activities, such as repetitive navigation actions to follow network nodes and edges. In this paper, we propose a structure-based suggestive exploration approach to support effective exploration of large networks by suggesting appropriate structures upon user request. Encoding nodes with vectorized representations by transforming information of surrounding structures of nodes into a high dimensional space, our approach can identify similar structures within a large network, enable user interaction with multiple similar structures simultaneously, and guide the exploration of unexplored structures. 
We develop a web-based visual exploration system to incorporate this suggestive exploration approach and compare performances of our approach under different vectorizing methods and networks. We also present the usability and effectiveness of our approach through a controlled user study with two datasets.", "title": "" }, { "docid": "38e9aa4644edcffe87dd5ae497e99bbe", "text": "Hashtags, created by social network users, have gained a huge popularity in recent years. As a kind of metatag for organizing information, hashtags in online social networks, especially in Instagram, have greatly facilitated users' interactions. In recent years, academia starts to use hashtags to reshape our understandings on how users interact with each other. #like4like is one of the most popular hashtags in Instagram with more than 290 million photos appended with it, when a publisher uses #like4like in one photo, it means that he will like back photos of those who like this photo. Different from other hashtags, #like4like implies an interaction between a photo's publisher and a user who likes this photo, and both of them aim to attract likes in Instagram. In this paper, we study whether #like4like indeed serves the purpose it is created for, i.e., will #like4like provoke more likes? We first perform a general analysis of #like4like with 1.8 million photos collected from Instagram, and discover that its quantity has dramatically increased by 1,300 times from 2012 to 2016. Then, we study whether #like4like will attract likes for photo publishers; results show that it is not #like4like but actually photo contents attract more likes, and the lifespan of a #like4like photo is quite limited. In the end, we study whether users who like #like4like photos will receive likes from #like4like publishers. However, results show that more than 90% of the publishers do not keep their promises, i.e., they will not like back others who like their #like4like photos; and for those who keep their promises, the photos which they like back are often randomly selected.", "title": "" }, { "docid": "c79510daa790e5c92e0c3899cc4a563b", "text": "Purpose – The purpose of this study is to interpret consumers’ emotion in their consumption experience in the context of mobile commerce from an experiential view. The study seeks to address concerns about the experiential aspects of mobile commerce regardless of the consumption type. For the purpose, the authors aims to propose a stimulus-organism-response (S-O-R) based model that incorporates both utilitarian and hedonic factors of consumers. Design/methodology/approach – A survey study was conducted to collect data from 293 mobile phone users. The questionnaire was administered in study classrooms, a library, or via e-mail. The measurement model and structural model were examined using LISREL 8.7. Findings – The results of this research implied that emotion played a significant role in the mobile consumption experience; hedonic factors had a positive effect on the consumption experience, while utilitarian factors had a negative effect on the consumption experience of consumers. The empirical findings also indicated that media richness was as important as subjective norms, and more important than convenience and self-efficacy. Originality/value – Few m-commerce studies have focused directly on the experiential aspects of consumption, including the hedonic experience and positive emotions among mobile device users. 
Applying the stimulus-organism-response (S-O-R) framework from the perspective of the experiential view, the current research model is developed to examine several utilitarian and hedonic factors in the context of the consumption experience, and indicates a comparison between the information processing (utilitarian) view and the experiential (hedonic) view of consumer behavior. It illustrates the relationships among six variables (i.e. convenience, media richness, subjective norms, self-efficacy, emotion, and consumption experience) in a mobile commerce context.", "title": "" }, { "docid": "d9bd23208ab6eb8688afea408a4c9eba", "text": "A novel ultra-wideband (UWB) bandpass filter with 5 to 6 GHz rejection band is proposed. The multiple coupled line structure is incorporated with multiple-mode resonator (MMR) to provide wide transmission band and enhance out-of band performance. To inhibit the signals ranged from 5- to 6-GHz, four stepped-impedance open stubs are implemented on the MMR without increasing the size of the proposed filter. The design of the proposed UWB filter has two transmission bands. The first passband from 2.8 GHz to 5 GHz has less than 2 dB insertion loss and greater than 18 dB return loss. The second passband within 6 GHz and 10.6 GHz has less than 1.5 dB insertion loss and greater than 15 dB return loss. The rejection at 5.5 GHz is better than 50 dB. This filter can be integrated in UWB radio systems and efficiently enhance the interference immunity from WLAN.", "title": "" } ]
scidocsrr
65fb1fc29df86e7ff17a85eaf8c35e26
Accelerated PSO Swarm Search Feature Selection for Data Stream Mining Big Data
[ { "docid": "eeff4d71a0af418828d5783a041b466f", "text": "In recent years, advances in hardware technology have facilitated ne w ways of collecting data continuously. In many applications such as network monitorin g, the volume of such data is so large that it may be impossible to store the data on disk. Furthermore, even when the data can be stored, the volume of th incoming data may be so large that it may be impossible to process any partic ular record more than once. Therefore, many data mining and database op erati ns such as classification, clustering, frequent pattern mining and indexing b ecome significantly more challenging in this context. In many cases, the data patterns may evolve continuously, as a result of which it is necessary to design the mining algorithms effectively in order to accou nt f r changes in underlying structure of the data stream. This makes the solution s of the underlying problems even more difficult from an algorithmic and computa tion l point of view. This book contains a number of chapters which are caref ully chosen in order to discuss the broad research issues in data streams. The purp ose of this chapter is to provide an overview of the organization of the stream proces sing and mining techniques which are covered in this book.", "title": "" }, { "docid": "b691909add295b32b69e9720076ef850", "text": "Decision trees are considered to be one of the most popular approaches for representing classifiers. Researchers from various disciplines such as statistics, machine learning, pattern recognition, and data mining considered the issue of growing a decision tree from available data. This paper presents an updated survey of current methods for constructing decision tree classifiers in a top-down manner. The paper suggests a unified algorithmic framework for presenting these algorithms and describes the various splitting criteria and pruning methodologies.", "title": "" } ]
[ { "docid": "f8d0929721ba18b2412ca516ac356004", "text": "Because of the fact that vehicle crash tests are complex and complicated experiments it is advisable to establish their mathematical models. This paper contains an overview of the kinematic and dynamic relationships of a vehicle in a collision. There is also presented basic mathematical model representing a collision together with its analysis. The main part of this paper is devoted to methods of establishing parameters of the vehicle crash model and to real crash data investigation i.e. – creation of a Kelvin model for a real experiment, its analysis and validation. After model’s parameters extraction a quick assessment of an occupant crash severity is done. Key-Words: Modeling, vehicle crash, Kelvin model, data processing.", "title": "" }, { "docid": "2ad76db05382d5bbdae27d5192cccd72", "text": "Very large-scale classification taxonomies typically have hundreds of thousands of categories, deep hierarchies, and skewed category distribution over documents. However, it is still an open question whether the state-of-the-art technologies in automated text categorization can scale to (and perform well on) such large taxonomies. In this paper, we report the first evaluation of Support Vector Machines (SVMs) in web-page classification over the full taxonomy of the Yahoo! categories. Our accomplishments include: 1) a data analysis on the Yahoo! taxonomy; 2) the development of a scalable system for large-scale text categorization; 3) theoretical analysis and experimental evaluation of SVMs in hierarchical and non-hierarchical settings for classification; 4) an investigation of threshold tuning algorithms with respect to time complexity and their effect on the classification accuracy of SVMs. We found that, in terms of scalability, the hierarchical use of SVMs is efficient enough for very large-scale classification; however, in terms of effectiveness, the performance of SVMs over the Yahoo! Directory is still far from satisfactory, which indicates that more substantial investigation is needed.", "title": "" }, { "docid": "6c9acb831bc8dc82198aef10761506be", "text": "In the context of civil rights law, discrimination refers to unfair or unequal treatment of people based on membership to a category or a minority, without regard to individual merit. Rules extracted from databases by data mining techniques, such as classification or association rules, when used for decision tasks such as benefit or credit approval, can be discriminatory in the above sense. In this paper, the notion of discriminatory classification rules is introduced and studied. Providing a guarantee of non-discrimination is shown to be a non trivial task. A naive approach, like taking away all discriminatory attributes, is shown to be not enough when other background knowledge is available. Our approach leads to a precise formulation of the redlining problem along with a formal result relating discriminatory rules with apparently safe ones by means of background knowledge. An empirical assessment of the results on the German credit dataset is also provided.", "title": "" }, { "docid": "49585da1d2c3102683e73dddb830ba36", "text": "The knowledge pyramid has been used for several years to illustrate the hierarchical relationships between data, information, knowledge, and wisdom. This paper posits that the knowledge pyramid is too basic and fails to represent reality and presents a revised knowledge pyramid. 
One key difference is that the revised knowledge pyramid includes knowledge management as an extraction of reality with a focus on organizational learning. The model also posits that newer initiatives such as business and/or customer intelligence are the result of confusion in understanding the traditional knowledge pyramid that is resolved in the revised knowledge pyramid.", "title": "" }, { "docid": "3827ee6ebfc7813b566ef1d8f94c0f42", "text": "We provide a simple but novel supervised weighting scheme for adjusting term frequency in tf-idf for sentiment analysis and text classification. We compare our method to baseline weighting schemes and find that it outperforms them on multiple benchmarks. The method is robust and works well on both snippets and longer documents.", "title": "" }, { "docid": "670b58d379b7df273309e55cf8e25db4", "text": "In this paper, we introduce a new large-scale dataset of ships, called SeaShips, which is designed for training and evaluating ship object detection algorithms. The dataset currently consists of 31 455 images and covers six common ship types (ore carrier, bulk cargo carrier, general cargo ship, container ship, fishing boat, and passenger ship). All of the images are from about 10 080 real-world video segments, which are acquired by the monitoring cameras in a deployed coastline video surveillance system. They are carefully selected to mostly cover all possible imaging variations, for example, different scales, hull parts, illumination, viewpoints, backgrounds, and occlusions. All images are annotated with ship-type labels and high-precision bounding boxes. Based on the SeaShips dataset, we present the performance of three detectors as a baseline to do the following: 1) elementarily summarize the difficulties of the dataset for ship detection; 2) show detection results for researchers using the dataset; and 3) make a comparison to identify the strengths and weaknesses of the baseline algorithms. In practice, the SeaShips dataset would hopefully advance research and applications on ship detection.", "title": "" }, { "docid": "313c68843b2521d553772dd024eec202", "text": "In this work we perform an analysis of probabilistic approaches to recommendation upon a different validation perspective, which focuses on accuracy metrics such as recall and precision of the recommendation list. Traditionally, state-of-art approches to recommendations consider the recommendation process from a “missing value prediction” perspective. This approach simplifies the model validation phase that is based on the minimization of standard error metrics such as RMSE. However, recent studies have pointed several limitations of this approach, showing that a lower RMSE does not necessarily imply improvements in terms of specific recommendations. We demonstrate that the underlying probabilistic framework offers several advantages over traditional methods, in terms of flexibility in the generation of the recommendation list and consequently in the accuracy of recommendation.", "title": "" }, { "docid": "87569dd8e1c695624e6513f702e94780", "text": "The object of the experiment was to verify whether cannabidiol (CBD) reduces the anxiety provoked by delta 9-THC in normal volunteers, and whether this effect occurs by a general block of the action of delta 9-THC or by a specific anxiolytic effect. 
Appropriate measurements and scales were utilized and the eight volunteers received, the following treatments in a double-blind procedure: 0.5 mg/kg delta 9-THC, 1 mg/kg CBD, a mixture containing 0.5 mg/kg delta 9-THC and 1 mg/kg CBD and placebo and diazepam (10 mg) as controls. Each volunteer received the treatments in a different sequence. It was verified that CBD blocks the anxiety provoked by delta 9-THC, however this effect also extended to marihuana-like effects and to other subjective alterations induced by delta 9-THC. This antagonism does not appear to be caused by a general block of delta 9-THC effects, since no change was detected in the pulse-rate measurements. Several further effects were observed typical of CBD and of an opposite nature to those of delta 9-THC. These results suggest that the effects of CBD, as opposed to those of delta 9-THC, might be involved in the antagonism of effects between the two cannabinoids.", "title": "" }, { "docid": "a9cafa9b8788e3fa8bcdec1a7be49582", "text": "Ensuring the safety of fully autonomous vehicles requires a multi-disciplinary approach across all the levels of functional hierarchy, from hardware fault tolerance, to resilient machine learning, to cooperating with humans driving conventional vehicles, to validating systems for operation in highly unstructured environments, to appropriate regulatory approaches. Significant open technical challenges include validating inductive learning in the face of novel environmental inputs and achieving the very high levels of dependability required for full-scale fleet deployment. However, the biggest challenge may be in creating an end-to-end design and deployment process that integrates the safety concerns of a myriad of technical specialties into a unified approach.", "title": "" }, { "docid": "ae3c79cbe4da692903210ad45e964e2f", "text": "The aim of this paper is to present a method for integration of measurements provided by inertial sensors (gyroscopes and accelerometers), GPS and a video system in order to estimate position and attitude of an UAV (Unmanned Aerial Vehicle). Inertial sensors are widely used for aircraft navigation because they represent a low cost and compact solution, but their measurements suffer of several errors which cause a rapid divergence of position and attitude estimates. To avoid divergence inertial sensors are usually coupled with other systems as for example GNSS (Global Navigation Satellite System). In this paper it is examined the possibility to couple the inertial sensors also with a camera. A camera is generally installed on-board UAVs for surveillance purposes, it presents several advantages with respect to GNSS as for example great accuracy and higher data rate. Moreover, it can be used in urban area or, more in general, where multipath effects can forbid the application of GNSS. A camera, coupled with a video processing system, can provide attitude and position (up to a scale factor), but it has lower data rate than inertial sensors and its measurements have latencies which can prejudice the performances and the effectiveness of the flight control system. 
The integration of inertial sensors with a camera allows exploiting the better features of both the systems, providing better performances in position and attitude estimation.", "title": "" }, { "docid": "804920bbd9ee11cc35e93a53b58e7e79", "text": "Narrative reports in medical records contain a wealth of information that may augment structured data for managing patient information and predicting trends in diseases. Pertinent negatives are evident in text but are not usually indexed in structured databases. The objective of the study reported here was to test a simple algorithm for determining whether a finding or disease mentioned within narrative medical reports is present or absent. We developed a simple regular expression algorithm called NegEx that implements several phrases indicating negation, filters out sentences containing phrases that falsely appear to be negation phrases, and limits the scope of the negation phrases. We compared NegEx against a baseline algorithm that has a limited set of negation phrases and a simpler notion of scope. In a test of 1235 findings and diseases in 1000 sentences taken from discharge summaries indexed by physicians, NegEx had a specificity of 94.5% (versus 85.3% for the baseline), a positive predictive value of 84.5% (versus 68.4% for the baseline) while maintaining a reasonable sensitivity of 77.8% (versus 88.3% for the baseline). We conclude that with little implementation effort a simple regular expression algorithm for determining whether a finding or disease is absent can identify a large portion of the pertinent negatives from discharge summaries.", "title": "" }, { "docid": "d06dc916942498014f9d00498c1d1d1f", "text": "In this paper we propose a state space modeling approach for trust evaluation in wireless sensor networks. In our state space trust model (SSTM), each sensor node is associated with a trust metric, which measures to what extent the data transmitted from this node would better be trusted by the server node. Given the SSTM, we translate the trust evaluation problem to be a nonlinear state filtering problem. To estimate the state based on the SSTM, a component-wise iterative state inference procedure is proposed to work in tandem with the particle filter, and thus the resulting algorithm is termed as iterative particle filter (IPF). The computational complexity of the IPF algorithm is theoretically linearly related with the dimension of the state. This property is desirable especially for high dimensional trust evaluation and state filtering problems. The performance of the proposed algorithm is evaluated by both simulations and real data analysis. Index Terms state space trust model, wireless sensor network, trust evaluation, particle filter, high dimensional. ✦", "title": "" }, { "docid": "47c05e54488884854e6bcd5170ed65e8", "text": "This work is about a novel methodology for window detection in urban environments and its multiple use in vision system applications. The presented method for window detection includes appropriate early image processing, provides a multi-scale Haar wavelet representation for the determination of image tiles which is then fed into a cascaded classifier for the task of window detection. The classifier is learned from a Gentle Adaboost driven cascaded decision tree on masked information from training imagery and is tested towards window based ground truth information which is together with the original building image databases publicly available. 
The experimental results demonstrate that single window detection is to a sufficient degree successful, e.g., for the purpose of building recognition, and, furthermore, that the classifier is in general capable to provide a region of interest operator for the interpretation of urban environments. The extraction of this categorical information is beneficial to index into search spaces for urban object recognition as well as aiming towards providing a semantic focus for accurate post-processing in 3D information processing systems. Targeted applications are (i) mobile services on uncalibrated imagery, e.g. , for tourist guidance, (ii) sparse 3D city modeling, and (iii) deformation analysis from high resolution imagery.", "title": "" }, { "docid": "39492127ee68a86b33a8a120c8c79f5d", "text": "The Alternating Direction Method of Multipliers (ADMM) has received lots of attention recently due to the tremendous demand from large-scale and data-distributed machine learning applications. In this paper, we present a stochastic setting for optimization problems with non-smooth composite objective functions. To solve this problem, we propose a stochastic ADMM algorithm. Our algorithm applies to a more general class of convex and nonsmooth objective functions, beyond the smooth and separable least squares loss used in lasso. We also demonstrate the rates of convergence for our algorithm under various structural assumptions of the stochastic function: O(1/ √ t) for convex functions and O(log t/t) for strongly convex functions. Compared to previous literature, we establish the convergence rate of ADMM for convex problems in terms of both the objective value and the feasibility violation. A novel application named GraphGuided SVM is proposed to demonstrate the usefulness of our algorithm.", "title": "" }, { "docid": "b0548d8bbb379d996db2fc726b1b40ca", "text": "Despite their enhanced marketplace visibility, validity of wearable photoplethysmographic heart rate monitoring is scarce. Forty-seven healthy participants performed seven, 6-min exercise bouts and completed a valid skin type scale. Participants wore an Omron HR500U (OHR) and a Mio Alpha (MA), two commercial wearable photoplethysmographic heart rate monitors. Data were compared to a Polar RS800CX (PRS). Means and error were calculated between devices using minutes 2-5. Compared to PRS, MA data was significantly different in walking, biking (2.41 ± 3.99 bpm and 3.26 ± 11.38 bpm, p < 0.05) and weight lifting (23.30 ± 31.94 bpm, p < 0.01). OHR differed from PRS in walking (4.95 ± 7.53 bpm, p < 0.05) and weight lifting (4.67 ± 8.95 bpm, p < 0.05). MA during elliptical, stair climbing and biking conditions demonstrated a strong correlation between jogging speed and error (r = 0.55, p < 0.0001), and showed differences in participants with less photosensitive skin.", "title": "" }, { "docid": "79cc023b70a3d2db3d3b5bb3451839a2", "text": "Cloud computing has brought a revolution in the field of computing. Apart from the merits of cloud computing, it possesses several demerits also. There are several types of attacks on cloud. All the attacks are focused on a particular layer of cloud architecture. Cloud computing architecture composed of three layers Infrastructure as a service (IaaS), platform as a service (PaaS) and Application as a service (AaaS). If Iaas is vulnerable then all above layers can't be secure. The principal concern of security in Iaas is Virtualization. There are several attacks on virtualization in IaaS layer like attack on VM image sharing. 
VM isolation violation, insecure VM migration and VM escape. In this paper all such attacks are studied and the solutions are also discussed.", "title": "" }, { "docid": "8bb0e19b03468313a52a1800a56f21db", "text": "DeSR is a statistical transition-based dependency parser which learns from annotated corpora which actions to perform for building parse trees while scanning a sentence. We describe recent improvements to the parser, in particular stacked parsing, exploiting a beam search strategy and using a Multilayer Perceptron classifier. For the Evalita 2009 Dependency Parsing task DesR was configured to use a combination of stacked parsers. The stacked combination achieved the best accuracy scores in both the main and pilot subtasks. The contribution to the result of various choices is analyzed, in particular for taking advantage of the peculiar features of the TUT Treebank.", "title": "" }, { "docid": "b5a64e072961be91e6ee92e8a6689596", "text": "Cortical bone supports and protects our skeletal functions and it plays an important in determining bone strength and fracture risks. Cortical bone segmentation is needed for quantitative analyses and the task is nontrivial for in vivo multi-row detector CT (MD-CT) imaging due to limited resolution and partial volume effects. An automated cortical bone segmentation algorithm for in vivo MD-CT imaging of distal tibia is presented. It utilizes larger contextual and topologic information of the bone using a modified fuzzy distance transform and connectivity analyses. An accuracy of 95.1% in terms of volume of agreement with true segmentations and a repeat MD-CT scan intra-class correlation of 98.2% were observed in a cadaveric study. An in vivo study involving 45 age-similar and height-matched pairs of male and female volunteers has shown that, on an average, male subjects have 16.3% thicker cortex and 4.7% increased porosity as compared to females.", "title": "" }, { "docid": "13c2c1a1bd4ff886f93d8f89a14e39e2", "text": "One of the key elements in qualitative data analysis is the systematic coding of text (Strauss and Corbin 1990:57%60; Miles and Huberman 1994:56). Codes are the building blocks for theory or model building and the foundation on which the analyst’s arguments rest. Implicitly or explicitly, they embody the assumptions underlying the analysis. Given the context of the interdisciplinary nature of research at the Centers for Disease Control and Prevention (CDC), we have sought to develop explicit guidelines for all aspects of qualitative data analysis, including codebook development.", "title": "" }, { "docid": "8c55e20ae3d116811dba74ee5da3679f", "text": "In this paper we present a Neural Network (NN) architecture for detecting grammatical errors in Statistical Machine Translation (SMT) using monolingual morpho-syntactic word representations in combination with surface and syntactic context windows. We test our approach on two language pairs and two tasks, namely detecting grammatical errors and predicting overall post-editing effort. Our results show that this approach is not only able to accurately detect grammatical errors but it also performs well as a quality estimation system for predicting overall post-editing effort, which is characterised by all types of MT errors. Furthermore, we show that this approach is portable to other languages.", "title": "" } ]
scidocsrr
3de2bb9f44e7ca53fcd55dc4e98f32ec
ANTECEDENTS AND DISTINCTIONS BETWEEN ONLINE TRUST AND DISTRUST : PREDICTING HIGH-AND LOW-RISK INTERNET BEHAVIORS
[ { "docid": "4fa7ee44cdc4b0cd439723e9600131bd", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/ucpress.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.", "title": "" }, { "docid": "30a617e3f7e492ba840dfbead690ae39", "text": "Information systems professionals must pay attention to online customer retention. Drawing on the relationship marketing literature, we formulated and tested a model to explain B2C user repurchase intention from the perspective of relationship quality. The model was empirically tested through a survey conducted in Northern Ireland. Results showed that online relationship quality and perceived website usability positively impacted customer repurchase intention. Moreover, online relationship quality was positively influenced by perceived vendor expertise in order fulfillment, perceived vendor reputation, and perceived website usability, whereas distrust in vendor behavior negatively influenced online relationship quality. Implications of these findings are discussed. 2011 Elsevier B.V. All rights reserved. § This work was partially supported by Strategic Research Grant at City University of Hong Kong, China (No. CityU 7002521), and the National Nature Science Foundation of China (No. 70773008). * Corresponding author at: P7722, City University of Hong Kong, Hong Kong, China. Tel.: +852 27887492; fax: +852 34420370. E-mail address: ylfang@cityu.edu.hk (Y. Fang).", "title": "" } ]
[ { "docid": "3f58f24dbc2d75b258c003fd6396f505", "text": "The stochastic multi-armed bandit problem is an important model for studying the explorationexploitation tradeoff in reinforcement learning. Although many algorithms for the problem are well-understood theoretically, empirical confirmation of their effectiveness is generally scarce. This paper presents a thorough empirical study of the most popular multi-armed bandit algorithms. Three important observations can be made from our results. Firstly, simple heuristics such as -greedy and Boltzmann exploration outperform theoretically sound algorithms on most settings by a significant margin. Secondly, the performance of most algorithms varies dramatically with the parameters of the bandit problem. Our study identifies for each algorithm the settings where it performs well, and the settings where it performs poorly. These properties are not described by current theory, even though they can be exploited in practice in the design of heuristics. Thirdly, the algorithms’ performance relative each to other is affected only by the number of bandit arms and the variance of the rewards. This finding may guide the design of subsequent empirical evaluations. In the second part of the paper, we turn our attention to an important area of application of bandit algorithms: clinical trials. Although the design of clinical trials has been one of the principal practical problems motivating research on multi-armed bandits, bandit algorithms have never been evaluated as potential treatment allocation strategies. Using data from a real study, we simulate the outcome that a 2001-2002 clinical trial would have had if bandit algorithms had been used to allocate patients to treatments. We find that an adaptive trial would have successfully treated at least 50% more patients, while significantly reducing the number of adverse effects and increasing patient retention. At the end of the trial, the best treatment could have still been identified with a high level of statistical confidence. Our findings demonstrate that bandit algorithms are attractive alternatives to current adaptive treatment allocation strategies.", "title": "" }, { "docid": "1da4635f5fcfe102b52a9ba9bb032def", "text": "This paper presents a corpus study of evaluative and speculative language. Knowledge of such language would be useful in many applications, such as text categorization and summarization. Analyses of annotator agreement and of characteristics of subjective language are performed. This study yields knowledge needed to design e ective machine learning systems for identifying subjective language.", "title": "" }, { "docid": "18a524545090542af81e0a66df3a1395", "text": "What does it mean for an algorithm to be biased? In U.S. law, unintentional bias is encoded via disparate impact, which occurs when a selection process has widely different outcomes for different groups, even as it appears to be neutral. This legal determination hinges on a definition of a protected class (ethnicity, gender) and an explicit description of the process.\n When computers are involved, determining disparate impact (and hence bias) is harder. It might not be possible to disclose the process. In addition, even if the process is open, it might be hard to elucidate in a legal setting how the algorithm makes its decisions. Instead of requiring access to the process, we propose making inferences based on the data it uses.\n We present four contributions. 
First, we link disparate impact to a measure of classification accuracy that while known, has received relatively little attention. Second, we propose a test for disparate impact based on how well the protected class can be predicted from the other attributes. Third, we describe methods by which data might be made unbiased. Finally, we present empirical evidence supporting the effectiveness of our test for disparate impact and our approach for both masking bias and preserving relevant information in the data. Interestingly, our approach resembles some actual selection practices that have recently received legal scrutiny.", "title": "" }, { "docid": "44582f087f9bb39d6e542ff7b600d1c7", "text": "We propose a new deterministic approach to coreference resolution that combines the global information and precise features of modern machine-learning models with the transparency and modularity of deterministic, rule-based systems. Our sieve architecture applies a battery of deterministic coreference models one at a time from highest to lowest precision, where each model builds on the previous model's cluster output. The two stages of our sieve-based architecture, a mention detection stage that heavily favors recall, followed by coreference sieves that are precision-oriented, offer a powerful way to achieve both high precision and high recall. Further, our approach makes use of global information through an entity-centric model that encourages the sharing of features across all mentions that point to the same real-world entity. Despite its simplicity, our approach gives state-of-the-art performance on several corpora and genres, and has also been incorporated into hybrid state-of-the-art coreference systems for Chinese and Arabic. Our system thus offers a new paradigm for combining knowledge in rule-based systems that has implications throughout computational linguistics.", "title": "" }, { "docid": "8b060d80674bd3f329a675f1a3f4bce2", "text": "Smartphones are ubiquitous devices that offer endless possibilities for health-related applications such as Ambient Assisted Living (AAL). They are rich in sensors that can be used for Human Activity Recognition (HAR) and monitoring. The emerging problem now is the selection of optimal combinations of these sensors and existing methods to accurately and efficiently perform activity recognition in a resource and computationally constrained environment. To accomplish efficient activity recognition on mobile devices, the most discriminative features and classification algorithms must be chosen carefully. In this study, sensor fusion is employed to improve the classification results of a lightweight classifier. Furthermore, the recognition performance of accelerometer, gyroscope and magnetometer when used separately and simultaneously on a feature-level sensor fusion is examined to gain valuable knowledge that can be used in dynamic sensing and data collection. 
Six ambulatory activities, namely, walking, running, sitting, standing, walking upstairs and walking downstairs, are inferred from low-sensor data collected from the right trousers pocket of the subjects and feature selection is performed to further optimize resource use.", "title": "" }, { "docid": "9b9a04a859b51866930b3fb4d93653b6", "text": "BACKGROUND\nResults of several studies have suggested a probable etiologic association between Epstein-Barr virus (EBV) and leukemias; therefore, the aim of this study was to investigate the association of EBV in childhood leukemia.\n\n\nMETHODS\nA direct isothermal amplification method was developed for detection of the latent membrane protein 1 (LMP1) of EBV in the peripheral blood of 80 patients with leukemia (54 had lymphoid leukemia and 26 had myeloid leukemia) and of 20 hematologically healthy control subjects.\n\n\nRESULTS\nEBV LMP1 gene transcripts were found in 29 (36.3%) of the 80 patients with leukemia but in none of the healthy controls (P < .0001). Of the 29 EBV(+) cases, 23 (79.3%), 5 (17.3%), and 1 (3.4%) were acute lymphoblastic leukemia, acute myeloid leukemia, and chronic myeloid leukemia, respectively.\n\n\nCONCLUSION\nEBV LMP1 gene transcriptional activity was observed in a significant proportion of patients with acute lymphoblastic leukemia. EBV infection in patients with lymphoid leukemia may be a factor involved in the high incidence of pediatric leukemia in the Sudan.", "title": "" }, { "docid": "438747d014f4bc65d7e5c2d7a1abaaa0", "text": "Phishing refers to fraudulent social engineering techniques used to elicit sensitive information from unsuspecting victims. In this paper, our scheme is aimed at detecting phishing mails which do not contain any links but bank on the victim's curiosity by luring them into replying with sensitive information. We exploit the common features among all such phishing emails such as non-mentioning of the victim's name in the email, a mention of monetary incentive and a sentence inducing the recipient to reply. This textual analysis can be further combined with header analysis of the email so that a final combined evaluation on the basis of both these scores can be done. We have shown that this method is far better than the existing Phishing Email Detection techniques as this covers emails without links while the pre-existing methods were based on the presumption of link(s).", "title": "" }, { "docid": "bb896fd511a8b6306cc9f2a17639cd71", "text": "We present the results of a user study that compares different ways of representing Dual-Scale data charts. Dual-Scale charts incorporate two different data resolutions into one chart in order to emphasize data in regions of interest or to enable the comparison of data from distant regions. While some design guidelines exist for these types of charts, there is currently little empirical evidence on which to base their design. We fill this gap by discussing the design space of Dual-Scale cartesian-coordinate charts and by experimentally comparing the performance of different chart types with respect to elementary graphical perception tasks such as comparing lengths and distances. 
Our study suggests that cut-out charts which include collocated full context and focus are the best alternative, and that superimposed charts in which focus and context overlap on top of each other should be avoided.", "title": "" }, { "docid": "91f1509dd2c6b22d1553b3a5a8a618e9", "text": "Witten and Frank 's textbook was one of two books that 1 used for a data mining class in the Fall o f 2001. T h e book covers all major methods o f data mining that p roduce a knowledge representa t ion as output . Knowledge representa t ion is hereby unders tood as a representat ion that can be studied, unders tood, and interpreted by human beings, at least in principle. Thus , neural networks and genetic a lgor i thms are excluded f rom the topics of this textbook. We need to say \"can be unders tood in pr inciple\" because a large decision tree or a large rule set may be as hard to interpret as a neural network.", "title": "" }, { "docid": "c227cae0ec847a227945f1dec0b224d2", "text": "We present a highly flexible and efficient software pipeline for programmable triangle voxelization. The pipeline, entirely written in CUDA, supports both fully conservative and thin voxelizations, multiple boolean, floating point, vector-typed render targets, user-defined vertex and fragment shaders, and a bucketing mode which can be used to generate 3D A-buffers containing the entire list of fragments belonging to each voxel. For maximum efficiency, voxelization is implemented as a sort-middle tile-based rasterizer, while the A-buffer mode, essentially performing 3D binning of triangles over uniform grids, uses a sort-last pipeline. Despite its major flexibility, the performance of our tile-based rasterizer is always competitive with and sometimes more than an order of magnitude superior to that of state-of-the-art binary voxelizers, whereas our bucketing system is up to 4 times faster than previous implementations. In both cases the results have been achieved through the use of careful load-balancing and high performance sorting primitives.", "title": "" }, { "docid": "a70e664e2fcea37836cc55096295c4f4", "text": "This article reviews published data on familial recurrent hydatidiform mole with particular reference to the genetic basis of this condition, the likely outcome of subsequent pregnancies in affected women and the risk of persistent trophoblastic disease following molar pregnancies in these families. Familial recurrent hydatidiform mole is characterized by recurrent complete hydatidiform moles of biparental, rather than the more usual androgenetic, origin. Although the specific gene defect in these families has not been identified, genetic mapping has shown that in most families the gene responsible is located in a 1.1 Mb region on chromosome 19q13.4. Mutations in this gene result in dysregulation of imprinting in the female germ line with abnormal development of both embryonic and extraembryonic tissue. Subsequent pregnancies in women diagnosed with this condition are likely to be complete hydatidiform moles. In 152 pregnancies in affected women, 113 (74%) were complete hydatidiform moles, 26 (17%) were miscarriages, 6 (4%) were partial hydatidiform moles, and 7 (5%) were normal pregnancies. 
Molar pregnancies in women with familial recurrent hydatidiform mole have a risk of progressing to persistent trophoblastic disease similar to that of androgenetic complete hydatidiform mole.", "title": "" }, { "docid": "057df3356022c31db27b1f165c827524", "text": "Eating disorders in dancers are thought to be common, but the exact rates remain to be clarified. The aim of this study is to systematically compile and analyse the rates of eating disorders in dancers. A literature search, appraisal and meta-analysis were conducted. Thirty-three relevant studies were published between 1966 and 2013 with sufficient data for extraction. Primary data were extracted as raw numbers or confidence intervals. Risk ratios and 95% confidence intervals were calculated for controlled studies. The overall prevalence of eating disorders was 12.0% (16.4% for ballet dancers), 2.0% (4% for ballet dancers) for anorexia, 4.4% (2% for ballet dancers) for bulimia and 9.5% (14.9% for ballet dancers) for eating disorders not otherwise specified (EDNOS). The dancer group had higher mean scores on the EAT-26 and the Eating Disorder Inventory subscales. Dancers, in general, had a higher risk of suffering from eating disorders in general, anorexia nervosa and EDNOS, but no higher risk of suffering from bulimia nervosa. The study concluded that as dancers had a three times higher risk of suffering from eating disorders, particularly anorexia nervosa and EDNOS, specifically designed services for this population should be considered.", "title": "" }, { "docid": "8c24f4e178ebe403da3f90f05b97ac17", "text": "The success of the Human Genome Project and the powerful tools of molecular biology have ushered in a new era of medicine and nutrition. The pharmaceutical industry expects to leverage data from the Human Genome Project to develop new drugs based on the genetic constitution of the patient; likewise, the food industry has an opportunity to position food and nutritional bioactives to promote health and prevent disease based on the genetic constitution of the consumer. This new era of molecular nutrition--that is, nutrient-gene interaction--can unfold in dichotomous directions. One could focus on the effects of nutrients or food bioactives on the regulation of gene expression (ie, nutrigenomics) or on the impact of variations in gene structure on one's response to nutrients or food bioactives (ie, nutrigenetics). The challenge of the public health nutritionist will be to balance the needs of the community with those of the individual. In this regard, the excitement and promise of molecular nutrition should be tempered by the need to validate the scientific data emerging from the disciplines of nutrigenomics and nutrigenetics and the need to educate practitioners and communicate the value to consumers-and to do it all within a socially responsible bioethical framework.", "title": "" }, { "docid": "e440ad1afbbfbf5845724fd301051d92", "text": "The paper considers the conceptual approach for organization of the vertical hierarchical links between the scalable distributed computing paradigms: Cloud Computing, Fog Computing and Dew Computing. In this paper, the Dew Computing is described and recognized as a new structural layer in the existing distributed computing hierarchy. In the existing computing hierarchy, the Dew computing is positioned as the ground level for the Cloud and Fog computing paradigms. 
Vertical, complementary, hierarchical division from Cloud to Dew Computing satisfies the needs of high- and low-end computing demands in everyday life and work. These new computing paradigms lower the cost and improve the performance, particularly for concepts and applications such as the Internet of Things (IoT) and the Internet of Everything (IoE). In addition, the Dew computing paradigm will require new programming models that will efficiently reduce the complexity and improve the productivity and usability of scalable distributed computing, following the principles of High-Productivity computing.", "title": "" }, { "docid": "d30e2123e3c21823263ceadf3b332485", "text": "In this paper a fractional order proportional-derivative (FO-PD) control strategy is presented and applied to AR. Drone quadrotor system. The controller parameters are calculated based on specifying a certain gain crossover frequency, a phase margin and a robustness to gain variations. Its performance is compared against two other integer order controllers; i) Extended Prediction Self-Adaptive Control (EPSAC) approach to Model Predictive Control (MPC) ii) Integer order PD controller. The closed loop control simulations applied on the AR. Drone system indicate the proposed controller outperforms the integer order PD control. Additionally, the proposed controller has less complexity but similar performance as MPC based control.", "title": "" }, { "docid": "055e41fd6ace430ea9593a30e3dd02d2", "text": "Every day we are exposed to different ideas, or memes, competing with each other for our attention. Previous research explained popularity and persistence heterogeneity of memes by assuming them in competition for limited attention resources, distributed in a heterogeneous social network. Little has been said about what characteristics make a specific meme more likely to be successful. We propose a similarity-based explanation: memes with higher similarity to other memes have a significant disadvantage in their potential popularity. We employ a meme similarity measure based on semantic text analysis and computer vision to prove that a meme is more likely to be successful and to thrive if its characteristics make it unique. Our results show that indeed successful memes are located in the periphery of the meme similarity space and that our similarity measure is a promising predictor of a meme success.", "title": "" }, { "docid": "6706ad68059944988c41ba96e6d67f7c", "text": "This paper investigates the motives, behavior, and characteristics shaping mutual fund managers’ willingness to incorporate Environmental, Social and Governance (ESG) issues into investment decision making. Using survey evidence from fund managers from five different countries, we demonstrate that this predisposition is the stronger, the shorter their average forecasting horizon and the higher their level of reliance on business risk in portfolio management is. We also find that the propensity to incorporate ESG factors is positively related to an increasing level of risk aversion, an increasing importance of salary change and senior management approval/disapproval as motivating factors as well as length of professional experience in current fund and increasing significance of assessment by superiors in remuneration. Overall, our evidence suggests that ESG diligence among fund managers serves mainly as a method for mitigating risk and is typically motivated by herding; it is much less important as a tool for additional value creation. 
The prevalent use of ESG criteria in mitigating risk is in contrast with traditional approach, but it is in line with behavioral finance theory. Additionally, our results also show a strong difference in the length of the forecasting horizon between continental European and Anglo-Saxon fund managers.", "title": "" }, { "docid": "27a4b74d3c47fc25a8564cd824aa9e66", "text": "Grid computing is increasingly considered as a promising next-generation computational platform that supports wide-area parallel and distributed computing. In grid environments, applications are always regarded as workflows. The problem of scheduling workflows in terms of certain quality of service (QoS) requirements is challenging and it significantly influences the performance of grids. By now, there have been some algorithms for grid workflow scheduling, but most of them can only tackle the problems with a single QoS parameter or with small-scale workflows. In this frame, this paper aims at proposing an ant colony optimization (ACO) algorithm to schedule large-scale workflows with various QoS parameters. This algorithm enables users to specify their QoS preferences as well as define the minimum QoS thresholds for a certain application. The objective of this algorithm is to find a solution that meets all QoS constraints and optimizes the user-preferred QoS parameter. Based on the characteristics of workflow scheduling, we design seven new heuristics for the ACO approach and propose an adaptive scheme that allows artificial ants to select heuristics based on pheromone values. Experiments are done in ten workflow applications with at most 120 tasks, and the results demonstrate the effectiveness of the proposed algorithm.", "title": "" }, { "docid": "e3c7135441b17701caa4f2fee71837be", "text": "We dissected 50 head halves of 25 Japanese cadavers (10 males, 15 females) to investigate the innervations of the levator veli palatini (LVP) and superior constrictor pharyngis. The branches supplying the LVP were classified into the following three types according to their origins: supplying branches that originated from the pharyngeal branch of the glossopharyngeal nerve (type I, four sides, 8%), branches that originated from a communicating branch between the pharyngeal branches of the glossopharyngeal and vagus nerves (type II, 36 sides, 72%), and those that originated from the pharyngeal branch of the vagus nerve (type III, 10 sides, 20%). In previous studies, supplying branches of type I were seldom described. Regarding the innervation of the superior constrictor, some variations were observed, and we consider it likely that there is a close relationship between these variations and the type of innervation of the LVP.", "title": "" }, { "docid": "f827c29bb9dd6073e626b7457775000c", "text": "Inter vehicular communication is a technology where vehicles act as different nodes to form a network. In a vehicular network different vehicles communicate among each other via wireless access .Authentication is very crucial security service for inter vehicular communication (IVC) in Vehicular Information Network. It is because, protecting vehicles from any attempt to cause damage (misuse) to their private data and the attacks on their privacy. In this survey paper, we investigate the authentication issues for vehicular information network architecture based on the communication principle of named data networking (NDN). This paper surveys the most emerging paradigm of NDN in vehicular information network. 
So, we hope this survey paper helps to improve content naming, addressing, data aggregation and mobility for IVC in the vehicular information network.", "title": "" } ]
scidocsrr
81b5af61dbba7e07ae4a6e0e8b97d1bf
Cross domain distribution adaptation via kernel mapping
[ { "docid": "3bb905351ce1ea2150f37059ed256a90", "text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.", "title": "" }, { "docid": "dc4c5cfb41bfdb84c56183601f922b4f", "text": "Sample selection bias is a common problem encountered when using data mining algorithms for many real-world applications. Traditionally, it is assumed that training and test data are sampled from the same probability distribution, the so called “stationary or non-biased distribution assumption.” However, this assumption is often violated in reality. Typical examples include marketing solicitation, fraud detection, drug testing, loan approval, school enrollment, etc. For these applications the only labeled data available for training is a biased representation, in various ways, of the future data on which the inductive model will predict. Intuitively, some examples sampled frequently into the training data may actually be infrequent in the testing data, and vice versa. When this happens, an inductive model constructed from biased training set may not be as accurate on unbiased testing data if there had not been any selection bias in the training data. In this paper, we first improve and clarify a previously proposed categorization of sample selection bias. In particular, we show that unless under very restricted conditions, sample selection bias is a common problem for many real-world situations. We then analyze various effects of sample selection bias on inductive modeling, in particular, how the “true” conditional probability P (y|x) to be modeled by inductive learners can be misrepresented in the biased training data, that subsequently misleads a learning algorithm. To solve inaccuracy problems due to sample selection bias, we explore how to use model averaging of (1) conditional probabilities P (y|x), (2) feature probabilities P (x), and (3) joint probabilities, P (x, y), to reduce the influence of sample selection bias on model accuracy. In particular, we explore on how to use unlabeled data in a semi-supervised learning framework to improve the accuracy of descriptive models constructed from biased training samples. IBM T.J.Watson Research Center, Hawthorne, NY 10532, weifan@us.ibm.com Department of Computer Science, University at Albany, State University of New York, Albany, NY 12222, davidson@cs.albany.edu", "title": "" } ]
[ { "docid": "86700e13b16936e2e30f0e60a5062b7a", "text": "With consumer concerns increasing over food quality and safety, the food industry has begun to pay much more attention to the development of rapid and reliable food-evaluation systems over the years. As a result, there is a great need for manufacturers and retailers to operate effective real-time assessments for food quality and safety during food production and processing. Computer vision, comprising a nondestructive assessment approach, has the aptitude to estimate the characteristics of food products with its advantages of fast speed, ease of use, and minimal sample preparation. Specifically, computer vision systems are feasible for classifying food products into specific grades, detecting defects, and estimating properties such as color, shape, size, surface defects, and contamination. Therefore, in order to track the latest research developments of this technology in the agri-food industry, this review aims to present the fundamentals and instrumentation of computer vision systems with details of applications in quality assessment of agri-food products from 2007 to 2013 and also discuss its future trends in combination with spectroscopy.", "title": "" }, { "docid": "87e56672751a8eb4d5a08f0459e525ca", "text": "— The Internet of Things (IoT) has transformed many aspects of modern manufacturing, from design to production to quality control. In particular, IoT and digital manufacturing technologies have substantially accelerated product development cycles and manufacturers can now create products of a complexity and precision not heretofore possible. New threats to supply chain security have arisen from connecting machines to the Internet and introducing complex IoT-based systems controlling manufacturing processes. By attacking these IoT-based manufacturing systems and tampering with digital files, attackers can manipulate physical characteristics of parts and change the dimensions, shapes, or mechanical properties of the parts, which can result in parts that fail in the field. These defects increase manufacturing costs and allow silent problems to occur only under certain loads that can threaten safety and/or lives. To understand potential dangers and protect manufacturing system safety, this paper presents two taxonomies: one for classifying cyber-physical attacks against manufacturing processes and another for quality control measures for counteracting these attacks. We systematically identify and classify possible cyber-physical attacks and connect the attacks with variations in manufacturing processes and quality control measures. Our tax-onomies also provide a scheme for linking emerging IoT-based manufacturing system vulnerabilities to possible attacks and quality control measures.", "title": "" }, { "docid": "b6d47dc227f767009c40599f65e25c5f", "text": "Radio frequency (RF) tomography is proposed to detect underground voids, such as tunnels or caches, over relatively wide areas of regard. The RF tomography approach requires a set of low-cost transmitters and receivers arbitrarily deployed on the surface of the ground or slightly buried. Using the principles of inverse scattering and diffraction tomography, a simplified theory for below-ground imaging is developed. In this paper, the principles and motivations in support of RF tomography are introduced. Furthermore, several inversion schemes based on arbitrarily deployed sensors are devised. Then, limitations to performance and system considerations are discussed. 
Finally, the effectiveness of RF tomography is demonstrated by presenting images reconstructed via the processing of synthetic data.", "title": "" }, { "docid": "182dc182f7c814c18cb83a0515149cec", "text": "This paper discusses about methods for detection of leukemia. Various image processing techniques are used for identification of red blood cell and immature white cells. Different diseases like anemia, leukemia, malaria, deficiency of vitamin B12, etc. can be diagnosed accordingly. Objective is to detect the leukemia affected cells and count it. According to detection of immature blast cells, leukemia can be identified and also define that either it is chronic or acute. To detect immature cells, number of methods are used like histogram equalization, linear contrast stretching, some morphological techniques like area opening, area closing, erosion, dilation. Watershed transform, K means, histogram equalization & linear contrast stretching, and shape based features are accurate 72.2%, 72%, 73.7% and 97.8% respectively.", "title": "" }, { "docid": "c04cf54a40cd84961657bf50153ff68b", "text": "Neural IR models, such as DRMM and PACRR, have achieved strong results by successfully capturing relevance matching signals. We argue that the context of these matching signals is also important. Intuitively, when extracting, modeling, and combining matching signals, one would like to consider the surrounding text (local context) as well as other signals from the same document that can contribute to the overall relevance score. In this work, we highlight three potential shortcomings caused by not considering context information and propose three neural ingredients to address them: a disambiguation component, cascade k-max pooling, and a shuffling combination layer. Incorporating these components into the PACRR model yields Co-PACRR, a novel context-aware neural IR model. Extensive comparisons with established models on TREC Web Track data confirm that the proposed model can achieve superior search results. In addition, an ablation analysis is conducted to gain insights into the impact of and interactions between different components. We release our code to enable future comparisons.", "title": "" }, { "docid": "299fb603f3a87d88e7fe8eeb7cf73089", "text": "Interest in using artificial neural networks (ANNs) for forecasting has led to a tremendous surge in research activities in the past decade. While ANNs provide a great deal of promise, they also embody much uncertainty. Researchers to date are still not certain about the effect of key factors on forecasting performance of ANNs. This paper presents a state-of-the-art survey of ANN applications in forecasting. Our purpose is to provide (1) a synthesis of published research in this area, (2) insights on ANN modeling issues, and (3) the future research directions. © 1998 Elsevier Science B.V.", "title": "" }, { "docid": "5abd6e6e5cfe38808c0dcfe52a32ea94", "text": "OBJECTIVE\nTo test the hypothesis that 8 wks of partial weight-bearing gait retraining improves functional ambulation to a greater extent than traditional physical therapy in individuals after traumatic brain injury.\n\n\nDESIGN\nA randomized, open-label, controlled, cohort study was conducted at two inpatient university-based rehabilitation hospitals. 
A total of 38 adults with a primary diagnosis of traumatic brain injury and significant gait abnormalities received either 8 wks of standard physical therapy or physical therapy supplemented with partial weight-bearing gait training twice weekly.\n\n\nRESULTS\nSignificant (P < 0.05) improvements were detected in both groups on Functional Ambulation Category, Standing Balance Scale, Rivermead Mobility Index, and FIM. However, no differences were found between the treatment groups.\n\n\nCONCLUSIONS\nResults did not support the hypothesis that 8 wks of partial weight-bearing gait retraining improves functional ambulation to a greater extent than traditional physical therapy in individuals after traumatic brain injury based on common clinical measures.", "title": "" }, { "docid": "1e7f14531caad40797594f9e4c188697", "text": "The Drosophila melanogaster germ plasm has become the paradigm for understanding both the assembly of a specific cytoplasmic localization during oogenesis and its function. The posterior ooplasm is necessary and sufficient for the induction of germ cells. For its assembly, localization of gurken mRNA and its translation at the posterior pole of early oogenic stages is essential for establishing the posterior pole of the oocyte. Subsequently, oskar mRNA becomes localized to the posterior pole where its translation leads to the assembly of a functional germ plasm. Many gene products are required for producing the posterior polar plasm, but only oskar, tudor, valois, germcell-less and some noncoding RNAs are required for germ cell formation. A key feature of germ cell formation is the precocious segregation of germ cells, which isolates the primordial germ cells from mRNA turnover, new transcription, and continued cell division. nanos is critical for maintaining the transcription quiescent state and it is required to prevent transcription of Sex-lethal in pole cells. In spite of the large body of information about the formation and function of the Drosophila germ plasm, we still do not know what specifically is required to cause the pole cells to be germ cells. A series of unanswered problems is discussed in this chapter.", "title": "" }, { "docid": "8b1bd5243d4512324e451a780c1ec7d3", "text": "If you get the printed book in on-line book store, you may also find the same problem. So, you must move store to store and search for the available there. But, it will not happen here. The book that we will offer right here is the soft file concept. This is what make you can easily find and get this fundamentals of computer security by reading this site. We offer you the best product, always and always.", "title": "" }, { "docid": "1565eecf55648e2d89585e527f33e75a", "text": "In recent years, there has been a growing trend of mandating high-power conversion efficiency, for not only the heavy-load but also the light-load conditions. To achieve this purpose, a ripple-based constant on-time (RBCOT) control for dc-dc converters has received wide attentions because of its natural characteristic of switching frequency reduction under the light-load condition. However, a RBCOT control suffers from an output-voltage offset problem and a subharmonic instability problem. In this paper, a modified RBCOT buck converter circuit is proposed to solve both problems. The circuit uses the concept of virtual inductor current to stabilize the feedback, and an offset-cancellation circuit to eliminate the output dc offset. 
The modified circuit can be fabricated into an integrated circuit (IC) without adding any pin compared to conventional circuits. A control model based on describing function is developed for the modified converter. The small-signal characteristics and design criteria to meet stability are derived. From the model, it is also found out that it is much easier to accomplish adaptive voltage positioning using the proposed modified RBCOT scheme compared to a conventional constant-frequency controller. Simulation and experimental results are given to verify the proposed scheme.", "title": "" }, { "docid": "c13c97749874fd32972f6e8b75fd20d1", "text": "Text categorization is the task of automatically assigning unlabeled text documents to some predefined category labels by means of an induction algorithm. Since the data in text categorization are high-dimensional, feature selection is broadly used in text categorization systems for reducing the dimensionality. In the literature, there are some widely known metrics such as information gain and document frequency thresholding. Recently, a generative graphical model called latent dirichlet allocation (LDA) that can be used to model and discover the underlying topic structures of textual data, was proposed. In this paper, we use the hidden topic analysis of LDA for feature selection and compare it with the classical feature selection metrics in text categorization. For the experiments, we use SVM as the classifier and tf∗idf weighting for weighting the terms. We observed that almost in all metrics, information gain performs best at all keyword numbers while the LDA-based metrics perform similar to chi-square and document frequency thresholding.", "title": "" }, { "docid": "29a2c5082cf4db4f4dde40f18c88ca85", "text": "Human astrocytes are larger and more complex than those of infraprimate mammals, suggesting that their role in neural processing has expanded with evolution. To assess the cell-autonomous and species-selective properties of human glia, we engrafted human glial progenitor cells (GPCs) into neonatal immunodeficient mice. Upon maturation, the recipient brains exhibited large numbers and high proportions of both human glial progenitors and astrocytes. The engrafted human glia were gap-junction-coupled to host astroglia, yet retained the size and pleomorphism of hominid astroglia, and propagated Ca2+ signals 3-fold faster than their hosts. Long-term potentiation (LTP) was sharply enhanced in the human glial chimeric mice, as was their learning, as assessed by Barnes maze navigation, object-location memory, and both contextual and tone fear conditioning. Mice allografted with murine GPCs showed no enhancement of either LTP or learning. These findings indicate that human glia differentially enhance both activity-dependent plasticity and learning in mice.", "title": "" }, { "docid": "4982f8861eb55d98b2fd577f8dee9bd0", "text": "We introduce the Treebank of Learner English (TLE), the first publicly available syntactic treebank for English as a Second Language (ESL). The TLE provides manually annotated POS tags and Universal Dependency (UD) trees for 5,124 sentences from the Cambridge First Certificate in English (FCE) corpus. The UD annotations are tied to a pre-existing error annotation of the FCE, whereby full syntactic analyses are provided for both the original and error corrected versions of each sentence. Further on, we delineate ESL annotation guidelines that allow for consistent syntactic treatment of ungrammatical English. 
Finally, we benchmark POS tagging and dependency parsing performance on the TLE dataset and measure the effect of grammatical errors on parsing accuracy. We envision the treebank to support a wide range of linguistic and computational research on second language acquisition as well as automatic processing of ungrammatical language.", "title": "" }, { "docid": "ff584005caf32a0d4e5b8101c0df43e3", "text": "Computing resources including mobile devices at the edge of a network are increasingly connected and capable of collaboratively processing what's believed to be too complex for them. Collaboration possibilities with today's feature-rich mobile devices go far beyond simple media content sharing, traditional video conferencing and cloud-based software as a service. The realization of these possibilities for mobile edge computing (MEC) requires non-trivial amounts of efforts in enabling multi-device resource sharing. The current practice of mobile collaborative application development remains largely at the application level. In this paper, we present CollaboRoid, a platform-level solution that provides a set of system services for mobile collaboration. CollaboRoid's platform-level design significantly eases the development of mobile collaborative applications promoting MEC. In particular, it abstracts the sharing of not only hardware resources, but also software resources and multimedia contents between multiple heterogeneous mobile devices. We implement CollaboRoid in the application framework layer of the Android stack and evaluate it with several collaboration scenarios on Nexus 5 and 7 devices. Our experimental results show the feasibility of the platform-level collaboration using CollaboRoid in terms of the latency and energy consumption.", "title": "" }, { "docid": "15177e509ccdb8a3a4d28d24e02fc627", "text": "Extensive evaluation on a large number of word embedding models for language processing applications is conducted in this work. First, we introduce popular word embedding models and discuss desired properties of word models and evaluation methods (or evaluators). Then, we categorize evaluators into intrinsic and extrinsic two types. Intrinsic evaluators test the quality of a representation independent of specific natural language processing tasks while extrinsic evaluators use word embeddings as input features to a downstream task and measure changes in performance metrics specific to that task. We report experimental results of intrinsic and extrinsic evaluators on six word embedding models. It is shown that different evaluators focus on different aspects of word models, and some are more correlated with natural language processing tasks. Finally, we adopt correlation analysis to study performance consistency of extrinsic and intrinsic evaluators.", "title": "" }, { "docid": "fbc3afe22ed7c2cc6d60be5fcb906b90", "text": "The thud of a bouncing ball, the onset of speech as lips open — when visual and audio events occur together, it suggests that there might be a common, underlying event that produced both signals. In this paper, we argue that the visual and audio components of a video signal should be modeled jointly using a fused multisensory representation. We propose to learn such a representation in a self-supervised way, by training a neural network to predict whether video frames and audio are temporally aligned. We use this learned representation for three applications: (a) sound source localization, i.e. 
visualizing the source of sound in a video; (b) audio-visual action recognition; and (c) on/offscreen audio source separation, e.g. removing the off-screen translator’s voice from a foreign official’s speech. Code, models, and video results are available on our webpage: http://andrewowens.com/multisensory.", "title": "" }, { "docid": "1ce2a5e4aafed56039597524f59e2bcc", "text": "Statistical mediation methods provide valuable information about underlying mediating psychological processes, but the ability to infer that the mediator variable causes the outcome variable is more complex than widely known. Researchers have recently emphasized how violating assumptions about confounder bias severely limits causal inference of the mediator to dependent variable relation. Our article describes and addresses these limitations by drawing on new statistical developments in causal mediation analysis. We first review the assumptions underlying causal inference and discuss three ways to examine the effects of confounder bias when assumptions are violated. We then describe four approaches to address the influence of confounding variables and enhance causal inference, including comprehensive structural equation models, instrumental variable methods, principal stratification, and inverse probability weighting. Our goal is to further the adoption of statistical methods to enhance causal inference in mediation studies.", "title": "" }, { "docid": "f70ab6ad03609ff4388a2e78c8891b31", "text": "The paper describes the design, collection, transcription and analysis of 200 hours of HKUST Mandarin Telephone Speech Corpus (HKUST/MTS) from over 2100 Mandarin speakers in mainland China under the DARPA EARS framework. The corpus includes speech data, transcriptions and speaker demographic information. The speech data include 1206 ten-minute natural Mandarin conversations between either strangers or friends. Each conversation focuses on a single topic. All calls are recorded over public telephone networks. All calls are manually annotated with standard Chinese characters (GBK) as well as specific mark-ups for spontaneous speech. A file with speaker demographic information is also provided. The corpus is the largest and first of its kind for Mandarin conversational telephone speech, providing abundant and diversified samples for Mandarin speech recognition and other applicationdependent tasks, such as topic detection, information retrieval, keyword spotting, speaker recognition, etc. In a 2004 evaluation test by NIST, the corpus is found to improve system performance quite significantly.", "title": "" }, { "docid": "1f3159097ddf38968e8fe03b7391fce5", "text": "Participants presented with auditory, visual, or bi-sensory audio–visual stimuli in a speeded discrimination task, fail to respond to the auditory component of the bi-sensory trials significantly more often than they fail to respond to the visual component—a ‘visual dominance’ effect. The current study investigated further the sensory dominance phenomenon in all combinations of auditory, visual and haptic stimuli. We found a similar visual dominance effect also in bi-sensory trials of combined haptic–visual stimuli, but no bias towards either sensory modality in bi-sensory trials of haptic–auditory stimuli. 
When presented with tri-sensory trials of combined auditory–visual–haptic stimuli, participants made more errors of responding only to two corresponding sensory signals than errors of responding only to a single sensory modality, however, there were no biases towards either sensory modality (or sensory pairs) in the distribution of both types of errors (i.e. responding only to a single stimulus or to pairs of stimuli). These results suggest that while vision can dominate both the auditory and the haptic sensory modalities, it is limited to bi-sensory combinations in which the visual signal is combined with another single stimulus. However, in a tri-sensory combination when a visual signal is presented simultaneously with both the auditory and the haptic signals, the probability of missing two signals is much smaller than of missing only one signal and therefore the visual dominance disappears.", "title": "" }, { "docid": "494dcf7cda0cc849b8bc9d45d14e82f9", "text": "The paper addresses the problem of using Japanese candlestick methodology to analyze stock or forex market data by neural nets. Self organizing maps are presented as tools for providing maps of known candlestick formations. They may be used to visualize these patterns, and as inputs for more complex trading decision systems. In that case their role is preprocessing, coding and pre-classification of price data. An example of a profitable system based on this method is presented. Simplicity and efficiency of training and network simulating algorithms is emphasized in the context of processing streams of market data.", "title": "" } ]
scidocsrr
1cbcb247311ad2023609b1a76b826e12
Image denoising with block-matching and 3D filtering
[ { "docid": "c6a44d2313c72e785ae749f667d5453c", "text": "Donoho and Johnstone (1992a) proposed a method for reconstructing an unknown function f on [0; 1] from noisy data di = f(ti) + zi, i = 0; : : : ; n 1, ti = i=n, zi iid N(0; 1). The reconstruction f̂ n is de ned in the wavelet domain by translating all the empirical wavelet coe cients of d towards 0 by an amount p 2 log(n) = p n. We prove two results about that estimator. [Smooth]: With high probability f̂ n is at least as smooth as f , in any of a wide variety of smoothness measures. [Adapt]: The estimator comes nearly as close in mean square to f as any measurable estimator can come, uniformly over balls in each of two broad scales of smoothness classes. These two properties are unprecedented in several ways. Our proof of these results develops new facts about abstract statistical inference and its connection with an optimal recovery model.", "title": "" } ]
[ { "docid": "715d5bc3c7a9b4ff9008c609bb79100c", "text": "A new direct method for calculating the electrostatic force of electroadhesive robots generated by interdigital electrodes is presented. Here, series expansion is employed to express the spatial potential, and point matching method is used in dealing with some boundary conditions. The attraction force is calculated using the Maxwell stress tensor formula. The accuracy of this method is verified through comparing our results with that of simulation work as well as reported experimental data, the agreement is found to be very good.", "title": "" }, { "docid": "ca561ab257c495cd2e3e26db0d78cab7", "text": "Congenital cystic eye (anophthalmia with cyst) is an extremely rare anomaly discovered at birth with few reported cases in the literature, resulting from partial or complete failure during invagination of the primary optic vesicle during fetal development. Herein we present the radiographic, ultrasound, and magnetic resonance imaging findings of a unique case of congenital cystic eye associated with dermal appendages and advanced intracranial congenital anomalies in a 3-month-old boy.", "title": "" }, { "docid": "f20391d5eb79b32f06d31d27ad51bb6c", "text": "Fanconi anemia (FA) is a recessively inherited disease characterized by multiple symptoms including growth retardation, skeletal abnormalities, and bone marrow failure. The FA diagnosis is complicated due to the fact that the clinical manifestations are both diverse and variable. A chromosomal breakage test using a DNA cross-linking agent, in which cells from an FA patient typically exhibit an extraordinarily sensitive response, has been considered the gold standard for the ultimate diagnosis of FA. In the majority of FA patients the test results are unambiguous, although in some cases the presence of hematopoietic mosaicism may complicate interpretation of the data. However, some diagnostic overlap with other syndromes has previously been noted in cases with Nijmegen breakage syndrome. Here we present results showing that misdiagnosis may also occur with patients suffering from two of the three currently known cohesinopathies, that is, Roberts syndrome (RBS) and Warsaw breakage syndrome (WABS). This complication may be avoided by scoring metaphase chromosomes-in addition to chromosomal breakage-for spontaneously occurring premature centromere division, which is characteristic for RBS and WABS, but not for FA.", "title": "" }, { "docid": "78fc46165449f94e75e70a2654abf518", "text": "This paper presents a non-photorealistic rendering technique that automatically generates a line drawing from a photograph. We aim at extracting a set of coherent, smooth, and stylistic lines that effectively capture and convey important shapes in the image. We first develop a novel method for constructing a smooth direction field that preserves the flow of the salient image features. We then introduce the notion of flow-guided anisotropic filtering for detecting highly coherent lines while suppressing noise. Our method is simple and easy to implement. A variety of experimental results are presented to show the effectiveness of our method in producing self-contained, high-quality line illustrations.", "title": "" }, { "docid": "5708cf4823fd2d4e6110fde622530f25", "text": "Electric pulse field methods are wildly used in biotechnology and industrial applications. A pulse generator (PG) is the device that provides the desired electric pulse field. 
In this paper, a novel ac-powered PG (ACPG) that is directly powered by normal ac power (i.e., utility power) is proposed. Since there is no high-voltage dc power supply in the proposed ACPG, the circuit complexity and cost are effectively reduced. Furthermore, the voltage and duration of the generated pulse of the ACPG can easily be controlled to match specific applications. In order to assess the performance of the proposed ACPG, a prototype for providing a unipolar and a bipolar pulse train is designed and implemented. Experimental results show that the ACPG can actually generate a 0-1040-V pulse with a 750-ns to 118-mus duration without overshooting and ringing", "title": "" }, { "docid": "1d4b1015a319612ea802c9179c73c15e", "text": "Nowadays, recommender systems provide essential web services on the Internet. There are mainly two categories of traditional recommendation algorithms: Content-Based (CB) and Collaborative Filtering (CF). CF methods make recommendations mainly according to the historical feedback information. They usually perform better when there is sufficient feedback information but less successful on new users and items, which is called the \"cold-start'' problem. However, CB methods help in this scenario because of using content information. To take both advantages of CF and CB, how to combine them is a challenging issue. To the best of our knowledge, little previous work has been done to solve the problem in one unified recommendation model. In this work, we study how to integrate CF and CB, which utilizes both types of information in model-level but not in result-level and makes recommendations adaptively. A novel attention-based model named Attentional Content&Collaborate Model (ACCM) is proposed. Attention mechanism helps adaptively adjust for each user-item pair from which source information the recommendation is made. Especially, a \"cold sampling'' learning strategy is designed to handle the cold-start problem. Experimental results on two benchmark datasets show that the ACCM performs better on both warm and cold tests compared to the state-of-the-art algorithms.", "title": "" }, { "docid": "19e09b1c0eb3646e5ae6484524f82e10", "text": "Results from 12 switchback field trials involving 1216 cows were combined to assess the effects of a protected B vitamin blend (BVB) upon milk yield (kg), fat percentage (%), protein %, fat yield (kg) and protein yield (kg) in primiparous and multiparous cows. Trials consisted of 3 test periods executed in the order control-test-control. No diet changes other than the inclusion of 3 grams/cow/ day of the BVB during the test period occurred. Means from the two control periods were compared to results obtained during the test period using a paired T test. Cows include in the analysis were between 45 and 300 days in milk (DIM) at the start of the experiment and were continuously available for all periods. The provision of the BVB resulted in increased (P < 0.05) milk, fat %, protein %, fat yield and protein yield. Regression models showed that the amount of milk produced had no effect upon the magnitude of the increase in milk components. The increase in milk was greatest in early lactation and declined with DIM. Protein and fat % increased with DIM in mature cows, but not in first lactation cows. Differences in fat yields between test and control feeding periods did not change with DIM, but the improvement in protein yield in mature cows declined with DIM. 
These results indicate that the BVB provided economically important advantages throughout lactation, but expected results would vary with cow age and stage of lactation.", "title": "" }, { "docid": "82d62feaa0c88789c44bbdc745ab21dc", "text": "This paper proposes a new approach to solve the problem of real-time vision-based hand gesture recognition with the combination of statistical and syntactic analyses. The fundamental idea is to divide the recognition problem into two levels according to the hierarchical property of hand gestures. The lower level of the approach implements the posture detection with a statistical method based on Haar-like features and the AdaBoost learning algorithm. With this method, a group of hand postures can be detected in real time with high recognition accuracy. The higher level of the approach implements the hand gesture recognition using the syntactic analysis based on a stochastic context-free grammar. The postures that are detected by the lower level are converted into a sequence of terminal strings according to the grammar. Based on the probability that is associated with each production rule, given an input string, the corresponding gesture can be identified by looking for the production rule that has the highest probability of generating the input string.", "title": "" }, { "docid": "e3978d849b1449c40299841bfd70ea69", "text": "New generations of network intrusion detection systems create the need for advanced pattern-matching engines. This paper presents a novel scheme for pattern-matching, called BFPM, that exploits a hardware-based programmable state-machine technology to achieve deterministic processing rates that are independent of input and pattern characteristics on the order of 10 Gb/s for FPGA and at least 20 Gb/s for ASIC implementations. BFPM supports dynamic updates and is one of the most storage-efficient schemes in the industry, supporting two thousand patterns extracted from Snort with a total of 32 K characters in only 128 KB of memory.", "title": "" }, { "docid": "33f3f6ca25b8abec09d961a4ed72770a", "text": "We develop a formal, type-theoretic account of the basic mechanisms of object-oriented programming: encapsulation, message passing, subtyping, and inheritance. By modeling object encapsulation in terms of existential types instead of the recursive records used in other recent studies, we obtain a substantial simplification both in the model of objects and in the underlying typed λ-calculus.", "title": "" }, { "docid": "0c1001c6195795885604a2aaa24ddb07", "text": "Recent advances in artificial intelligence (AI) have increased the opportunities for users to interact with the technology. Now, users can even collaborate with AI in creative activities such as art. To understand the user experience in this new user--AI collaboration, we designed a prototype, DuetDraw, an AI interface that allows users and the AI agent to draw pictures collaboratively. We conducted a user study employing both quantitative and qualitative methods. Thirty participants performed a series of drawing tasks with the think-aloud method, followed by post-hoc surveys and interviews. Our findings are as follows: (1) Users were significantly more content with DuetDraw when the tool gave detailed instructions. (2) While users always wanted to lead the task, they also wanted the AI to explain its intentions but only when the users wanted it to do so. 
(3) Although users rated the AI relatively low in predictability, controllability, and comprehensibility, they enjoyed their interactions with it during the task. Based on these findings, we discuss implications for user interfaces where users can collaborate with AI in creative works.", "title": "" }, { "docid": "c52c6c70ffda274af6a32ed5d1316f08", "text": "Markov decision processes (MDPs) are powerful tools for decision making in uncertain dynamic environments. However, the solutions of MDPs are of limited practical use due to their sensitivity to distributional model parameters, which are typically unknown and have to be estimated by the decision maker. To counter the detrimental effects of estimation errors, we consider robust MDPs that offer probabilistic guarantees in view of the unknown parameters. To this end, we assume that an observation history of the MDP is available. Based on this history, we derive a confidence region that contains the unknown parameters with a pre-specified probability 1− β. Afterwards, we determine a policy that attains the highest worst-case performance over this confidence region. By construction, this policy achieves or exceeds its worst-case performance with a confidence of at least 1 − β. Our method involves the solution of tractable conic programs of moderate size. Notation For a finite set X = {1, . . . , X}, M(X ) denotes the probability simplex in R . An X -valued random variable χ has distribution m ∈ M(X ), denoted by χ ∼ m, if P(χ = x) = mx for all x ∈ X . By default, all vectors are column vectors. We denote by ek the kth canonical basis vector, while e denotes the vector whose components are all ones. In both cases, the dimension will usually be clear from the context. For square matrices A and B, the relation A B indicates that the matrix A − B is positive semidefinite. We denote the space of symmetric n × n matrices by S. The declaration f : X c 7→ Y (f : X a 7→ Y ) implies that f is a continuous (affine) function from X to Y . For a matrix A, we denote its ith row by Ai· (a row vector) and its jth column by A·j .", "title": "" }, { "docid": "9fd56a2261ade748404fcd0c6302771a", "text": "Despite limited scientific knowledge, stretching of human skeletal muscle to improve flexibility is a widespread practice among athletes. This article reviews recent findings regarding passive properties of the hamstring muscle group during stretch based on a model that was developed which could synchronously and continuously measure passive hamstring resistance and electromyographic activity, while the velocity and angle of stretch was controlled. Resistance to stretch was defined as passive torque (Nm) offered by the hamstring muscle group during passive knee extension using an isokinetic dynamometer with a modified thigh pad. To simulate a clinical static stretch, the knee was passively extended to a pre-determined final position (0.0875 rad/s, dynamic phase) where it remained stationary for 90 s (static phase). Alternatively, the knee was extended to the point of discomfort (stretch tolerance). From the torque-angle curve of the dynamic phase of the static stretch, and in the stretch tolerance protocol, passive energy and stiffness were calculated. Torque decline in the static phase was considered to represent viscoelastic stress relaxation. Using the model, studies were conducted which demonstrated that a single static stretch resulted in a 30% viscoelastic stress relaxation. 
With repeated stretches muscle stiffness declined, but returned to baseline values within 1 h. Long-term stretching (3 weeks) increased joint range of motion as a result of a change in stretch tolerance rather than in the passive properties. Strength training resulted in increased muscle stiffness, which was unaffected by daily stretching. The effectiveness of different stretching techniques was attributed to a change in stretch tolerance rather than passive properties. Inflexible and older subjects have increased muscle stiffness, but a lower stretch tolerance compared to subjects with normal flexibility and younger subjects, respectively. Although far from all questions regarding the passive properties of humans skeletal muscle have been answered in these studies, the measurement technique permitted some initial important examinations of vicoelastic behavior of human skeletal muscle.", "title": "" }, { "docid": "22b1974fa802c9ea224e6b0b6f98cedb", "text": "This paper presents a human-inspired control approach to bipedal robotic walking: utilizing human data and output functions that appear to be intrinsic to human walking in order to formally design controllers that provably result in stable robotic walking. Beginning with human walking data, outputs-or functions of the kinematics-are determined that result in a low-dimensional representation of human locomotion. These same outputs can be considered on a robot, and human-inspired control is used to drive the outputs of the robot to the outputs of the human. The main results of this paper are that, in the case of both under and full actuation, the parameters of this controller can be determined through a human-inspired optimization problem that provides the best fit of the human data while simultaneously provably guaranteeing stable robotic walking for which the initial condition can be computed in closed form. These formal results are demonstrated in simulation by considering two bipedal robots-an underactuated 2-D bipedal robot, AMBER, and fully actuated 3-D bipedal robot, NAO-for which stable robotic walking is automatically obtained using only human data. Moreover, in both cases, these simulated walking gaits are realized experimentally to obtain human-inspired bipedal walking on the actual robots.", "title": "" }, { "docid": "78ec561e9a6eb34972ab238a02fdb40a", "text": "OBJECTIVE\nTo evaluate the safety and efficacy of mass circumcision performed using a plastic clamp.\n\n\nMETHODS\nA total of 2013 males, including infants, children, adolescents, and adults were circumcised during a 7-day period by using a plastic clamp technique. Complications were analyzed retrospectively in regard to 4 different age groups. Postcircumcision sexual function and satisfaction rates of the adult males were also surveyed.\n\n\nRESULTS\nThe mean duration of circumcision was 3.6±1.2 minutes. Twenty-six males who were lost to follow-up were excluded from the study. The total complication rate was found to be 2.47% among the remaining 1987 males, with a mean age of 7.8±2.5 years. The highest complication rate (2.93%) was encountered among the children<2 years age, which was because of the high rate of buried penis (0.98%) and excessive foreskin (0.98%) observed in this group. The complication rates of older children, adolescents, and adults were slightly lower than the children<2 years age, at 2.39%, 2.51%, and 2.40%, respectively. Excessive foreskin (0.7%) was the most common complication observed after mass circumcision. 
Bleeding (0.6%), infection (0.55%), wound dehiscence (0.25%), buried penis (0.25%), and urine retention (0.1%) were other encountered complications. The erectile function and sexual libido in adolescents and adults was not affected by circumcision and a 96% satisfaction rate was obtained.\n\n\nCONCLUSIONS\nMass circumcision performed by a plastic clamp technique was found to be a safe and time-saving method of circumcising a large number of males at any age.", "title": "" }, { "docid": "b23230f0386f185b7d5eb191034d58ec", "text": "Risk management in global information technology (IT) projects is becoming a critical area of concern for practitioners. Global IT projects usually span multiple locations involving various culturally diverse groups that use multiple standards and technologies. These multiplicities cause dynamic risks through interactions among internal (i.e., people, process, and technology) and external elements (i.e., business and natural environments) of global IT projects. This study proposes an agile risk-management framework for global IT project settings. By analyzing the dynamic interactions among multiplicities (e.g., multi-locations, multi-cultures, multi-groups, and multi-interests) embedded in the project elements, we identify the dynamic risks threatening the success of a global IT project. Adopting the principles of service-oriented architecture (SOA), we further propose a set of agile management strategies for mitigating the dynamic risks. The mitigation strategies are conceptually validated. The proposed framework will help practitioners understand the potential risks in their global IT projects and resolve their complex situations when certain types of dynamic risks arise.", "title": "" }, { "docid": "04b32423acd23c03188ca8bf208a24fd", "text": "We extend the notion of memristive systems to capacitive and inductive elements, namely, capacitors and inductors whose properties depend on the state and history of the system. All these elements typically show pinched hysteretic loops in the two constitutive variables that define them: current-voltage for the memristor, charge-voltage for the memcapacitor, and current-flux for the meminductor. We argue that these devices are common at the nanoscale, where the dynamical properties of electrons and ions are likely to depend on the history of the system, at least within certain time scales. These elements and their combination in circuits open up new functionalities in electronics and are likely to find applications in neuromorphic devices to simulate learning, adaptive, and spontaneous behavior.", "title": "" }, { "docid": "e53de7a588d61f513a77573b7b27f514", "text": "In the past, there have been dozens of studies on automatic authorship classification, and many of these studies concluded that the writing style is one of the best indicators for original authorship. From among the hundreds of features which were developed, syntactic features were best able to reflect an author's writing style. However, due to the high computational complexity for extracting and computing syntactic features, only simple variations of basic syntactic features such as function words, POS(Part of Speech) tags, and rewrite rules were considered. In this paper, we propose a new feature set of k-embedded-edge subtree patterns that holds more syntactic information than previous feature sets. We also propose a novel approach to directly mining them from a given set of syntactic trees. 
We show that this approach reduces the computational burden of using complex syntactic structures as the feature set. Comprehensive experiments on real-world datasets demonstrate that our approach is reliable and more accurate than previous studies.", "title": "" }, { "docid": "6bb4600498b34121c32b5d428ec3e49f", "text": "Parametric surfaces are an essential modeling tool in computer aided design and movie production. Even though their use is well established in industry, generating ray-traced images adds significant cost in time and memory consumption. Ray tracing such surfaces is usually accomplished by subdividing the surfaces on the fly, or by conversion to a polygonal representation. However, on-the-fly subdivision is computationally very expensive, whereas polygonal meshes require large amounts of memory. This is a particular problem for parametric surfaces with displacement, where very fine tessellation is required to faithfully represent the shape. Hence, memory restrictions are the major challenge in production rendering. In this article, we present a novel solution to this problem. We propose a compression scheme for a priori Bounding Volume Hierarchies (BVHs) on parametric patches, that reduces the data required for the hierarchy by a factor of up to 48. We further propose an approximate evaluation method that does not require leaf geometry, yielding an overall reduction of memory consumption by a factor of 60 over regular BVHs on indexed face sets and by a factor of 16 over established state-of-the-art compression schemes. Alternatively, our compression can simply be applied to a standard BVH while keeping the leaf geometry, resulting in a compression rate of up to 2:1 over current methods. Although decompression generates additional costs during traversal, we can manage very complex scenes even on the memory restrictive GPU at competitive render times.", "title": "" } ]
scidocsrr
1cc9a70cfaeccb02ce0268b6343f06d0
Formative Assessment: A Critical Review
[ { "docid": "d4d309d48404fb498c4a7c716804a80a", "text": "There has been a recent upsurge of interest in exploring how choices of methods and timing of instruction affect the rate and persistence of learning. The authors review three lines of experimentation—all conducted using educationally relevant materials and time intervals— that call into question important aspects of common instructional practices. First, research reveals that testing, although typically used merely as an assessment device, directly potentiates learning and does so more effectively than other modes of study. Second, recent analysis of the temporal dynamics of learning show that learning is most durable when study time is distributed over much greater periods of time than is customary in educational settings. Third, the inter-leaving of different types of practice problems (which is quite rare in math and science texts) markedly improves learning. The authors conclude by discussing the frequently observed dissociation between people's perceptions of which learning procedures are most effective and which procedures actually promote durable learning. T he experimental study of human learning and memory began more than 100 years ago and has developed into a major enterprise in behavioral science. Although this work has revealed some striking laboratory phenomena and elegant quantitative principles, it is disappointing that it has not thus far given teachers, learners, and curriculum designers much in the way of concrete and nonobvious advice that they can use to make learning more efficient and durable. In the past several years, however, there has been a new burst of effort by researchers to identify and test concrete principles that have this potential, yielding a slew of recommended strategies that have been listed in recent reports (e. Some of the most promising results involve the effects of testing on learning and different ways of scheduling study events. Those skeptical of behavioral research might assume that principles of learning would already be fairly obvious to anyone who has been a student, yet the results of recent experimentation challenge some of the most widely used study practices. We discuss three topics, focusing on the effects of testing, the role of temporal spacing, and the effects of interleaving different types of materials. Tests of student mastery of content material are customarily viewed as assessment devices, used to provide incentives for students (and in some cases teachers and school systems as well). However, memory research going back some years has revealed that a test that requires a learner to retrieve some piece of …", "title": "" }, { "docid": "a9c120f7d3d71fb8f1d35ded1bce17ea", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/aera.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.", "title": "" } ]
[ { "docid": "72e5b92632824d3633539727125763bc", "text": "NB-IoT system focues on indoor coverage, low cost, long battery life, and enabling a large number of connected devices. The NB-IoT system in the inband mode should share the antenna with the LTE system and support mult-PRB to cover many terminals. Also, the number of used antennas should be minimized for price competitiveness. In this paper, the structure and implementation of the NB-IoT base station system will be describe.", "title": "" }, { "docid": "e1b9795030dac51172c20a49113fac23", "text": "Bin packing problems are a class of optimization problems that have numerous applications in the industrial world, ranging from efficient cutting of material to packing various items in a larger container. We consider here only rectangular items cut off an infinite strip of material as well as off larger sheets of fixed dimensions. This problem has been around for many years and a great number of publications can be found on the subject. Nevertheless, it is often difficult to reconcile a theoretical paper and practical application of it. The present work aims to create simple but, at the same time, fast and efficient algorithms, which would allow one to write high-speed and capable software that can be used in a real-time application.", "title": "" }, { "docid": "c3c3add0c42f3b98962c4682a72b1865", "text": "This paper compares to investigate output characteristics according to a conventional and novel stator structure of axial flux permanent magnet (AFPM) motor for cooling fan drive system. Segmented core of stator has advantages such as easy winding and fast manufacture speed. However, a unit cost increase due to cutting off tooth tip to constant slot width. To solve the problem, this paper proposes a novel stator structure with three-step segmented core. The characteristics of AFPM were analyzed by time-stepping three dimensional finite element analysis (3D FEA) in two stator models, when stator cores are cutting off tooth tips from rectangular core and three step segmented core. Prototype motors were manufactured based on analysis results, and were tested as a motor.", "title": "" }, { "docid": "d87abfd50876da09bce301831f71605f", "text": "Recent advances in topic models have explored complicated structured distributions to represent topic correlation. For example, the pachinko allocation model (PAM) captures arbitrary, nested, and possibly sparse correlations between topics using a directed acyclic graph (DAG). While PAM provides more flexibility and greater expressive power than previous models like latent Dirichlet allocation (LDA), it is also more difficult to determine the appropriate topic structure for a specific dataset. In this paper, we propose a nonparametric Bayesian prior for PAM based on a variant of the hierarchical Dirichlet process (HDP). Although the HDP can capture topic correlations defined by nested data structure, it does not automatically discover such correlations from unstructured data. By assuming an HDP-based prior for PAM, we are able to learn both the number of topics and how the topics are correlated. We evaluate our model on synthetic and real-world text datasets, and show that nonparametric PAM achieves performance matching the best of PAM without manually tuning the number of topics.", "title": "" }, { "docid": "652b91b4ca941bcc53bb22a714c13b52", "text": "As social media has permeated large parts of the population it simultaneously has become a way to reach many people e.g. with political messages. 
One way to efficiently reach those people is the application of automated computer programs that aim to simulate human behaviour so called social bots. These bots are thought to be able to potentially influence users’ opinion about a topic. To gain insight in the use of these bots in the run-up to the German Bundestag elections, we collected a dataset from Twitter consisting of tweets regarding a German state election in May 2017. The strategies and influence of social bots were analysed based on relevant features and network visualization. 61 social bots were identified. Possibly due to the concentration on German language as well as the elections regionality, identified bots showed no signs of collective political strategies and low to none influence. Implications are discussed.", "title": "" }, { "docid": "523983cad60a81e0e6694c8d90ab9c3d", "text": "Cognition and comportment are subserved by interconnected neural networks that allow high-level computational architectures including parallel distributed processing. Cognitive problems are not resolved by a sequential and hierarchical progression toward predetermined goals but instead by a simultaneous and interactive consideration of multiple possibilities and constraints until a satisfactory fit is achieved. The resultant texture of mental activity is characterized by almost infinite richness and flexibility. According to this model, complex behavior is mapped at the level of multifocal neural systems rather than specific anatomical sites, giving rise to brain-behavior relationships that are both localized and distributed. Each network contains anatomically addressed channels for transferring information content and chemically addressed pathways for modulating behavioral tone. This approach provides a blueprint for reexploring the neurological foundations of attention, language, memory, and frontal lobe function.", "title": "" }, { "docid": "195b68a3d0d12354c256c2a1ddeb2b28", "text": "Reinforcement learning (RL) is a popular machine learning technique that has many successes in learning how to play classic style games. Applying RL to first person shooter (FPS) games is an interesting area of research as it has the potential to create diverse behaviors without the need to implicitly code them. This paper investigates the tabular Sarsa (λ) RL algorithm applied to a purpose built FPS game. The first part of the research investigates using RL to learn bot controllers for the tasks of navigation, item collection, and combat individually. Results showed that the RL algorithm was able to learn a satisfactory strategy for navigation control, but not to the quality of the industry standard pathfinding algorithm. The combat controller performed well against a rule-based bot, indicating promising preliminary results for using RL in FPS games. The second part of the research used pretrained RL controllers and then combined them by a number of different methods to create a more generalized bot artificial intelligence (AI). The experimental results indicated that RL can be used in a generalized way to control a combination of tasks in FPS bots such as navigation, item collection, and combat.", "title": "" }, { "docid": "c62742c65b105a83fa756af9b1a45a37", "text": "This article treats numerical methods for tracking an implicitly defined path. The numerical precision required to successfully track such a path is difficult to predict a priori, and indeed, it may change dramatically through the course of the path. 
In current practice, one must either choose a conservatively large numerical precision at the outset or re-run paths multiple times in successively higher precision until success is achieved. To avoid unnecessary computational cost, it would be preferable to adaptively adjust the precision as the tracking proceeds in response to the local conditioning of the path. We present an algorithm that can be set to either reactively adjust precision in response to step failure or proactively set the precision using error estimates. We then test the relative merits of reactive and proactive adaptation on several examples arising as homotopies for solving systems of polynomial equations.", "title": "" }, { "docid": "3ae880019b1954a2de5ab0d52519caab", "text": "We propose a simple yet effective structural patch decomposition approach for multi-exposure image fusion (MEF) that is robust to ghosting effect. We decompose an image patch into three conceptually independent components: signal strength, signal structure, and mean intensity. Upon fusing these three components separately, we reconstruct a desired patch and place it back into the fused image. This novel patch decomposition approach benefits MEF in many aspects. First, as opposed to most pixel-wise MEF methods, the proposed algorithm does not require post-processing steps to improve visual quality or to reduce spatial artifacts. Second, it handles RGB color channels jointly, and thus produces fused images with more vivid color appearance. Third and most importantly, the direction of the signal structure component in the patch vector space provides ideal information for ghost removal. It allows us to reliably and efficiently reject inconsistent object motions with respect to a chosen reference image without performing computationally expensive motion estimation. We compare the proposed algorithm with 12 MEF methods on 21 static scenes and 12 deghosting schemes on 19 dynamic scenes (with camera and object motion). Extensive experimental results demonstrate that the proposed algorithm not only outperforms previous MEF algorithms on static scenes but also consistently produces high quality fused images with little ghosting artifacts for dynamic scenes. Moreover, it maintains a lower computational cost compared with the state-of-the-art deghosting schemes. The MATLAB code of the proposed algorithm will be made available online. Preliminary results of Section III-A [1] were presented at the IEEE International Conference on Image Processing, Canada, 2015.", "title": "" }, { "docid": "24a6ad4d167290bec62a044580635aa0", "text": "We introduce HyperLex—a data set and evaluation resource that quantifies the extent of the semantic category membership, that is, type-of relation, also known as hyponymy–hypernymy or lexical entailment (LE) relation between 2,616 concept pairs. Cognitive psychology research has established that typicality and category/class membership are computed in human semantic memory as a gradual rather than binary relation. Nevertheless, most NLP research and existing large-scale inventories of concept category membership (WordNet, DBPedia, etc.) treat category membership and LE as binary. To address this, we asked hundreds of native English speakers to indicate typicality and strength of category membership between a diverse range of concept pairs on a crowdsourcing platform. Our results confirm that category membership and LE are indeed more gradual than binary. 
We then compare these human judgments with the predictions of automatic systems, which reveals a huge gap between human performance and state-of-the-art LE, distributional and representation learning models, and substantial differences between the models themselves. We discuss a pathway for improving semantic models to overcome this discrepancy, and indicate future application areas for improved graded LE systems.", "title": "" }, { "docid": "b4e9cfc0dbac4a5d7f76001e73e8973d", "text": "Style transfer aims to apply the style of an exemplar model to a target one, while retaining the target’s structure. The main challenge in this process is to algorithmically distinguish style from structure, a high-level, potentially ill-posed cognitive task. Inspired by cognitive science research we recast style transfer in terms of shape analogies. In IQ testing, shape analogy queries present the subject with three shapes: source, target and exemplar, and ask them to select an output such that the transformation, or analogy, from the exemplar to the output is similar to that from the source to the target. The logical process involved in identifying the source-to-target analogies implicitly detects the structural differences between the source and target and can be used effectively to facilitate style transfer. Since the exemplar has a similar structure to the source, applying the analogy to the exemplar will provide the output we seek. The main technical challenge we address is to compute the source to target analogies, consistent with human logic. We observe that the typical analogies we look for consist of a small set of simple transformations, which when applied to the exemplar generate a continuous, seamless output model. To assemble a shape analogy, we compute an optimal set of source-to-target transformations, such that the assembled analogy best fits these criteria. The assembled analogy is then applied to the exemplar shape to produce the desired output model. We use the proposed framework to seamlessly transfer a variety of style properties between 2D and 3D objects and demonstrate significant improvements over the state of the art in style transfer. We further show that our framework can be used to successfully complete partial scans with the help of a user provided structural template, coherently propagating scan style across the completed surfaces.", "title": "" }, { "docid": "8b66ffe2afae5f1f46b7803d80422248", "text": "This paper describes the torque production capabilities of electrical machines with planar windings and presents an automated procedure for coils conductors' arrangement. The procedure has been applied on an ironless axial flux slotless permanent magnet machines having stator windings realized using printed circuit board (PCB) coils. An optimization algorithm has been implemented to find a proper arrangement of PCB traces in order to find the best compromise between the maximization of average torque and the minimization of torque ripple. A time-efficient numerical model has been developed to reduce computational load and thus make the optimization based design feasible.", "title": "" }, { "docid": "59e3a7004bd2e1e75d0b1c6f6d2a67d0", "text": "Factor analysis, principal component analysis, mixtures of gaussian clusters, vector quantization, Kalman filter models, and hidden Markov models can all be unified as variations of unsupervised learning under a single basic generative model. 
This is achieved by collecting together disparate observations and derivations made by many previous authors and introducing a new way of linking discrete and continuous state models using a simple nonlinearity. Through the use of other nonlinearities, we show how independent component analysis is also a variation of the same basic generative model. We show that factor analysis and mixtures of gaussians can be implemented in autoencoder neural networks and learned using squared error plus the same regularization term. We introduce a new model for static data, known as sensible principal component analysis, as well as a novel concept of spatially adaptive observation noise. We also review some of the literature involving global and local mixtures of the basic models and provide pseudocode for inference and learning for all the basic models.", "title": "" }, { "docid": "cdca4a6cb35cbc674c06465c742dfe50", "text": "The generation of new lymphatic vessels through lymphangiogenesis and the remodelling of existing lymphatics are thought to be important steps in cancer metastasis. The past decade has been exciting in terms of research into the molecular and cellular biology of lymphatic vessels in cancer, and it has been shown that the molecular control of tumour lymphangiogenesis has similarities to that of tumour angiogenesis. Nevertheless, there are significant mechanistic differences between these biological processes. We are now developing a greater understanding of the specific roles of distinct lymphatic vessel subtypes in cancer, and this provides opportunities to improve diagnostic and therapeutic approaches that aim to restrict the progression of cancer.", "title": "" }, { "docid": "2e2e8219b7870529e8ca17025190aa1b", "text": "M multitasking competes with television advertising for consumers’ attention, but may also facilitate immediate and measurable response to some advertisements. This paper explores whether and how television advertising influences online shopping. We construct a massive data set spanning $3.4 billion in spending by 20 brands, measures of brands’ website traffic and transactions, and ad content measures for 1,224 commercials. We use a quasi-experimental design to estimate whether and how TV advertising influences changes in online shopping within two-minute pre/post windows of time. We use nonadvertising competitors’ online shopping in a difference-in-differences approach to measure the same effects in two-hour windows around the time of the ad. The findings indicate that television advertising does influence online shopping and that advertising content plays a key role. Action-focus content increases direct website traffic and sales. Information-focus and emotion-focus ad content actually reduce website traffic while simultaneously increasing purchases, with a positive net effect on sales for most brands. These results imply that brands seeking to attract multitaskers’ attention and dollars must select their advertising copy carefully.", "title": "" }, { "docid": "d6976361b44aab044c563e75056744d6", "text": "Five adrenoceptor subtypes are involved in the adrenergic regulation of white and brown fat cell function. The effects on cAMP production and cAMP-related cellular responses are mediated through the control of adenylyl cyclase activity by the stimulatory beta 1-, beta 2-, and beta 3-adrenergic receptors and the inhibitory alpha 2-adrenoceptors. 
Activation of alpha 1-adrenoceptors stimulates phosphoinositidase C activity leading to inositol 1,4,5-triphosphate and diacylglycerol formation with a consequent mobilization of intracellular Ca2+ stores and protein kinase C activation which trigger cell responsiveness. The balance between the various adrenoceptor subtypes is the point of regulation that determines the final effect of physiological amines on adipocytes in vitro and in vivo. Large species-specific differences exist in brown and white fat cell adrenoceptor distribution and in their relative importance in the control of the fat cell. Functional beta 3-adrenoceptors coexist with beta 1- and beta 2-adrenoceptors in a number of fat cells; they are weakly active in guinea pig, primate, and human fat cells. Physiological hormones and transmitters operate, in fact, through differential recruitment of all these multiple alpha- and beta-adrenoceptors on the basis of their relative affinity for the different subtypes. The affinity of the beta 3-adrenoceptor for catecholamines is less than that of the classical beta 1- and beta 2-adrenoceptors. Conversely, epinephrine and norepinephrine have a higher affinity for the alpha 2-adrenoceptors than for beta 1-, 2-, or 3-adrenoceptors. Antagonistic actions exist between alpha 2- and beta-adrenoceptor-mediated effects in white fat cells while positive cooperation has been revealed between alpha 1- and beta-adrenoceptors in brown fat cells. Homologous down-regulation of beta 1- and beta 2-adrenoceptors is observed after administration of physiological amines and beta-agonists. Conversely, beta 3- and alpha 2-adrenoceptors are much more resistant to agonist-induced desensitization and down-regulation. Heterologous regulation of beta-adrenoceptors was reported with glucocorticoids while sex-steroid hormones were shown to regulate alpha 2-adrenoceptor expression (androgens) and to alter adenylyl cyclase activity (estrogens).", "title": "" }, { "docid": "332a30e8d03d4f8cc03e7ab9b809ec9f", "text": "The study of electromyographic (EMG) signals has gained increased attention in the last decades since the proper analysis and processing of these signals can be instrumental for the diagnosis of neuromuscular diseases and the adaptive control of prosthetic devices. As a consequence, various pattern recognition approaches, consisting of different modules for feature extraction and classification of EMG signals, have been proposed. In this paper, we conduct a systematic empirical study on the use of Fractal Dimension (FD) estimation methods as feature extractors from EMG signals. The usage of FD as feature extraction mechanism is justified by the fact that EMG signals usually show traces of selfsimilarity and by the ability of FD to characterize and measure the complexity inherent to different types of muscle contraction. In total, eight different methods for calculating the FD of an EMG waveform are considered here, and their performance as feature extractors is comparatively assessed taking into account nine well-known classifiers of different types and complexities. Results of experiments conducted on a dataset involving seven distinct types of limb motions are reported whereby we could observe that the normalized version of the Katz's estimation method and the Hurst exponent significantly outperform the others according to a class separability measure and five well-known accuracy measures calculated over the induced classifiers. & 2014 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "27d05b4a9a766e17b6a49879e983f93c", "text": "Data mining of Social networks is a new but interesting field within Data Mining. We leverage the power of sentiment analysis to detect bullying instances in Twitter. We are interested in understanding bullying in social networks, especially in Twitter. To best of our understanding, there is no previous work on using sentiment analysis to detect bullying instances. Our training data set consists of Twitter messages containing commonly used terms of abuse, which are considered noisy labels. These data are publicly available and can be easily retrieved by directly accessing the Twitter streaming API. For the classification of Twitter messages, also known as tweets, we use the Naïve Bayes classifier. It‟s accuracy was close to 70% when trained with “commonly terms of abuse” data. The main contribution of this paper is the idea of using sentiment analysis to detect bullying instances.", "title": "" }, { "docid": "93bc26aa1a020f178692f40f4542b691", "text": "The \"Fast Fourier Transform\" has now been widely known for about a year. During that time it has had a major effect on several areas of computing, the most striking example being techniques of numerical convolution, which have been completely revolutionized. What exactly is the \"Fast Fourier Transform\"?", "title": "" }, { "docid": "745451b3ca65f3388332232b370ea504", "text": "This article develops a framework that applies to single securities to test whether asset pricing models can explain the size, value, and momentum anomalies. Stock level beta is allowed to vary with firm-level size and book-to-market as well as with macroeconomic variables. With constant beta, none of the models examined capture any of the market anomalies. When beta is allowed to vary, the size and value effects are often explained, but the explanatory power of past return remains robust. The past return effect is captured by model mispricing that varies with macroeconomic variables.", "title": "" } ]
scidocsrr
c9fad2fad59192a15a471c09f339c8c5
Potential Use of Bacillus coagulans in the Food Industry
[ { "docid": "57856c122a6f8a0db8423a1af9378b3e", "text": "Probiotics are defined as live microorganisms, which when administered in adequate amounts, confer a health benefit on the host. Health benefits have mainly been demonstrated for specific probiotic strains of the following genera: Lactobacillus, Bifidobacterium, Saccharomyces, Enterococcus, Streptococcus, Pediococcus, Leuconostoc, Bacillus, Escherichia coli. The human microbiota is getting a lot of attention today and research has already demonstrated that alteration of this microbiota may have far-reaching consequences. One of the possible routes for correcting dysbiosis is by consuming probiotics. The credibility of specific health claims of probiotics and their safety must be established through science-based clinical studies. This overview summarizes the most commonly used probiotic microorganisms and their demonstrated health claims. As probiotic properties have been shown to be strain specific, accurate identification of particular strains is also very important. On the other hand, it is also demonstrated that the use of various probiotics for immunocompromised patients or patients with a leaky gut has also yielded infections, sepsis, fungemia, bacteraemia. Although the vast majority of probiotics that are used today are generally regarded as safe and beneficial for healthy individuals, caution in selecting and monitoring of probiotics for patients is needed and complete consideration of risk-benefit ratio before prescribing is recommended.", "title": "" }, { "docid": "f9355d27f36d7ecfbd77385968ac95e2", "text": "The present study was conducted to investigate the effects of dietary supplementation of Bacillus coagulans on growth, feed utilization, digestive enzyme activity, innate immune response and disease resistance of freshwater prawn Macrobrachium rosenbergii. Three treatment groups (designated as T1, T2 and T3) and a control group (C), each in triplicates, were established. The prawn in the control were fed a basal diet and those in T1, T2 and T3 were fed basal diet containing B. coagulans at 105, 107 and 109 cfu g−1, respectively. After 60 days, growth performance and feed utilization were found significantly higher (P < 0.05) in prawn fed T3 diet. The specific activities of protease, amylase and lipase digestive enzymes were significantly higher (P < 0.05) for T3. Innate immunity in terms of lysozyme and respiratory burst activities were significantly elevated (P < 0.05) in all the probiotic treatment groups as compared to control. Challenge study with Vibrio harveyi revealed significant increase (P < 0.05) in disease resistance of freshwater prawn in T2 and T3 groups. The results collectively suggested that supplementation of B. coagulans as probiotic in the diet at approximately 109 cfu g−1 can improve the growth performance, feed utilization, digestive enzyme activity, innate immune response and disease resistance of freshwater prawn.", "title": "" } ]
[ { "docid": "f5b500c143fd584423ee8f0467071793", "text": "Drug-Drug Interactions (DDIs) are major causes of morbidity and treatment inefficacy. The prediction of DDIs for avoiding the adverse effects is an important issue. There are many drug-drug interaction pairs, it is impossible to do in vitro or in vivo experiments for all the possible pairs. The limitation of DDIs research is the high costs. Many drug interactions are due to alterations in drug metabolism by enzymes. The most common among these enzymes are cytochrome P450 enzymes (CYP450). Drugs can be substrate, inhibitor or inducer of CYP450 which will affect metabolite of other drugs. This paper proposes enzyme action crossing attribute creation for DDIs prediction. Machine learning techniques, k-Nearest Neighbor (k-NN), Neural Networks (NNs), and Support Vector Machine (SVM) were used to find DDIs for simvastatin based on enzyme action crossing. SVM preformed the best providing the predictions at the accuracy of 70.40 % and of 81.85 % with balance and unbalance class label datasets respectively. Enzyme action crossing method provided the new attribute that can be used to predict drug-drug interactions.", "title": "" }, { "docid": "8cc12987072c983bc45406a033a467aa", "text": "Vehicular drivers and shift workers in industry are at most risk of handling life critical tasks. The drivers traveling long distances or when they are tired, are at risk of a meeting an accident. The early hours of the morning and the middle of the afternoon are the peak times for fatigue driven accidents. The difficulty in determining the incidence of fatigue-related accidents is due, at least in part, to the difficulty in identifying fatigue as a causal or causative factor in accidents. In this paper we propose an alternative approach for fatigue detection in vehicular drivers using Respiration (RSP) signal to reduce the losses of the lives and vehicular accidents those occur due to cognitive fatigue of the driver. We are using basic K-means algorithm with proposed two modifications as classifier for detection of Respiration signal two state fatigue data recorded from the driver. The K-means classifiers [11] were trained and tested for wavelet feature of Respiration signal. The extracted features were treated as individual decision making parameters. From test results it could be found that some of the wavelet features could fetch 100 % classification accuracy.", "title": "" }, { "docid": "885281566381b396594a7508e5f255c8", "text": "The last decade has witnessed the emergence and aesthetic maturation of amateur multimedia on an unprecedented scale, from video podcasts to machinima, and Flash animations to user-created metaverses. Today, especially in academic circles, this pop culture phenomenon is little recognized and even less understood. This paper explores creativity in amateur multimedia using three theorizations of creativity—those of HCI, postructuralism, and technological determinism. These theorizations frame a semiotic analysis of numerous commonly used multimedia authoring platforms, which demonstrates a deep convergence of multimedia authoring tool strategies that collectively project a conceptualization and practice of digital creativity. This conceptualization of digital creativity in authoring tools is then compared with hundreds of amateur-created artifacts. 
These analyses reveal relationships among emerging amateur multimedia aesthetics, common software authoring tools, and the three theorizations of creativity discussed.", "title": "" }, { "docid": "a35f014424d952de95fbbd4ccab696b1", "text": "Stroke can cause high morbidity and mortality, and ischemic stroke (IS) and transient ischemic attack (TIA) patients have a high stroke recurrence rate. Antiplatelet agents are the standard therapy for these patients, but it is often difficult for clinicians to select the best therapy from among the multiple treatment options. We therefore performed a network meta-analysis to estimate the efficacy of antiplatelet agents for secondary prevention of recurrent stroke. We systematically searched 3 databases (PubMed, Embase, and Cochrane) for relevant studies published through August 2015. The primary end points of this meta-analysis were overall stroke, hemorrhagic stroke, and fatal stroke. A total of 30 trials were included in our network meta-analysis and abstracted data. Among the therapies evaluated in the included trials, the estimates for overall stroke and hemorrhagic stroke for cilostazol (Cilo) were significantly better than those for aspirin (odds ratio [OR] = .64, 95% credibility interval [CrI], .45-.91; OR = .23, 95% CrI, .08-.58). The estimate for fatal stroke was highest for Cilo plus aspirin combination therapy, followed by Cilo therapy. The results of our meta-analysis indicate that Cilo significantly improves overall stroke and hemorrhagic stroke in IS or TIA patients and reduces fatal stroke, but with low statistical significance. Our results also show that Cilo was significantly more efficient than other therapies in Asian patients; therefore, future trials should focus on Cilo treatment for secondary prevention of recurrent stroke in non-Asian patients.", "title": "" }, { "docid": "9f6adc749faf41f182eff752b7c80c63", "text": "s Physicists use differential equations to describe the physical dynamical world, and the solutions of these equations constitute our understanding of the world. During the hundreds of years, scientists developed several ways to solve these equations, i.e., the analytical solutions and the numerical solutions. However, for some complex equations, there may be no analytical solutions, and the numerical solutions may encounter the curse of the extreme computational cost if the accuracy is the first consideration. Solving equations is a high-level human intelligence work and a crucial step towards general artificial intelligence (AI), where deep reinforcement learning (DRL) may contribute. This work makes the first attempt of applying (DRL) to solve nonlinear differential equations both in discretized and continuous format with the governing equations (physical laws) embedded in the DRL network, including ordinary differential equations (ODEs) and partial differential equations (PDEs). The DRL network consists of an actor that outputs solution approximations policy and a critic that outputs the critic of the actor's output solution. Deterministic policy network is employed as the actor, and governing equations are embedded in the critic. The effectiveness of the DRL solver in Schrödinger equation, Navier-Stocks, Van der Pol equation, Burgers' equation and the equation of motion are discussed. * These authors contributed to the work equally and should be regarded as co-first authors. † Corresponding author. E-mail address: lihui@hit.edu.cn. 
2 Introduction Differential equations, including ordinary differential equations (ODEs) and partial differential equations (PDEs), formalize the description of the dynamical nature of the world around us. However, solving these equations is a challenge due to extreme computational cost, although limited cases have analytical or numerical solutions1-3. Solving equations is a high-level human intelligence work and a crucial step towards general artificial intelligence. Therefore, the obstacle of extreme computational cost in numerical solution may be bypassed by using general AI techniques, such as deep learning and reinforcement learning4, 5, which are rapidly developed during the last decades. Recent years such efforts have been made, and three main kinds of the existed efforts using deep learning can be categorized into: 1) directly map to the solution represented by the deep neural network in the continuous manner as in the analytical solution6, data used to train the network is randomly sampled within the entire solution domain in each training batch, including initial conditions and boundary conditions; 2) directly map to the solution in the discretized manner as in the numerical solution7-9; and 3) indirectly map to the internal results or parameters of the numerical solutions, and use the internal results to derive the numerical solutions6, 10. The essence is to take advantage of the nonlinear representing ability of deep neural networks. The solutions are either directly output by the network or numerically derived from the outputs of the neural network, and the solution task is regarded as a weak-label task while the governing equation is treated as the weak-label to calculate the loss function of the network. The term ‘weaklabel’ is emphasized to make difference with the label in supervised learning, i.e., the true solutions are not known in these tasks, however, when we get a candidate solution by the neural network output, we can tell how far the output solution is to the true solution by the imbalance of the physical law. Because of the weak-label property, the solution using deep learning may be unstable for highdimensional ODEs/PDEs tasks. Hence, we propose a deep reinforcement learning (DRL) paradigm for the ODEs/PDEs solution. DRL is naturally suitable for weak-label tasks by the trial-error learning mechanism5, 11. Take the game of Go for example12, the only prior information about the task is the playing rules that defines win or lose, the label (or score) of each step is whether win or lose after the whole episode of playing rather than an exact score. 3 While employing reinforcement learning, we are essentially treating the solving of differential equations as a control task. The state is the known current-step solution (either the given initial condition or the intermediate DRL solution) of the differential equations, the action is the solution of the task, and the goal is to find a proper action to balance the governing equation with an acceptable error. A deep deterministic policy network is used to output action policy given a state, and the governing equation is used as the critic, gradients of the policy network is calculated based on the critic.", "title": "" }, { "docid": "928f64f8ef9b3ea5e107ae9c49840b2c", "text": "Mass spectrometry-based proteomics has greatly benefitted from enormous advances in high resolution instrumentation in recent years. 
In particular, the combination of a linear ion trap with the Orbitrap analyzer has proven to be a popular instrument configuration. Complementing this hybrid trap-trap instrument, as well as the standalone Orbitrap analyzer termed Exactive, we here present coupling of a quadrupole mass filter to an Orbitrap analyzer. This \"Q Exactive\" instrument features high ion currents because of an S-lens, and fast high-energy collision-induced dissociation peptide fragmentation because of parallel filling and detection modes. The image current from the detector is processed by an \"enhanced Fourier Transformation\" algorithm, doubling mass spectrometric resolution. Together with almost instantaneous isolation and fragmentation, the instrument achieves overall cycle times of 1 s for a top 10 higher energy collisional dissociation method. More than 2500 proteins can be identified in standard 90-min gradients of tryptic digests of mammalian cell lysate- a significant improvement over previous Orbitrap mass spectrometers. Furthermore, the quadrupole Orbitrap analyzer combination enables multiplexed operation at the MS and tandem MS levels. This is demonstrated in a multiplexed single ion monitoring mode, in which the quadrupole rapidly switches among different narrow mass ranges that are analyzed in a single composite MS spectrum. Similarly, the quadrupole allows fragmentation of different precursor masses in rapid succession, followed by joint analysis of the higher energy collisional dissociation fragment ions in the Orbitrap analyzer. High performance in a robust benchtop format together with the ability to perform complex multiplexed scan modes make the Q Exactive an exciting new instrument for the proteomics and general analytical communities.", "title": "" }, { "docid": "e9b5dc63f981cc101521d8bbda1847d5", "text": "The unsupervised image-to-image translation aims at finding a mapping between the source (A) and target (B) image domains, where in many applications aligned image pairs are not available at training. This is an ill-posed learning problem since it requires inferring the joint probability distribution from marginals. Joint learning of coupled mappings FAB : A → B and FBA : B → A is commonly used by the state-of-the-art methods, like CycleGAN (Zhu et al., 2017), to learn this translation by introducing cycle consistency requirement to the learning problem, i.e. FAB(FBA(B)) ≈ B and FBA(FAB(A)) ≈ A. Cycle consistency enforces the preservation of the mutual information between input and translated images. However, it does not explicitly enforce FBA to be an inverse operation to FAB. We propose a new deep architecture that we call invertible autoencoder (InvAuto) to explicitly enforce this relation. This is done by forcing an encoder to be an inverted version of the decoder, where corresponding layers perform opposite mappings and share parameters. The mappings are constrained to be orthonormal. The resulting architecture leads to the reduction of the number of trainable parameters (up to 2 times). We present image translation results on benchmark data sets and demonstrate state-of-the art performance of our approach. Finally, we test the proposed domain adaptation method on the task of road video conversion. 
We demonstrate that the videos converted with InvAuto have high quality and show that the NVIDIA neural-network-based end-toend learning system for autonomous driving, known as PilotNet, trained on real road videos performs well when tested on the converted ones.", "title": "" }, { "docid": "b50498964a73a59f54b3a213f2626935", "text": "To reduce the significant redundancy in deep Convolutional Neural Networks (CNNs), most existing methods prune neurons by only considering the statistics of an individual layer or two consecutive layers (e.g., prune one layer to minimize the reconstruction error of the next layer), ignoring the effect of error propagation in deep networks. In contrast, we argue that for a pruned network to retain its predictive power, it is essential to prune neurons in the entire neuron network jointly based on a unified goal: minimizing the reconstruction error of important responses in the \"final response layer\" (FRL), which is the second-to-last layer before classification. Specifically, we apply feature ranking techniques to measure the importance of each neuron in the FRL, formulate network pruning as a binary integer optimization problem, and derive a closed-form solution to it for pruning neurons in earlier layers. Based on our theoretical analysis, we propose the Neuron Importance Score Propagation (NISP) algorithm to propagate the importance scores of final responses to every neuron in the network. The CNN is pruned by removing neurons with least importance, and it is then fine-tuned to recover its predictive power. NISP is evaluated on several datasets with multiple CNN models and demonstrated to achieve significant acceleration and compression with negligible accuracy loss.", "title": "" }, { "docid": "6c5cabfa5ee5b9d67ef25658a4b737af", "text": "Sentence compression is the task of producing a summary of a single sentence. The compressed sentence should be shorter, contain the important content from the original, and itself be grammatical. The three papers discussed here take different approaches to identifying important content, determining which sentences are grammatical, and jointly optimizing these objectives. One family of approaches we will discuss is those that are tree-based, which create a compressed sentence by making edits to the syntactic tree of the original sentence. A second type of approach is sentence-based, which generates strings directly. Orthogonal to either of these two approaches is whether sentences are treated in isolation or if the surrounding discourse affects compressions. We compare a tree-based, a sentence-based, and a discourse-based approach and conclude with ideas for future work in this area. Comments University of Pennsylvania Department of Computer and Information Science Technical Report No. MSCIS-10-20. This technical report is available at ScholarlyCommons: http://repository.upenn.edu/cis_reports/929 Methods for Sentence Compression", "title": "" }, { "docid": "d468946ac66cb4889acd11a48cdebc66", "text": "In this article, e-NOTIFY system is presented, which allows fast detection of traffic accidents, improving the assistance to injured passengers by reducing the response time of emergency services through the efficient communication of relevant information about the accident using a combination of V2V and V2I communications. 
The proposed system requires installing OBUs in the vehicles, in charge of detecting accidents and notifying them to an external CU, which will estimate the severity of the accident and inform the appropriate emergency services about the incident. This architecture replaces the current mechanisms for notification of accidents based on witnesses, who may provide incomplete or incorrect information after a long time. The development of a low-cost prototype shows that it is feasible to massively incorporate this system in existing vehicles.", "title": "" }, { "docid": "1ade3a53c754ec35758282c9c51ced3d", "text": "Radical hysterectomy represents the treatment of choice for FIGO stage IA2–IIA cervical cancer. It is associated with several serious complications such as urinary and anorectal dysfunction due to surgical trauma to the autonomous nervous system. In order to determine those surgical steps involving the risk of nerve injury during both classical and nerve-sparing radical hysterectomy, we investigated the relationships between pelvic fascial, vascular and nervous structures in a large series of embalmed and fresh female cadavers. We showed that the extent of potential denervation after classical radical hysterectomy is directly correlated with the radicality of the operation. The surgical steps that carry a high risk of nerve injury are the resection of the uterosacral and vesicouterine ligaments and of the paracervix. A nerve-sparing approach to radical hysterectomy for cervical cancer is feasible if specific resection limits, such as the deep uterine vein, are carefully identified and respected. However, a nerve-sparing surgical effort should be balanced with the oncological priorities of removal of disease and all its potential routes of local spread. L'hystérectomie radicale est le traitement de choix pour les cancers du col utérin de stade IA2–IIA de la Fédération Internationale de Gynécologie Obstétrique (FIGO). Cette intervention comporte plusieurs séquelles graves, telles que les dysfonctions urinaires ou ano-rectales, par traumatisme chirurgical des nerfs végétatifs pelviens. Pour mettre en évidence les temps chirurgicaux impliquant un risque de lésion nerveuse lors d'une hystérectomie radicale classique et avec préservation nerveuse, nous avons recherché les rapports entre le fascia pelvien, les structures vasculaires et nerveuses sur une large série de sujets anatomiques féminins embaumés et non embaumés. Nous avons montré que l'étendue de la dénervation potentielle après hystérectomie radicale classique était directement en rapport avec le caractère radical de l'intervention. Les temps chirurgicaux à haut risque pour des lésions nerveuses sont la résection des ligaments utéro-sacraux, des ligaments vésico-utérins et du paracervix. L'hystérectomie radicale avec préservation nerveuse est possible si des limites de résection spécifiques telle que la veine utérine profonde sont soigneusement identifiées et respectées. Cependant une chirurgie de préservation nerveuse doit être mise en balance avec les priorités carcinologiques d'exérèse du cancer et de toutes ses voies potentielles de dissémination locale.", "title": "" }, { "docid": "3d78d929b1e11b918119abba4ef8348d", "text": "Recent developments in mobile technologies have produced a new kind of device, a programmable mobile phone, the smartphone. Generally, smartphone users can program any application which is customized for needs. Furthermore, they can share these applications in online market. 
Therefore, smartphone and its application are now most popular keywords in mobile technology. However, to provide these customized services, smartphone needs more private information and this can cause security vulnerabilities. Therefore, in this work, we analyze security of smartphone based on its environments and describe countermeasures.", "title": "" }, { "docid": "dcf038090e8423d4919fd0260635c8c4", "text": "Automatic extraction of liver and tumor from CT volumes is a challenging task due to their heterogeneous and diffusive shapes. Recently, 2D and 3D deep convolutional neural networks have become popular in medical image segmentation tasks because of the utilization of large labeled datasets to learn hierarchical features. However, 3D networks have some drawbacks due to their high cost on computational resources. In this paper, we propose a 3D hybrid residual attention-aware segmentation method, named RA-UNet, to precisely extract the liver volume of interests (VOI) and segment tumors from the liver VOI. The proposed network has a basic architecture as a 3D U-Net which extracts contextual information combining lowlevel feature maps with high-level ones. Attention modules are stacked so that the attention-aware features change adaptively as the network goes “very deep” and this is made possible by residual learning. This is the first work that an attention residual mechanism is used to process medical volumetric images. We evaluated our framework on the public MICCAI 2017 Liver Tumor Segmentation dataset and the 3DIRCADb dataset. The results show that our architecture outperforms other state-ofthe-art methods. We also extend our RA-UNet to brain tumor segmentation on the BraTS2018 and BraTS2017 datasets, and the results indicate that RA-UNet achieves good performance on a brain tumor segmentation task as well.", "title": "" }, { "docid": "47992375dbd3c5d0960c114d5a4854b2", "text": "A new method is developed to represent probabilistic relations on multiple random events. Where previously knowledge bases containing probabilistic rules were used for this purpose, here a probabilitydistributionover the relations is directly represented by a Bayesian network. By using a powerful way of specifying conditional probability distributions in these networks, the resulting formalism is more expressive than the previous ones. Particularly, it provides for constraints on equalities of events, and it allows to define complex, nested combination functions.", "title": "" }, { "docid": "a05ee39269d1022560d1024805c8d055", "text": "Clean air is one of the most important needs for the well-being of human being health. In smart cities, timely and precise air pollution levels knowledge is vital for the successful setup of smart pollution systems. Recently, pollution and weather data in smart city have been bursting, and we have truly got into the era of big data. Ozone is considered as one of the most air pollutants with hurtful impact to human health. Existing methods used to predict the level of ozone uses shallow pollution prediction models and are still unsatisfactory in their accuracy to be used in many real-world applications. In order to increase the accuracy of prediction models we come up with the concept of using deep architecture models tested on big pollution and weather data. In this paper, a new deep learning-based ozone level prediction model is proposed, which considers the pollution and weather correlations integrally. 
This deep learning model is used to learn ozone level features, and it is trained using a grid search technique. A deep architecture model is utilized to represent ozone level features for prediction. Moreover, experiments demonstrate that the proposed method for ozone level prediction has superior performance. The outcome of this study can be helpful in predicting the ozone level pollution in Aarhus city as a model of smart cities for improving accuracy of ozone forecasting tools.", "title": "" }, { "docid": "cc10178729ca27c413223472f1aa08be", "text": "The automatic classification of ships from aerial images is a considerable challenge. Previous works have usually applied image processing and computer vision techniques to extract meaningful features from visible spectrum images in order to use them as the input for traditional supervised classifiers. We present a method for determining if an aerial image of visible spectrum contains a ship or not. The proposed architecture is based on Convolutional Neural Networks (CNN), and it combines neural codes extracted from a CNN with a k-Nearest Neighbor method so as to improve performance. The kNN results are compared to those obtained with the CNN Softmax output. Several CNN models have been configured and evaluated in order to seek the best hyperparameters, and the most suitable setting for this task was found by using transfer learning at different levels. A new dataset (named MASATI) composed of aerial imagery with more than 6000 samples has also been created to train and evaluate our architecture. The experimentation shows a success rate of over 99% for our approach, in contrast with the 79% obtained with traditional methods in classification of ship images, also outperforming other methods based on CNNs. A dataset of images (MWPU VHR-10) used in previous works was additionally used to evaluate the proposed approach. Our best setup achieves a success ratio of 86% with these data, significantly outperforming previous state-of-the-art ship classification methods.", "title": "" }, { "docid": "2cab3b3bed055eff92703d23b1edc69d", "text": "Due to their nonvolatile nature, excellent scalability, and high density, memristive nanodevices provide a promising solution for low-cost on-chip storage. Integrating memristor-based synaptic crossbars into digital neuromorphic processors (DNPs) may facilitate efficient realization of brain-inspired computing. This article investigates architectural design exploration of DNPs with memristive synapses by proposing two synapse readout schemes. The key design tradeoffs involving different analog-to-digital conversions and memory accessing styles are thoroughly investigated. A novel storage strategy optimized for feedforward neural networks is proposed in this work, which greatly reduces the energy and area cost of the memristor array and its peripherals.", "title": "" }, { "docid": "e9ed26434ac4e17548a08a40ace99a0c", "text": "An analytical study on air flow effects and resulting dynamics on the PACE Formula 1 race car is presented. The study incorporates Computational Fluid Dynamic analysis and simulation to maximize down force and minimize drag during high speed maneuvers of the race car. Using Star CCM+ software and mentoring provided by CD – Adapco, the simulation employs efficient meshing techniques and realistic loading conditions to understand down force on front and rear wing portions of the car as well as drag created by all exterior surfaces. 
Wing and external surface loading under high velocity runs of the car are illustrated. Optimization of wing orientations (direct angle of attack) and geometry modifications on outer surfaces of the car are performed to enhance down force and lessen drag for maximum stability and control during operation. The use of Surface Wrapper saved months of time in preparing the CAD model. The Transform tool and Contact Prevention tool in Star CCM+ proved to be an efficient means of correcting and modifying geometry instead of going back to the CAD model. The CFD simulations point out that the current front and rear wings do not generate the desired downforce and that the rear wing should be redesigned.", "title": "" }, { "docid": "64770c350dc1d260e24a43760d4e641b", "text": "A first step in the task of automatically generating questions for testing reading comprehension is to identify questionworthy sentences, i.e. sentences in a text passage that humans find it worthwhile to ask questions about. We propose a hierarchical neural sentence-level sequence tagging model for this task, which existing approaches to question generation have ignored. The approach is fully data-driven — with no sophisticated NLP pipelines or any hand-crafted rules/features — and compares favorably to a number of baselines when evaluated on the SQuAD data set. When incorporated into an existing neural question generation system, the resulting end-to-end system achieves stateof-the-art performance for paragraph-level question generation for reading comprehension.", "title": "" }, { "docid": "3e2e2aace1ddade88f3c8a6b7157af6b", "text": "Verb learning is clearly a function of observation of real-world contingencies; however, it is argued that such observational information is insufficient to account fully for vocabulary acquisition. This paper provides an experimental validation of Landau & Gleitman's (1985) syntactic bootstrapping procedure; namely, that children may use syntactic information to learn new verbs. Pairs of actions were presented simultaneously with a nonsense verb in one of two syntactic structures. The actions were subsequently separated, and the children (MA = 2;1) were asked to select which action was the referent for the verb. The children's choice of referent was found to be a function of the syntactic structure in which the verb had appeared.", "title": "" } ]
scidocsrr
72d91e43d8a595b27174cb45d77d63fb
Computational Drug Discovery with Dyadic Positive-Unlabeled Learning
[ { "docid": "4cd605375f5d27c754e4a21b81b39f1a", "text": "The dominant paradigm in drug discovery is the concept of designing maximally selective ligands to act on individual drug targets. However, many effective drugs act via modulation of multiple proteins rather than single targets. Advances in systems biology are revealing a phenotypic robustness and a network structure that strongly suggests that exquisitely selective compounds, compared with multitarget drugs, may exhibit lower than desired clinical efficacy. This new appreciation of the role of polypharmacology has significant implications for tackling the two major sources of attrition in drug development--efficacy and toxicity. Integrating network biology and polypharmacology holds the promise of expanding the current opportunity space for druggable targets. However, the rational design of polypharmacology faces considerable challenges in the need for new methods to validate target combinations and optimize multiple structure-activity relationships while maintaining drug-like properties. Advances in these areas are creating the foundation of the next paradigm in drug discovery: network pharmacology.", "title": "" }, { "docid": "67f13c2b686593398320d8273d53852f", "text": "Drug-drug interactions (DDIs) may cause serious side-effects that draw great attention from both academia and industry. Since some DDIs are mediated by unexpected drug-human protein interactions, it is reasonable to analyze the chemical-protein interactome (CPI) profiles of the drugs to predict their DDIs. Here we introduce the DDI-CPI server, which can make real-time DDI predictions based only on molecular structure. When the user submits a molecule, the server will dock user's molecule across 611 human proteins, generating a CPI profile that can be used as a feature vector for the pre-constructed prediction model. It can suggest potential DDIs between the user's molecule and our library of 2515 drug molecules. In cross-validation and independent validation, the server achieved an AUC greater than 0.85. Additionally, by investigating the CPI profiles of predicted DDI, users can explore the PK/PD proteins that might be involved in a particular DDI. A 3D visualization of the drug-protein interaction will be provided as well. The DDI-CPI is freely accessible at http://cpi.bio-x.cn/ddi/.", "title": "" } ]
[ { "docid": "bdffdfe92df254d0b13c1a1c985c0400", "text": "We propose a model to automatically describe changes introduced in the source code of a program using natural language. Our method receives as input a set of code commits, which contains both the modifications and message introduced by an user. These two modalities are used to train an encoder-decoder architecture. We evaluated our approach on twelve real world open source projects from four different programming languages. Quantitative and qualitative results showed that the proposed approach can generate feasible and semantically sound descriptions not only in standard in-project settings, but also in a cross-project setting.", "title": "" }, { "docid": "f202e380dfd1022e77a04212394be7e1", "text": "As usage of cloud computing increases, customers are mainly concerned about choosing cloud infrastructure with sufficient security. Concerns are greater in the multitenant environment on a public cloud. This paper addresses the security assessment of OpenStack open source cloud solution and virtual machine instances with different operating systems hosted in the cloud. The methodology and realized experiments target vulnerabilities from both inside and outside the cloud. We tested four different platforms and analyzed the security assessment. The main conclusions of the realized experiments show that multi-tenant environment raises new security challenges, there are more vulnerabilities from inside than outside and that Linux based Ubuntu, CentOS and Fedora are less vulnerable than Windows. We discuss details about these vulnerabilities and show how they can be solved by appropriate patches and other solutions. Keywords-Cloud Computing; Security Assessment; Virtualization.", "title": "" }, { "docid": "71a9394d995cefb8027bed3c56ec176c", "text": "A broadband microstrip-fed printed antenna is proposed for phased antenna array systems. The antenna consists of two parallel-modified dipoles of different lengths. The regular dipole shape is modified to a quasi-rhombus shape by adding two triangular patches. Using two dipoles helps maintain stable radiation patterns close to their resonance frequencies. A modified array configuration is proposed to further enhance the antenna radiation characteristics and usable bandwidth. Scanning capabilities are studied for a four-element array. The proposed antenna provides endfire radiation patterns with high gain, high front-to-back (F-to-B) ratio, low cross-polarization level, wide beamwidth, and wide scanning angles in a wide bandwidth of 103%", "title": "" }, { "docid": "4405611eafc1f6df4c4fa0b60a50f90d", "text": "Balancing robot which is proposed in this paper is a robot that relies on two wheels in the process of movement. Unlike the other mobile robot which is mechanically stable in its standing position, balancing robot need a balancing control which requires an angle value to be used as tilt feedback. The balancing control will control the robot, so it can maintain its standing position. Beside the balancing control itself, the movement of balancing robot needs its own control in order to control the movement while keeping the robot balanced. Both controllers will be combined since will both of them control the same wheel as the actuator. In this paper we proposed a cascaded PID control algorithm to combine the balancing and movement or distance controller. The movement of the robot is controlled using a distance controller that use rotary encoder sensor to measure its traveled distance. 
The experiment shows that the robot is able to climb up on 30 degree sloping board. By cascading the distance control to the balancing control, the robot is able to move forward, turning, and reach the desired position by calculating the body's tilt angle.", "title": "" }, { "docid": "5090070d6d928b83bd22d380f162b0a6", "text": "The Federal Aviation Administration (FAA) has been increasing the National Airspace System (NAS) capacity to accommodate the predicted rapid growth of air traffic. One method to increase the capacity is reducing air traffic controller workload so that they can handle more air traffic. It is crucial to measure the impact of the increasing future air traffic on controller workload. Our experimental data show a linear relationship between the number of aircraft in the en route center sector and controllers’ perceived workload. Based on the extensive range of aircraft count from 14 to 38 in the experiment, we can predict en route center controllers working as a team of Radar and Data controllers with the automation tools available in the our experiment could handle up to about 28 aircraft. This is 33% more than the 21 aircraft that en route center controllers typically handle in a busy sector.", "title": "" }, { "docid": "2bf0219394d87654d2824c805844fcaa", "text": "Wei-yu Kevin Chiang • Dilip Chhajed • James D. Hess Department of Information Systems, University of Maryland at Baltimore County, Baltimore, Maryland 21250 Department of Business Administration, University of Illinois at Urbana–Champaign, Champaign, Illinois 61820 Department of Business Administration, University of Illinois at Urbana–Champaign, Champaign, Illinois 61820 kevin@wchiang.net • chhajed@uiuc.edu • jhess@uiuc.edu", "title": "" }, { "docid": "7cb58462e6388a67376f5f0e95f8a8c4", "text": "In 2008 Bitcoin [Nak09] was introduced as the first decentralized digital currency. Its core underlying technology is the blockchain which is essentially a distributed append-only database. In particular, blockchain solves the key issue in decentralized digital currencies – the double spending problem – which asks: “if there is no central authority, what stops a malicious party from spending the same unit of currency multiple times”. 
Blockchain solves this problem by keeping track of each transaction that has been ever made while being robust against adversarial modifications.", "title": "" }, { "docid": "96aa1f19a00226af7b5bbe0bb080582e", "text": "CONTEXT\nComprehensive discharge planning by advanced practice nurses has demonstrated short-term reductions in readmissions of elderly patients, but the benefits of more intensive follow-up of hospitalized elders at risk for poor outcomes after discharge has not been studied.\n\n\nOBJECTIVE\nTo examine the effectiveness of an advanced practice nurse-centered discharge planning and home follow-up intervention for elders at risk for hospital readmissions.\n\n\nDESIGN\nRandomized clinical trial with follow-up at 2, 6, 12, and 24 weeks after index hospital discharge.\n\n\nSETTING\nTwo urban, academically affiliated hospitals in Philadelphia, Pa.\n\n\nPARTICIPANTS\nEligible patients were 65 years or older, hospitalized between August 1992 and March 1996, and had 1 of several medical and surgical reasons for admission.\n\n\nINTERVENTION\nIntervention group patients received a comprehensive discharge planning and home follow-up protocol designed specifically for elders at risk for poor outcomes after discharge and implemented by advanced practice nurses.\n\n\nMAIN OUTCOME MEASURES\nReadmissions, time to first readmission, acute care visits after discharge, costs, functional status, depression, and patient satisfaction.\n\n\nRESULTS\nA total of 363 patients (186 in the control group and 177 in the intervention group) were enrolled in the study; 70% of intervention and 74% of control subjects completed the trial. Mean age of sample was 75 years; 50% were men and 45% were black. By week 24 after the index hospital discharge, control group patients were more likely than intervention group patients to be readmitted at least once (37.1 % vs 20.3 %; P<.001). Fewer intervention group patients had multiple readmissions (6.2% vs 14.5%; P = .01) and the intervention group had fewer hospital days per patient (1.53 vs 4.09 days; P<.001). Time to first readmission was increased in the intervention group (P<.001). At 24 weeks after discharge, total Medicare reimbursements for health services were about $1.2 million in the control group vs about $0.6 million in the intervention group (P<.001). There were no significant group differences in post-discharge acute care visits, functional status, depression, or patient satisfaction.\n\n\nCONCLUSIONS\nAn advanced practice nurse-centered discharge planning and home care intervention for at-risk hospitalized elders reduced readmissions, lengthened the time between discharge and readmission, and decreased the costs of providing health care. Thus, the intervention demonstrated great potential in promoting positive outcomes for hospitalized elders at high risk for rehospitalization while reducing costs.", "title": "" }, { "docid": "5601a0da8cfaf42d30b139c535ae37db", "text": "This article presents some key achievements and recommendations from the IoT6 European research project on IPv6 exploitation for the Internet of Things (IoT). It highlights the potential of IPv6 to support the integration of a global IoT deployment including legacy systems by overcoming horizontal fragmentation as well as more direct vertical integration between communicating devices and the cloud.", "title": "" }, { "docid": "cc57e42da57af33edc53ba64f33e0178", "text": "This paper focuses on the design and development of a low-cost QFN package that is based on wirebond interconnects. 
One of the design goals is to extend the frequency at which the package can be used to 40-50 GHz (above the K band), in the millimeter-wave range. Owing to the use of mass production assembly protocols and materials, such as commercially available QFN in a mold compound, the design that is outlined in this paper significantly reduces the cost of assembly of millimeter wave modules. To operate the package at 50 GHz or a higher frequency, several key design features are proposed. They include the use of through vias (backside vias) and ground bondwires to provide ground return currents. This paper also provides rigorous validation steps that we took to obtain the key high frequency characteristics. Since a molding compound is used in conventional QFN packages, the material and its effectiveness in determining the signal propagation have to be incorporated in the overall design. However, the mold compound creates some extra challenges in the de-embedding task. For example, the mold compound must be removed to expose the probing pads so the effect of the microstrip on the GaAs chip can be obtained and de-embedded. Careful simulation and experimental validation reveal that the proposed QFN design achieves a return loss of -10 dB and an insertion loss of -1.5 dB up to 50 GHz.", "title": "" }, { "docid": "26c58183e71f916f37d67f1cf848f021", "text": "With the increasing popularity of herbomineral preparations in healthcare, a new proprietary herbomineral formulation was formulated with ashwagandha root extract and three minerals viz. zinc, magnesium, and selenium. The aim of the study was to evaluate the immunomodulatory potential of Biofield Energy Healing (The Trivedi Effect ® ) on the herbomineral formulation using murine splenocyte cells. The test formulation was divided into two parts. One was the control without the Biofield Energy Treatment. The other part was labelled the Biofield Energy Treated sample, which received the Biofield Energy Healing Treatment remotely by twenty renowned Biofield Energy Healers. Through MTT assay, all the test formulation concentrations from 0.00001053 to 10.53 μg/mL were found to be safe with cell viability ranging from 102.61% to 194.57% using splenocyte cells. The Biofield Treated test formulation showed a significant (p≤0.01) inhibition of TNF-α expression by 15.87%, 20.64%, 18.65%, and 20.34% at 0.00001053, 0.0001053, 0.01053, and 0.1053, μg/mL, respectively as compared to the vehicle control (VC) group. The level of TNF-α was reduced by 8.73%, 19.54%, and 14.19% at 0.001053, 0.01053, and 0.1053 μg/mL, respectively in the Biofield Treated test formulation compared to the untreated test formulation. The expression of IL-1β reduced by 22.08%, 23.69%, 23.00%, 16.33%, 25.76%, 16.10%, and 23.69% at 0.00001053, 0.0001053, 0.001053, 0.01053, 0.1053, 1.053 and 10.53 μg/mL, respectively compared to the VC. Additionally, the expression of MIP-1α significantly (p≤0.001) reduced by 13.35%, 22.96%, 25.11%, 22.71%, and 21.83% at 0.00001053, 0.0001053, 0.01053, 1.053, and 10.53 μg/mL, respectively in the Biofield Treated test formulation compared to the VC. The Biofield Treated test formulation significantly down-regulated the MIP-1α expression by 10.75%, 9.53%, 9.57%, and 10.87% at 0.00001053, 0.01053, 0.1053 and 1.053 μg/mL, respectively compared to the untreated test formulation. 
The results showed the IFN-γ expression was also significantly (p≤0.001) reduced by 39.16%, 40.34%, 27.57%, 26.06%, 42.53%, and 48.91% at 0.0001053, 0.001053, 0.01053, 0.1053, 1.053, and 10.53 μg/mL, respectively in the Biofield Treated test formulation compared to the VC. The Biofield Treated test formulation showed better suppression of IFN-γ expression by 15.46%, 13.78%, 17.14%, and 13.11% at concentrations 0.001053, 0.01053, 0.1053, and 10.53 μg/mL, respectively compared to the untreated test formulation. Overall, the results demonstrated that The Trivedi Effect ® Biofield Energy Healing (TEBEH) has the capacity to potentiate the immunomodulatory and anti-inflammatory activity of the test formulation. Biofield Energy may also be useful in organ transplants, anti-aging, and stress management by improving overall health and quality of life.", "title": "" }, { "docid": "172561db4f6d4bfe2b15c8d26adc3d91", "text": "\"Big Data\" in map-reduce (M-R) clusters is often fundamentally temporal in nature, as are many analytics tasks over such data. For instance, display advertising uses Behavioral Targeting (BT) to select ads for users based on prior searches, page views, etc. Previous work on BT has focused on techniques that scale well for offline data using M-R. However, this approach has limitations for BT-style applications that deal with temporal data: (1) many queries are temporal and not easily expressible in M-R, and moreover, the set-oriented nature of M-R front-ends such as SCOPE is not suitable for temporal processing, (2) as commercial systems mature, they may need to also directly analyze and react to real-time data feeds since a high turnaround time can result in missed opportunities, but it is difficult for current solutions to naturally also operate over real-time streams. Our contributions are twofold. First, we propose a novel framework called TiMR (pronounced timer), that combines a time-oriented data processing system with a M-R framework. Users write and submit analysis algorithms as temporal queries - these queries are succinct, scale-out-agnostic, and easy to write. They scale well on large-scale offline data using TiMR, and can work unmodified over real-time streams. We also propose new cost-based query fragmentation and temporal partitioning schemes for improving efficiency with TiMR. Second, we show the feasibility of this approach for BT, with new temporal algorithms that exploit new targeting opportunities. Experiments using real data from a commercial ad platform show that TiMR is very efficient and incurs orders-of-magnitude lower development effort. Our BT solution is easy and succinct, and performs up to several times better than current schemes in terms of memory, learning time, and click-through-rate/coverage.", "title": "" }, { "docid": "3d81cdfc3d9266d08dc6c28099397668", "text": "We address the problem of predicting new drug-target interactions from three inputs: known interactions, similarities over drugs and those over targets. This setting has been considered by many methods, which however have a common problem of allowing to have only one similarity matrix over drugs and that over targets.
The key idea of our approach is to use more than one similarity matrices over drugs as well as those over targets, where weights over the multiple similarity matrices are estimated from data to automatically select similarities, which are effective for improving the performance of predicting drug-target interactions. We propose a factor model, named Multiple Similarities Collaborative Matrix Factorization(MSCMF), which projects drugs and targets into a common low-rank feature space, which is further consistent with weighted similarity matrices over drugs and those over targets. These two low-rank matrices and weights over similarity matrices are estimated by an alternating least squares algorithm. Our approach allows to predict drug-target interactions by the two low-rank matrices collaboratively and to detect similarities which are important for predicting drug-target interactions. This approach is general and applicable to any binary relations with similarities over elements, being found in many applications, such as recommender systems. In fact, MSCMF is an extension of weighted low-rank approximation for one-class collaborative filtering. We extensively evaluated the performance of MSCMF by using both synthetic and real datasets. Experimental results showed nice properties of MSCMF on selecting similarities useful in improving the predictive performance and the performance advantage of MSCMF over six state-of-the-art methods for predicting drug-target interactions.", "title": "" }, { "docid": "8dcfd08d5684ec9fd7d5a438a8086f23", "text": "We consider the problem of predicting semantic segmentation of future frames in a video. Given several observed frames in a video, our goal is to predict the semantic segmentation map of future frames that are not yet observed. A reliable solution to this problem is useful in many applications that require real-time decision making, such as autonomous driving. We propose a novel model that uses convolutional LSTM (ConvLSTM) to encode the spatiotemporal information of observed frames for future prediction. We also extend our model to use bidirectional ConvLSTM to capture temporal information in both directions. Our proposed approach outperforms other state-of-the-art methods on the benchmark dataset.", "title": "" }, { "docid": "63e3be30835fd8f544adbff7f23e13ab", "text": "Deaths due to plastic bag suffocation or plastic bag asphyxia are not reported in Malaysia. In the West many suicides by plastic bag asphyxia, particularly in the elderly and those who are chronically and terminally ill, have been reported. Accidental deaths too are not uncommon in the West, both among small children who play with shopping bags and adolescents who are solvent abusers. Another well-known but not so common form of accidental death from plastic bag asphyxia is sexual asphyxia, which is mostly seen among adult males. Homicide by plastic bag asphyxia too is reported in the West and the victims are invariably infants or adults who are frail or terminally ill and who cannot struggle. Two deaths due to plastic bag asphyxia are presented. Both the autopsies were performed at the University Hospital Mortuary, Kuala Lumpur. Both victims were 50-year old married Chinese males. One death was diagnosed as suicide and the other as sexual asphyxia. Sexual asphyxia is generally believed to be a problem associated exclusively with the West. 
Specific autopsy findings are often absent in deaths due to plastic bag asphyxia and therefore such deaths could be missed when some interested parties have altered the scene and most importantly have removed the plastic bag. A visit to the scene of death is invariably useful.", "title": "" }, { "docid": "b226b612db064f720e32e5a7fd9d9dec", "text": "Clustering is a fundamental technique widely used for exploring the inherent data structure in pattern recognition and machine learning. Most of the existing methods focus on modeling the similarity/dissimilarity relationship among instances, such as k-means and spectral clustering, and ignore to extract more effective representation for clustering. In this paper, we propose a deep embedding network for representation learning, which is more beneficial for clustering by considering two constraints on learned representations. We first utilize a deep auto encoder to learn the reduced representations from the raw data. To make the learned representations suitable for clustering, we first impose a locality-persevering constraint on the learned representations, which aims to embed original data into its underlying manifold space. Then, different from spectral clustering which extracts representations from the block diagonal similarity matrix, we apply a group sparsity constraint for the learned representations, and aim to learn block diagonal representations in which the nonzero groups correspond to its cluster. After obtaining the learned representations, we use k-means to cluster them. To evaluate the proposed deep embedding network, we compare its performance with k-means and spectral clustering on three commonly-used datasets. The experiments demonstrate that the proposed method achieves promising performance.", "title": "" }, { "docid": "ff20e5cd554cd628eba07776fa9a5853", "text": "We describe our early experience in applying our console log mining techniques [19, 20] to logs from production Google systems with thousands of nodes. This data set is five orders of magnitude in size and contains almost 20 times as many messages types as the Hadoop data set we used in [19]. It also has many properties that are unique to large scale production deployments (e.g., the system stays on for several months and multiple versions of the software can run concurrently). Our early experience shows that our techniques, including source code based log parsing, state and sequence based feature creation and problem detection, work well on this production data set. We also discuss our experience in using our log parser to assist the log sanitization.", "title": "" }, { "docid": "ae85cf24c079ff446b76f0ba81146369", "text": "Subgraph Isomorphism is a fundamental problem in graph data processing. Most existing subgraph isomorphism algorithms are based on a backtracking framework which computes the solutions by incrementally matching all query vertices to candidate data vertices. However, we observe that extensive duplicate computation exists in these algorithms, and such duplicate computation can be avoided by exploiting relationships between data vertices. Motivated by this, we propose a novel approach, BoostIso, to reduce duplicate computation. 
Our extensive experiments with real datasets show that, after integrating our approach, most existing subgraph isomorphism algorithms can be speeded up significantly, especially for some graphs with intensive vertex relationships, where the improvement can be up to several orders of magnitude.", "title": "" }, { "docid": "ec19face14810817bfd824d70a11c746", "text": "The article deals with various ways of memristor modeling and simulation in the MATLAB&Simulink environment. Recently used and published mathematical memristor model serves as a base, regarding all known features of its behavior. Three different approaches in the MATLAB&Simulink system are used for the differential and other equations formulation. The first one employs the standard system core offer for the Ordinary Differential Equations solutions (ODE) in the form of an m-file. The second approach is the model construction in Simulink environment. The third approach employs so-called physical modeling using the built-in Simscape system. The output data are the basic memristor characteristics and appropriate time courses. The features of all models are discussed, especially regarding the computer simulation. Possible problems that may occur during modeling are pointed. Key-Words: memristor, modeling and simulation, MATLAB, Simulink, Simscape, physical model", "title": "" }, { "docid": "51ef96b352d36f5ab933c10184bb385b", "text": "We present a language agnostic, unsupervised method for inducing morphological transformations between words. The method relies on certain regularities manifest in highdimensional vector spaces. We show that this method is capable of discovering a wide range of morphological rules, which in turn are used to build morphological analyzers. We evaluate this method across six different languages and nine datasets, and show significant improvements across all languages.", "title": "" } ]
scidocsrr
ed6273210338b7e1259db03a0b7f8533
Fraud detection in international calls using fuzzy logic
[ { "docid": "c5d5dfaa7af58dcd7c0ddc412e08bec2", "text": "Telecommunications fraud is a problem that affects operators all around the world. Operators know that fraud cannot be completely eradicated. The solution to deal with this problem is to minimize the damages and cut down losses by detecting fraud situations as early as possible. Computer systems were developed or acquired, and experts were trained to detect these situations. Still, the operators have the need to evolve this process, in order to detect fraud earlier and also get a better understanding of the fraud attacks they suffer. In this paper the fraud problem is analyzed and a new approach to the problem is designed. This new approach, based on the profiling and KDD (Knowledge Discovery in Data) techniques, supported in a MAS (Multiagent System), does not replace the existing fraud detection systems; it uses them and their results to provide operators new fraud detection methods and new knowledge.", "title": "" }, { "docid": "2b97e03fa089cdee0bf504dd85e5e4bb", "text": "One of the most severe threats to revenue and quality of service in telecom providers is fraud. The advent of new technologies has provided fraudsters new techniques to commit fraud. SIM box fraud is one of such fraud that has emerged with the use of VOIP technologies. In this work, a total of nine features found to be useful in identifying SIM box fraud subscriber are derived from the attributes of the Customer Database Record (CDR). Artificial Neural Networks (ANN) has shown promising solutions in classification problems due to their generalization capabilities. Therefore, supervised learning method was applied using Multi layer perceptron (MLP) as a classifier. Dataset obtained from real mobile communication company was used for the experiments. ANN had shown classification accuracy of 98.71 %.", "title": "" }, { "docid": "1a13a0d13e0925e327c9b151b3e5b32d", "text": "The topic of this thesis is fraud detection in mobile communications networks by means of user profiling and classification techniques. The goal is to first identify relevant user groups based on call data and then to assign a user to a relevant group. Fraud may be defined as a dishonest or illegal use of services, with the intention to avoid service charges. Fraud detection is an important application, since network operators lose a relevant portion of their revenue to fraud. Whereas the intentions of the mobile phone users cannot be observed, it is assumed that the intentions are reflected in the call data. The call data is subsequently used in describing behavioral patterns of users. Neural networks and probabilistic models are employed in learning these usage patterns from call data. These models are used either to detect abrupt changes in established usage patterns or to recognize typical usage patterns of fraud. The methods are shown to be effective in detecting fraudulent behavior by empirically testing the methods with data from real mobile communications networks. © All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior permission of the author.", "title": "" } ]
[ { "docid": "b91b42da0e7ffe838bf9d7ab0bd54bea", "text": "When creating line drawings, artists frequently depict intended curves using multiple, tightly clustered, or overdrawn, strokes. Given such sketches, human observers can readily envision these intended, aggregate, curves, and mentally assemble the artist's envisioned 2D imagery. Algorithmic stroke consolidation---replacement of overdrawn stroke clusters by corresponding aggregate curves---can benefit a range of sketch processing and sketch-based modeling applications which are designed to operate on consolidated, intended curves. We propose StrokeAggregator, a novel stroke consolidation method that significantly improves on the state of the art, and produces aggregate curve drawings validated to be consistent with viewer expectations. Our framework clusters strokes into groups that jointly define intended aggregate curves by leveraging principles derived from human perception research and observation of artistic practices. We employ these principles within a coarse-to-fine clustering method that starts with an initial clustering based on pairwise stroke compatibility analysis, and then refines it by analyzing interactions both within and in-between clusters of strokes. We facilitate this analysis by computing a common 1D parameterization for groups of strokes via common aggregate curve fitting. We demonstrate our method on a large range of line drawings, and validate its ability to generate consolidated drawings that are consistent with viewer perception via qualitative user evaluation, and comparisons to manually consolidated drawings and algorithmic alternatives.", "title": "" }, { "docid": "534a3885c710bc9a65fa2d66e2937dd4", "text": "This paper examines the concept of culture, and the potential impact of intercultural dynamics of software development. Many of the difficulties confronting today's global software development (GSD) environment have little to do with technical issues; rather, they are \"human\" issues that occur when extensive collaboration and communication among developers with distinct cultural backgrounds are required. Although project managers are reporting that intercultural factors are impacting software practices and artifacts and deserve more detailed study, little analytical research has been conducted in this area other than anecdotal testimonials by software professionals. This paper presents an introductory analysis of the effect that intercultural factors have on global software development. The paper first establishes a framework for intercultural variations by introducing several models commonly used to define culture. Cross-cultural issues that often arise in software development are then identified. The paper continues by explaining the importance of taking intercultural issues seriously and proposes some ideas for future research in the area", "title": "" }, { "docid": "7a0cec9d0e1f865a639db4f65626b5c2", "text": "Over the past century, academic performance has become the gatekeeper to institutions of higher education, shaping career paths and individual life trajectories. Accordingly, much psychological research has focused on identifying predictors of academic performance, with intelligence and effort emerging as core determinants. In this article, we propose expanding on the traditional set of predictors by adding a third agency: intellectual curiosity. 
A series of path models based on a meta-analytically derived correlation matrix showed that (a) intelligence is the single most powerful predictor of academic performance; (b) the effects of intelligence on academic performance are not mediated by personality traits; (c) intelligence, Conscientiousness (as marker of effort), and Typical Intellectual Engagement (as marker of intellectual curiosity) are direct, correlated predictors of academic performance; and (d) the additive predictive effect of the personality traits of intellectual curiosity and effort rival that the influence of intelligence. Our results highlight that a \"hungry mind\" is a core determinant of individual differences in academic achievement.", "title": "" }, { "docid": "66c132250df2d08fa707f86035bfd073", "text": "Morphing, fusion and stitching of digital photographs from multiple sources is a common problem in the recent era. While images may depict visual normalcy despite a splicing operation, there are domains in which a consistency or an anomaly check can be performed to detect a covert digital stitching process. Most digital and low-end mobile cameras have certain intrinsic sensor aberrations such as purple fringing (PF), seen in image regions where there are contrast variations and shadowing. This paper proposes an approach based on Fuzzy clustering to first identify regions which contain Purple Fringing and is then used as a forensic tool to detect splicing operations. The accuracy of the Fuzzy clustering approach is comparable with the state-of-the-art PF detection methods and has been shown to penetrate standard interpolation and stitching operations performed using ADOBE PHOTOSHOP.", "title": "" }, { "docid": "07ec8379b9a51faed0b050d7b1d85922", "text": "In this paper we propose a Deep Neural Network (D NN) based Speech Enhancement (SE) system that is designed to maximize an approximation of the Short-Time Objective Intelligibility (STOI) measure. We formalize an approximate-STOI cost function and derive analytical expressions for the gradients required for DNN training and show that these gradients have desirable properties when used together with gradient based optimization techniques. We show through simulation experiments that the proposed SE system achieves large improvements in estimated speech intelligibility, when tested on matched and unmatched natural noise types, at multiple signal-to-noise ratios. Furthermore, we show that the SE system, when trained using an approximate-STOI cost function performs on par with a system trained with a mean square error cost applied to short-time temporal envelopes. Finally, we show that the proposed SE system performs on par with a traditional DNN based Short- Time Spectral Amplitude (STSA) SE system in terms of estimated speech intelligibility. These results are important because they suggest that traditional DNN based STSA SE systems might be optimal in terms of estimated speech intelligibility.", "title": "" }, { "docid": "8d4007b4d769c2d90ae07b5fdaee8688", "text": "In this project, we implement the semi-supervised Recursive Autoencoders (RAE), and achieve the result comparable with result in [1] on the Movie Review Polarity dataset1. We achieve 76.08% accuracy, which is slightly lower than [1] ’s result 76.8%, with less vector length. 
Experiments show that the model can learn sentiment and build reasonable structure from sentence.We find longer word vector and adjustment of words’ meaning vector is beneficial, while normalization of transfer function brings some improvement. We also find normalization of the input word vector may be beneficial for training.", "title": "" }, { "docid": "2e32d668383eaaed096aa2e34a10d8e9", "text": "Splicing and copy-move are two well known methods of passive image forgery. In this paper, splicing and copy-move forgery detection are performed simultaneously on the same database CASIA v1.0 and CASIA v2.0. Initially, a suspicious image is taken and features are extracted through BDCT and enhanced threshold method. The proposed technique decides whether the given image is manipulated or not. If it is manipulated then support vector machine (SVM) classify that the given image is gone through splicing forgery or copy-move forgery. For copy-move detection, ZM-polar (Zernike Moment) is used to locate the duplicated regions in image. Experimental results depict the performance of the proposed method.", "title": "" }, { "docid": "e0836eb305f54283ced106528e5102a0", "text": "Face attributes are interesting due to their detailed description of human faces. Unlike prior researches working on attribute prediction, we address an inverse and more challenging problem called face attribute manipulation which aims at modifying a face image according to a given attribute value. Instead of manipulating the whole image, we propose to learn the corresponding residual image defined as the difference between images before and after the manipulation. In this way, the manipulation can be operated efficiently with modest pixel modification. The framework of our approach is based on the Generative Adversarial Network. It consists of two image transformation networks and a discriminative network. The transformation networks are responsible for the attribute manipulation and its dual operation and the discriminative network is used to distinguish the generated images from real images. We also apply dual learning to allow transformation networks to learn from each other. Experiments show that residual images can be effectively learned and used for attribute manipulations. The generated images remain most of the details in attribute-irrelevant areas.", "title": "" }, { "docid": "1783f837b61013391f3ff4f03ac6742e", "text": "Nowadays, many methods have been applied for data transmission of MWD system. Magnetic induction is one of the alternative technique. In this paper, detailed discussion on magnetic induction communication system is provided. The optimal coil configuration is obtained by theoretical analysis and software simulations. Based on this coil arrangement, communication characteristics of path loss and bit error rate are derived.", "title": "" }, { "docid": "fd5f3a14f731b4af60c86d7bac95e997", "text": "(Document Summary) Direct selling as a type of non-store retailing continues to increase internationally and in Australia in its use and popularity. One non-store retailing method, multilevel marketing or network marketing, has recently incurred a degree of consumer suspicion and negative perceptions. A study was developed to investigate consumer perceptions and concerns in New South Wales and Victoria. Consumers were surveyed to determine their perception of direct selling and its relationship to consumer purchasing decisions. 
Responses indicate consumers had a negative perceptions towards network marketing, while holding a low positive view of direct selling. There appears to be no influence of network marketing on consumer purchase decisions. Direct selling, as a method of non-store retailing, has continued to increase in popularity in Australia and internationally. This study investigated network marketing as a type of direct selling in Australia, by examining consumers' perceptions. The results indicate that Australian consumers were generally negative and suspicious towards network marketing in Australia.", "title": "" }, { "docid": "f16676f00cd50173d75bd61936ec200c", "text": "Training of the neural autoregressive density estimator (NADE) can be viewed as doing one step of probabilistic inference on missing values in data. We propose a new model that extends this inference scheme to multiple steps, arguing that it is easier to learn to improve a reconstruction in k steps rather than to learn to reconstruct in a single inference step. The proposed model is an unsupervised building block for deep learning that combines the desirable properties of NADE and multi-prediction training: (1) Its test likelihood can be computed analytically, (2) it is easy to generate independent samples from it, and (3) it uses an inference engine that is a superset of variational inference for Boltzmann machines. The proposed NADE-k is competitive with the state-of-the-art in density estimation on the two datasets tested.", "title": "" }, { "docid": "0759d6bd8c46a5ea5ce16c3675e07784", "text": "Because context has a robust influence on the processing of subsequent words, the idea that readers and listeners predict upcoming words has attracted research attention, but prediction has fallen in and out of favor as a likely factor in normal comprehension. We note that the common sense of this word includes both benefits for confirmed predictions and costs for disconfirmed predictions. The N400 component of the event-related potential (ERP) reliably indexes the benefits of semantic context. Evidence that the N400 is sensitive to the other half of prediction--a cost for failure--is largely absent from the literature. This raises the possibility that \"prediction\" is not a good description of what comprehenders do. However, it need not be the case that the benefits and costs of prediction are evident in a single ERP component. Research outside of language processing indicates that late positive components of the ERP are very sensitive to disconfirmed predictions. We review late positive components elicited by words that are potentially more or less predictable from preceding sentence context. This survey suggests that late positive responses to unexpected words are fairly common, but that these consist of two distinct components with different scalp topographies, one associated with semantically incongruent words and one associated with congruent words. We conclude with a discussion of the possible cognitive correlates of these distinct late positivities and their relationships with more thoroughly characterized ERP components, namely the P300, P600 response to syntactic errors, and the \"old/new effect\" in studies of recognition memory.", "title": "" }, { "docid": "7f6738aeccf7bc0e490d62e3030fdaf3", "text": "Customer churn prediction is becoming an increasingly important business analytics problem for telecom operators. 
In order to increase the efficiency of customer retention campaigns, churn prediction models need to be accurate as well as compact and interpretable. Although a myriad of techniques for churn prediction has been examined, there has been little attention for the use of Bayesian Network classifiers. This paper investigates the predictive power of a number of Bayesian Network algorithms, ranging from the Naive Bayes classifier to General Bayesian Network classifiers. Furthermore, a feature selection method based on the concept of the Markov Blanket, which is genuinely related to Bayesian Networks, is tested. The performance of the classifiers is evaluated with both the Area under the Receiver Operating Characteristic Curve and the recently introduced Maximum Profit criterion. The Maximum Profit criterion performs an intelligent optimization by targeting this fraction of the customer base which would maximize the profit generated by a retention campaign. The results of the experiments are rigorously tested and indicate that most of the analyzed techniques have a comparable performance. Some methods, however, are more preferred since they lead to compact networks, which enhances the interpretability and comprehensibility of the churn prediction models.", "title": "" }, { "docid": "5a1a40a965d05d0eb898d9ff5595618c", "text": "BACKGROUND\nKeratosis pilaris is a common skin disorder of childhood that often improves with age. Less common variants of keratosis pilaris include keratosis pilaris atrophicans and atrophodermia vermiculata.\n\n\nOBSERVATIONS\nIn this case series from dermatology practices in the United States, Canada, Israel, and Australia, the clinical characteristics of 27 patients with keratosis pilaris rubra are described. Marked erythema with follicular prominence was noted in all patients, most commonly affecting the lateral aspects of the cheeks and the proximal arms and legs, with both more marked erythema and widespread extent of disease than in keratosis pilaris. The mean age at onset was 5 years (range, birth to 12 years). Sixty-three percent of patients were male. No patients had atrophy or scarring from their lesions. Various treatments were used, with minimal or no improvement in most cases.\n\n\nCONCLUSIONS\nKeratosis pilaris rubra is a variant of keratosis pilaris, with more prominent erythema and with more widespread areas of skin involvement in some cases, but without the atrophy or hyperpigmentation noted in certain keratosis pilaris variants. It seems to be a relatively common but uncommonly reported condition.", "title": "" }, { "docid": "1a99b71b6c3c33d97c235a4d72013034", "text": "Crowdfunding systems are social media websites that allow people to donate small amounts of money that add up to fund valuable larger projects. These websites are structured around projects: finite campaigns with welldefined goals, end dates, and completion criteria. We use a dataset from an existing crowdfunding website — the school charity Donors Choose — to understand the value of completing projects. We find that completing a project is an important act that leads to larger donations (over twice as large), greater likelihood of returning to donate again, and few projects that expire close but not complete. A conservative estimate suggests that this completion bias led to over $15 million in increased donations to Donors Choose, representing approximately 16% of the total donations for the period under study. 
This bias suggests that structuring many types of collaborative work as a series of projects might increase contribution significantly. Many social media creators find it rather difficult to motivate users to actively participate and contribute their time, energy, or money to make a site valuable to others. The value in social media largely derives from interactions between and among people who are working together to achieve common goals. To encourage people to participate and contribute, social media creators regularly look for different ways of structuring participation. Some use a blog-type format, such as Facebook, Twitter, or Tumblr. Some use a collaborative document format like Wikipedia. And some use a project-based format. A project is a well-defined set of tasks that needs to be accomplished. Projects usually have a well-defined end goal — something that needs to be accomplished for the project to be considered a success — and an end date — a day by which the project needs to be completed. Much work in society is structured around projects; for example, Hollywood makes movies by organizing each movie's production as a project, hiring a new crew for each movie. Construction companies organize their work as a sequence of projects. And projects are common in knowledge-work based businesses (?). Another important place we see project-based organization is in crowdfunding websites. Crowdfunding is a relatively new phenomenon that merges modern social web technologies with project-based fundraising. It is a new form of social media that publicizes projects that need money, and allows the crowd to each make a small contribution toward the larger project. By aggregating many small donations, crowdfunding websites can fund large and interesting projects of all kinds. Kickstarter, IndieGoGo, Spot.Us, and Donors Choose are examples of crowdfunding websites targeted at specific types of projects (creative, entrepreneurial, journalism, and classroom projects respectively). Crowdfunding is becoming an increasingly popular tool for enabling project-based work. Kickstarter, for example, has raised over $400 million for over 35,000 creative projects, and Donors Choose has raised over $90 million for over 200,000 classroom projects. Additionally, crowdfunding websites represent potential new business models for a number of industries, including some struggling to find viable revenue streams: Sellaband has proven successful in helping musicians fund the creation and distribution of their music; and Spot.Us enables journalists to fund and publish investigative news. In this paper, I seek to understand why crowdfunding systems that are organized around projects are successful. Using a dataset from Donors Choose, a crowdfunding charity that funds classroom projects for K–12 school teachers, I find that completing a project is a powerful motivator that helps projects succeed in the presence of a crowd: donations that complete a project are over twice as large as normal donations. People who make these donations are more likely to return and donate in the future, and their future donations are larger. And few projects get close to completion but fail. Together, these results suggest that completing the funding for a project is an important act for the crowd, and structuring the fundraising around completable projects helps enable success.
This also has implications for other types of collaborative technologies. Background and Related Ideas", "title": "" }, { "docid": "36165cb8c6690863ed98c490ba889a9e", "text": "This paper presents a new low-cost digital control solution that maximizes the AC/DC flyback power supply efficiency. This intelligent digital approach achieves the combined benefits of high performance, low cost and high reliability in a single controller. It introduces unique multiple PWM and PFM operational modes adaptively based on the power supply load changes. While the multi-mode PWM/PFM control significantly improves the light-load efficiency and thus the overall average efficiency, it does not bring compromise to other system performance, such as audible noise, voltage ripples or regulations. It also seamlessly integrated an improved quasi-resonant switching scheme that enables valley-mode turn on in every switching cycle without causing modification to the main PWM/PFM control schemes. A digital integrated circuit (IC) that implements this solution, namely iW1696, has been fabricated and introduced to the industry recently. In addition to outlining the approach, this paper provides experimental results obtained on a 3-W (5V/550mA) cell phone charger that is built with the iW1696.", "title": "" }, { "docid": "6de3aca18d6c68f0250c8090ee042a4e", "text": "JavaScript is widely used by web developers and the complexity of JavaScript programs has increased over the last year. Therefore, the need for program analysis for JavaScript is evident. Points-to analysis for JavaScript is to determine the set of objects to which a reference variable or an object property may point. Points-to analysis for JavaScript is a basis for further program analyses for JavaScript. It has a wide range of applications in code optimization and software engineering tools. However, points-to analysis for JavaScript has not yet been developed.\n JavaScript has dynamic features such as the runtime modification of objects through addition of properties or updating of methods. We propose a points-to analysis for JavaScript which precisely handles the dynamic features of JavaScript. Our work is the first attempt to analyze the points-to behavior of JavaScript. We evaluate the analysis on a set of JavaScript programs. We also apply the analysis to a code optimization technique to show that the analysis can be practically useful.", "title": "" }, { "docid": "759b5a86bc70147842a106cf20b3a0cd", "text": "This article reviews recent advances in convex optimization algorithms for big data, which aim to reduce the computational, storage, and communications bottlenecks. We provide an overview of this emerging field, describe contemporary approximation techniques such as first-order methods and randomization for scalability, and survey the important role of parallel and distributed computation. The new big data algorithms are based on surprisingly simple principles and attain staggering accelerations even on classical problems.", "title": "" }, { "docid": "abc2d0757184f5c50e4f2b3a6dabb56c", "text": "This paper describes the hardware implementation of the RANdom Sample Consensus (RANSAC) algorithm for featured-based image registration applications. The Multiple-Input Signature Register (MISR) and the index register are used to achieve the random sampling effect. The systolic array architecture is adopted to implement the forward elimination step in the Gaussian elimination. 
The computational complexity in the forward elimination is reduced by sharing the coefficient matrix. As a result, the area of the hardware cost is reduced by more than 50%. The proposed architecture is realized using Verilog and achieves real-time calculation on 30 fps 1024 * 1024 video stream on 100 MHz clock.", "title": "" }, { "docid": "00280615cb28a6f16bde541af2bc356d", "text": "Querying with an example image is a simple and intuitive interface to retrieve information from a visual database. Most of the research in image retrieval has focused on the task of instance-level image retrieval, where the goal is to retrieve images that contain the same object instance as the query image. In this work we move beyond instance-level retrieval and consider the task of semantic image retrieval in complex scenes, where the goal is to retrieve images that share the same semantics as the query image. We show that, despite its subjective nature, the task of semantically ranking visual scenes is consistently implemented across a pool of human annotators. We also show that a similarity based on human-annotated region-level captions is highly correlated with the human ranking and constitutes a good computable surrogate. Following this observation, we learn a visual embedding of the images where the similarity in the visual space is correlated with their semantic similarity surrogate. We further extend our model to learn a joint embedding of visual and textual cues that allows one to query the database using a text modifier in addition to the query image, adapting the results to the modifier. Finally, our model can ground the ranking decisions by showing regions that contributed the most to the similarity between pairs of images, providing a visual explanation of the similarity.", "title": "" } ]
scidocsrr
c8fa010fab778c41682fd01a07f9433f
Size Estimation of Cloud Migration Projects with Cloud Migration Point (CMP)
[ { "docid": "2e9b2eccefe56b9cbf8d5793cc3f1cbb", "text": "This paper summarizes several classes of software cost estimation models and techniques: parametric models, expertise-based techniques, learning-oriented techniques, dynamics-based models, regression-based models, and composite-Bayesian techniques for integrating expertise-based and regression-based models. Experience to date indicates that neural-net and dynamics-based techniques are less mature than the other classes of techniques, but that all classes of techniques are challenged by the rapid pace of change in software technology. The primary conclusion is that no single technique is best for all situations, and that a careful comparison of the results of several approaches is most likely to produce realistic estimates.", "title": "" }, { "docid": "37679e0fdb6ba2a8629cc7792e2df17e", "text": "This presentation summarizes the results of experiments conducted over the past year to empirically validate extensions made in an attempt to use the COCOMO II Early Design model to accurately estimate web development effort and duration. The presentation starts by summarizing the challenges associated with estimating resources for web-based developments. Next, it describes a new sizing metric, web objects, and an adaptation of the Early Design model, WEBMO (Web Model), developed to meet these challenges. Both the size metric and model adaptation have been developed to address unique estimating issues identified as data from more than 40 projects was collected, normalized and analyzed in order to get a handle on the resources needed for quick-to-market software developments. The presentation concludes by discussing lessons learned from the effort and the next steps.", "title": "" } ]
[ { "docid": "1a393c0789f4dddab690ec65d145424d", "text": "INTRODUCTION: Microneedling procedures are growing in popularity for a wide variety of skin conditions. This paper comprehensively reviews the medical literature regarding skin needling efficacy and safety in all skin types and in multiple dermatologic conditions. METHODS: A PubMed literature search was conducted in all languages without restriction and bibliographies of relevant articles reviewed. Search terms included: \"microneedling,\" \"percutaneous collagen induction,\" \"needling,\" \"skin needling,\" and \"dermaroller.\" RESULTS: Microneedling is most commonly used for acne scars and cosmetic rejuvenation, however, treatment benefit has also been seen in varicella scars, burn scars, keloids, acne, alopecia, and periorbital melanosis, and has improved flap and graft survival, and enhanced transdermal delivery of topical products. Side effects were mild and self-limited, with few reports of post-inflammatory hyperpigmentation, and isolated reports of tram tracking, facial allergic granuloma, and systemic hypersensitivity. DISCUSS: Microneedling represents a safe, cost-effective, and efficacious treatment option for a variety of dermatologic conditions in all skin types. More double-blinded, randomized, controlled trials are required to make more definitive conclusions. J Drugs Dermatol. 2017;16(4):308-314..", "title": "" }, { "docid": "bdb49f702123031d2ee935a387c9888e", "text": "Standard state-machine replication involves consensus on a sequence of totally ordered requests through, for example, the Paxos protocol. Such a sequential execution model is becoming outdated on prevalent multi-core servers. Highly concurrent executions on multi-core architectures introduce non-determinism related to thread scheduling and lock contentions, and fundamentally break the assumption in state-machine replication. This tension between concurrency and consistency is not inherent because the total-ordering of requests is merely a simplifying convenience that is unnecessary for consistency. Concurrent executions of the application can be decoupled with a sequence of consensus decisions through consensus on partial-order traces, rather than on totally ordered requests, that capture the non-deterministic decisions in one replica execution and to be replayed with the same decisions on others. The result is a new multi-core friendly replicated state-machine framework that achieves strong consistency while preserving parallelism in multi-thread applications. On 12-core machines with hyper-threading, evaluations on typical applications show that we can scale with the number of cores, achieving up to 16 times the throughput of standard replicated state machines.", "title": "" }, { "docid": "f7aac91b892013cfdc1302890cb7a263", "text": "We study the problem of learning a generalizable action policy for an intelligent agent to actively approach an object of interest in indoor environment solely from its visual inputs. While scene-driven or recognition-driven visual navigation has been widely studied, prior efforts suffer severely from the limited generalization capability. In this paper, we first argue the object searching task is environment dependent while the approaching ability is general. To learn a generalizable approaching policy, we present a novel solution dubbed as GAPLE which adopts two channels of visual features: depth and semantic segmentation, as the inputs to the policy learning module. 
The empirical studies conducted on the House3D dataset as well as on a physical platform in a real world scenario validate our hypothesis, and we further provide indepth qualitative analysis.", "title": "" }, { "docid": "2633bfb54b09ec28d4e123199a1ddb37", "text": "Software complexity has increased the need for automated software testing. Most research on automating testing, however, has focused on creating test input data. While careful selection of input data is necessary to reach faulty states in a system under test, test oracles are needed to actually detect failures. In this work, we describe Dodona, a system that supports the generation of test oracles. Dodona ranks program variables based on the interactions and dependencies observed between them during program execution. Using this ranking, Dodona proposes a set of variables to be monitored, that can be used by engineers to construct assertion-based oracles. Our empirical study of Dodona reveals that it is more effective and efficient than the current state-of-the-art approach for generating oracle data sets, and can often yield oracles that are almost as effective as oracles hand-crafted by engineers without support.", "title": "" }, { "docid": "160a27e958b5e853efb090f93bf006e8", "text": "Feature matching is at the base of many computer vision problems, such as object recognition or structure from motion. Current methods rely on costly descriptors for detection and matching. In this paper, we propose a very fast binary descriptor based on BRIEF, called ORB, which is rotation invariant and resistant to noise. We demonstrate through experiments how ORB is at two orders of magnitude faster than SIFT, while performing as well in many situations. The efficiency is tested on several real-world applications, including object detection and patch-tracking on a smart phone.", "title": "" }, { "docid": "dc5693b92b0c91ef3e9239da9fd089d9", "text": "This paper surveys approaches and up-to-date information of RDF data management and then categorizes them into four main RDF storages. Then, the survey restricts the discussion to those methods that solve RDF data management using a RDBMS, since it gives better performance and query optimization as a result of the large quantity of work required to induce relational query efficiency and also the scalability of its storage comes into play, with respect to scalability and various characteristics of performance.", "title": "" }, { "docid": "4b09424630d5e27f1ed32b5798674595", "text": "Tampering detection has been increasingly attracting attention in the field of digital forensics. As a popular nonlinear smoothing filter, median filtering is often used as a post-processing operation after image forgeries such as copy-paste forgery (including copy-move and image splicing), which is of particular interest to researchers. To implement the blind detection of median filtering, this paper proposes a novel approach based on a frequency-domain feature coined the annular accumulated points (AAP). Experimental results obtained on widely used databases, which consists of various real-world photos, show that the proposed method achieves outstanding performance in distinguishing median-filtered images from original images or images that have undergone other types of manipulations, especially in the scenarios of low resolution and JPEG compression with a low quality factor. 
Moreover, our approach remains reliable even when the feature dimension decreases to 5, which is significant to save the computing time required for classification, demonstrating its great advantage to be applied in real-time processing of big multimedia data.", "title": "" }, { "docid": "06d05d4cbfd443d45993d6cc98ab22cb", "text": "Genetic deficiency of ectodysplasin A (EDA) causes X-linked hypohidrotic ectodermal dysplasia (XLHED), in which the development of sweat glands is irreversibly impaired, a condition that can lead to life-threatening hyperthermia. We observed normal development of mouse fetuses with Eda mutations after they had been exposed in utero to a recombinant protein that includes the receptor-binding domain of EDA. We administered this protein intraamniotically to two affected human twins at gestational weeks 26 and 31 and to a single affected human fetus at gestational week 26; the infants, born in week 33 (twins) and week 39 (singleton), were able to sweat normally, and XLHED-related illness had not developed by 14 to 22 months of age. (Funded by Edimer Pharmaceuticals and others.).", "title": "" }, { "docid": "d2cbf33cdd8fcc051fbc6ed53a70cdc0", "text": "This book focuses on the core question of the necessary architectural support provided by hardware to efficiently run virtual machines, and of the corresponding design of the hypervisors that run them. Virtualization is still possible when the instruction set architecture lacks such support, but the hypervisor remains more complex and must rely on additional techniques. Despite the focus on architectural support in current architectures, some historical perspective is necessary to appropriately frame the problem. The first half of the book provides the historical perspective of the theoretical framework developed four decades ago by Popek and Goldberg. It also describes earlier systems that enabled virtualization despite the lack of architectural support in hardware. As is often the case, theory defines a necessary—but not sufficient—set of features, and modern architectures are the result of the combination of the theoretical framework with insights derived from practical systems. The second half of the book describes state-of-the-art support for virtualization in both x86-64 and ARM processors. This book includes an in-depth description of the CPU, memory, and I/O virtualization of these two processor architectures, as well as case studies on the Linux/KVM, VMware, and Xen hypervisors. It concludes with a performance comparison of virtualization on current-generation x86- and ARM-based systems across multiple hypervisors.", "title": "" }, { "docid": "da989da66f8c2019adf49eae97fc2131", "text": "Psychedelic drugs are making waves as modern trials support their therapeutic potential and various media continue to pique public interest. In this opinion piece, we draw attention to a long-recognised component of the psychedelic treatment model, namely ‘set’ and ‘setting’ – subsumed here under the umbrella term ‘context’. We highlight: (a) the pharmacological mechanisms of classic psychedelics (5-HT2A receptor agonism and associated plasticity) that we believe render their effects exceptionally sensitive to context, (b) a study design for testing assumptions regarding positive interactions between psychedelics and context, and (c) new findings from our group regarding contextual determinants of the quality of a psychedelic experience and how acute experience predicts subsequent long-term mental health outcomes.
We hope that this article can: (a) inform on good practice in psychedelic research, (b) provide a roadmap for optimising treatment models, and (c) help tackle unhelpful stigma still surrounding these compounds, while developing an evidence base for long-held assumptions about the critical importance of context in relation to psychedelic use that can help minimise harms and maximise potential benefits.", "title": "" }, { "docid": "06b9f83845f3125272115894676b5e5d", "text": "For aligning DNA sequences that differ only by sequencing errors, or by equivalent errors from other sources, a greedy algorithm can be much faster than traditional dynamic programming approaches and yet produce an alignment that is guaranteed to be theoretically optimal. We introduce a new greedy alignment algorithm with particularly good performance and show that it computes the same alignment as does a certain dynamic programming algorithm, while executing over 10 times faster on appropriate data. An implementation of this algorithm is currently used in a program that assembles the UniGene database at the National Center for Biotechnology Information.", "title": "" }, { "docid": "0930ec4162eec816379ca24808768ddd", "text": "Cloud-integrated Internet of Things (IoT) is emerging as the next-generation service platform that enables smart functionality worldwide. IoT applications such as smart grid and power systems, e-health, and body monitoring applications along with large-scale environmental and industrial monitoring are increasingly generating large amounts of data that can conveniently be analyzed through cloud service provisioning. However, the nature of these applications mandates the use of secure and privacy-preserving implementation of services that ensures the integrity of data without any unwarranted exposure. This article explores the unique challenges and issues within this context of enabling secure cloud-based data analytics for the IoT. Three main applications are discussed in detail, with solutions outlined based on the use of fully homomorphic encryption systems to achieve data security and privacy over cloud-based analytical phases. The limitations of existing technologies are discussed and models proposed with regard to achieving high efficiency and accuracy in the provisioning of analytic services for encrypted data over a cloud platform.", "title": "" }, { "docid": "c89ca701d947ba6594be753470f152ac", "text": "The visualization of an image collection is the process of displaying a collection of images on a screen under some specific layout requirements. This paper focuses on an important problem that is not well addressed by the previous methods: visualizing image collections into arbitrary layout shapes while arranging images according to user-defined semantic or visual correlations (e.g., color or object category). To this end, we first propose a property-based tree construction scheme to organize images of a collection into a tree structure according to user-defined properties. In this way, images can be adaptively placed with the desired semantic or visual correlations in the final visualization layout. Then, we design a two-step visualization optimization scheme to further optimize image layouts. As a result, multiple layout effects including layout shape and image overlap ratio can be effectively controlled to guarantee a satisfactory visualization. 
Finally, we also propose a tree-transfer scheme such that visualization layouts can be adaptively changed when users select different “images of interest.” We demonstrate the effectiveness of our proposed approach through the comparisons with state-of-the-art visualization techniques.", "title": "" }, { "docid": "9a3d90ecbd12f6ef5ee9348c4af90d0b", "text": "The gene encoding the forkhead box transcription factor, FOXP2, is essential for developing the full articulatory power of human language. Mutations of FOXP2 cause developmental verbal dyspraxia (DVD), a speech and language disorder that compromises the fluent production of words and the correct use and comprehension of grammar. FOXP2 patients have structural and functional abnormalities in the striatum of the basal ganglia, which also express high levels of FOXP2. Since human speech and learned vocalizations in songbirds bear behavioral and neural parallels, songbirds provide a genuine model for investigating the basic principles of speech and its pathologies. In zebra finch Area X, a basal ganglia structure necessary for song learning, FoxP2 expression increases during the time when song learning occurs. Here, we used lentivirus-mediated RNA interference (RNAi) to reduce FoxP2 levels in Area X during song development. Knockdown of FoxP2 resulted in an incomplete and inaccurate imitation of tutor song. Inaccurate vocal imitation was already evident early during song ontogeny and persisted into adulthood. The acoustic structure and the duration of adult song syllables were abnormally variable, similar to word production in children with DVD. Our findings provide the first example of a functional gene analysis in songbirds and suggest that normal auditory-guided vocal motor learning requires FoxP2.", "title": "" }, { "docid": "9009f20f639de20d28ba01fac60db9d0", "text": "We propose strategies for selecting a good neural network architecture for modeling any specific data set. Our approach involves efficiently searching the space of possible architectures and selecting a \"best\" architecture based on estimates of generalization performance. Since an exhaustive search over the space of architectures is computationally infeasible, we propose heuristic strategies which dramatically reduce the search complexity. These employ directed search algorithms, including selecting the number of nodes via sequential network construction (SNC), sensitivity based pruning (SBP) of inputs, and optimal brain damage (OBD) pruning for weights. A selection criterion, the estimated generalization performance or prediction risk, is used to guide the heuristic search and to choose the final network. Both predicted squared error (PSE) and nonlinear cross-validation (NCV) are used for estimating the prediction risk from the available data. We apply these heuristic search and prediction risk estimation techniques to the problem of predicting corporate bond ratings. This problem is very attractive as a case study, since it is characterized by a limited set of data and by the lack of a complete a priori model which could be used to impose a structure to the network architecture.", "title": "" }, { "docid": "944f3e499b77e7ed50c74a786f9e218b", "text": "This paper describes EMBER: a labeled benchmark dataset for training machine learning models to statically detect malicious Windows portable executable files.
The dataset includes features extracted from 1.1M binary files: 900K training samples (300K malicious, 300K benign, 300K unlabeled) and 200K test samples (100K malicious, 100K benign). To accompany the dataset, we also release open source code for extracting features from additional binaries so that additional sample features can be appended to the dataset. This dataset fills a void in the information security machine learning community: a benign/malicious dataset that is large, open and general enough to cover several interesting use cases. We enumerate several use cases that we considered when structuring the dataset. Additionally, we demonstrate one use case wherein we compare a baseline gradient boosted decision tree model trained using LightGBM with default settings to MalConv, a recently published end-to-end (featureless) deep learning model for malware detection. Results show that even without hyperparameter optimization, the baseline EMBER model outperforms MalConv. The authors hope that the dataset, code and baseline model provided by EMBER will help invigorate machine learning research for malware detection, in much the same way that benchmark datasets have advanced computer vision research.", "title": "" }, { "docid": "ba920ed04c20125f5975519367bebd02", "text": "Tensor and matrix factorization methods have attracted a lot of attention recently thanks to their successful applications to information extraction, knowledge base population, lexical semantics and dependency parsing. In the first part, we will first cover the basics of matrix and tensor factorization theory and optimization, and then proceed to more advanced topics involving convex surrogates and alternative losses. In the second part we will discuss recent NLP applications of these methods and show the connections with other popular methods such as transductive learning, topic models and neural networks. The aim of this tutorial is to present in detail applied factorization methods, as well as to introduce more recently proposed methods that are likely to be useful to NLP applications.", "title": "" }, { "docid": "a5c9de4127df50d495c7372b363691cf", "text": "This book is an accompaniment to the computer software package mathStatica (which runs as an add-on to Mathematica). The book comes with two CD-ROMS: mathStatica, and a 30-day trial version of Mathematica 4.1. The mathStatica CD-ROM includes an applications pack for doing mathematical statistics, custom Mathematica palettes and an electronic version of the book that is identical to the printed text, but can be used interactively to generate animations of some of the book's figures (e.g. as a parameter is varied). (I found this last feature particularly valuable.) MathStatica has statistical operators for determining expectations (and hence characteristic functions, for example) and probabilities, for finding the distributions of transformations of random variables and generally for dealing with the kinds of problems and questions that arise in mathematical statistics. Applications include estimation, curve-fitting, asymptotics, decision theory and moment conversion formulae (e.g. central to cumulant). To give an idea of the coverage of the book: after an introductory chapter, there are three chapters on random variables, then chapters on systems of distributions (e.g. Pearson), multivariate distributions, moments, asymptotic theory, decision theory and then three chapters on estimation. There is an appendix, which deals with technical Mathematica details. 
What distinguishes mathStatica from statistical packages such as S-PLUS, R, SPSS and SAS is its ability to deal with the algebraic/symbolic problems that are the main concern of mathematical statistics. This is, of course, because it is based on Mathematica, and this is also the reason that it has a note–book interface (which enables one to incorporate text, equations and pictures into a single line), and why arbitrary-precision calculations can be performed. According to the authors, 'this book can be used as a course text in mathematical statistics or as an accompaniment to a more traditional text'. Assumed knowledge includes preliminary courses in statistics, probability and calculus. The emphasis is on problem solving. The material is supposedly pitched at the same level as Hogg and Craig (1995). However some topics are treated in much more depth than in Hogg and Craig (characteristic functions for instance, which rate less than one page in Hogg and Craig). Also, the coverage is far broader than that of Hogg and Craig; additional topics include for instance stable distributions, cumulants, Pearson families, Gram-Charlier expansions and copulae. Hogg and Craig can be used as a textbook for a third-year course in mathematical statistics in some Australian universities , whereas there is …", "title": "" }, { "docid": "9e466a4414125c0b2a41565eaeffd602", "text": "In this work, we present a part-based grasp planning approach that is capable of generating grasps that are applicable to multiple familiar objects. We show how object models can be decomposed according to their shape and local volumetric information. The resulting object parts are labeled with semantic information and used for generating robotic grasping information. We investigate how the transfer of such grasping information to familiar objects can be achieved and how the transferability of grasps can be measured. We show that the grasp transferability measure provides valuable information about how successful planned grasps can be applied to novel object instances of the same object category. We evaluate the approach in simulation, by applying it to multiple object categories and determine how successful the planned grasps can be transferred to novel, but familiar objects. In addition, we present a use case on the humanoid robot ARMAR-III.", "title": "" }, { "docid": "2901aaa10d8e7aa23f372f4e715686d5", "text": "This article describes a model of communication known as crisis and emergency risk communication (CERC). The model is outlined as a merger of many traditional notions of health and risk communication with work in crisis and disaster communication. The specific kinds of communication activities that should be called for at various stages of disaster or crisis development are outlined. Although crises are by definition uncertain, equivocal, and often chaotic situations, the CERC model is presented as a tool health communicators can use to help manage these complex events.", "title": "" } ]
scidocsrr
ac33627fa41fd9866ff98a2bab6f5668
Spectral Ensemble Clustering via Weighted K-Means: Theoretical and Practical Evidence
[ { "docid": "274f9e9f20a7ba3b29a5ab939aea68a2", "text": "Clustering validation is a long standing challenge in the clustering literature. While many validation measures have been developed for evaluating the performance of clustering algorithms, these measures often provide inconsistent information about the clustering performance and the best suitable measures to use in practice remain unknown. This paper thus fills this crucial void by giving an organized study of 16 external validation measures for K-means clustering. Specifically, we first introduce the importance of measure normalization in the evaluation of the clustering performance on data with imbalanced class distributions. We also provide normalization solutions for several measures. In addition, we summarize the major properties of these external measures. These properties can serve as the guidance for the selection of validation measures in different application scenarios. Finally, we reveal the interrelationships among these external measures. By mathematical transformation, we show that some validation measures are equivalent. Also, some measures have consistent validation performances. Most importantly, we provide a guide line to select the most suitable validation measures for K-means clustering.", "title": "" } ]
[ { "docid": "5c8c391a10f32069849d743abc5e8210", "text": "We present a massively parallel coprocessor for accelerating Convolutional Neural Networks (CNNs), a class of important machine learning algorithms. The coprocessor functional units, consisting of parallel 2D convolution primitives and programmable units performing sub-sampling and non-linear functions specific to CNNs, implement a “meta-operator” to which a CNN may be compiled to. The coprocessor is serviced by distributed off-chip memory banks with large data bandwidth. As a key feature, we use low precision data and further increase the effective memory bandwidth by packing multiple words in every memory operation, and leverage the algorithm’s simple data access patterns to use off-chip memory as a scratchpad for intermediate data, critical for CNNs. A CNN is mapped to the coprocessor hardware primitives with instructions to transfer data between the memory and coprocessor. We have implemented a prototype of the CNN coprocessor on an off-the-shelf PCI FPGA card with a single Xilinx Virtex5 LX330T FPGA and 4 DDR2 memory banks totaling 1GB. The coprocessor prototype can process at the rate of 3.4 billion multiply accumulates per second (GMACs) for CNN forward propagation, a speed that is 31x faster than a software implementation on a 2.2 GHz AMD Opteron processor. For a complete face recognition application with the CNN on the coprocessor and the rest of the image processing tasks on the host, the prototype is 6-10x faster, depending on the host-coprocessor bandwidth.", "title": "" }, { "docid": "169ea06b2ec47b77d01fe9a4d4f8a265", "text": "One of the main challenges in security today is defending against malware attacks. As trends and anecdotal evidence show, preventing these attacks, regardless of their indiscriminate or targeted nature, has proven difficult: intrusions happen and devices get compromised, even at security-conscious organizations. As a consequence, an alternative line of work has focused on detecting and disrupting the individual steps that follow an initial compromise and are essential for the successful progression of the attack. In particular, several approaches and techniques have been proposed to identify the command and control (C8C) channel that a compromised system establishes to communicate with its controller.\n A major oversight of many of these detection techniques is the design’s resilience to evasion attempts by the well-motivated attacker. C8C detection techniques make widespread use of a machine learning (ML) component. Therefore, to analyze the evasion resilience of these detection techniques, we first systematize works in the field of C8C detection and then, using existing models from the literature, go on to systematize attacks against the ML components used in these approaches.", "title": "" }, { "docid": "d357b6401cd3f33061c860adafea7feb", "text": "Six-degrees-of-freedom (6-DOF) trackers, which were mainly for professional computer applications, are now in demand by everyday consumer applications. With the requirements of consumer electronics in mind, we designed an optical 6-DOF tracker where a few photo-sensors can track the position and orientation of an LED cluster. The operating principle of the tracker is basically source localization by solving an inverse problem. We implemented a prototype system for a TV viewing environment, verified the feasibility of the operating principle, and evaluated the basic performance of the prototype system in terms of accuracy and speed. 
We also examined its application possibility to different environments, such as a tabletop computer, a tablet computer, and a mobile spatial interaction environment.", "title": "" }, { "docid": "bf5363b14779a7167be5ab8e45cd8fd4", "text": "In Information-Centric Internet of Things (ICIoT), Internet of Things (IoT) data can be cached throughout a network for close data copy retrievals. Such a distributed data caching environment, however, poses a challenge to flexible authorization in the network. To address this challenge, Ciphertext-Policy Attribute-Based Encryption (CP-ABE) has been identified as a promising approach. However, in the existing CP-ABE scheme, publishers need to retrieve attributes from a centralized server for encrypting data, which leads to high communication overhead. To solve this problem, we incorporate CP-ABE and propose a novel Distributed Publisher-Driven secure data sharing for ICIoT (DPD-ICIoT) to enable only authorized users to retrieve IoT data from distributed cache. In DPD-ICIoT, newly introduced attribute manifest is cached in the network, through which publishers can retrieve the attributes from nearby copy holders instead of a centralized attribute server. In addition, a key chain mechanism is utilized for efficient cryptographic operations, and an automatic attribute self-update mechanism is proposed to enable fast updates of attributes without querying centralized servers. According to the performance evaluation, DPD-ICIoT achieves lower bandwidth cost compared to the existing CP-ABE scheme.", "title": "" }, { "docid": "b5da410382e8ad27f012f3adac17592e", "text": "In this paper, we propose a new routing protocol, the Zone Routing Protocol (ZRP), for the Reconfigurable Wireless Networks, a large scale, highly mobile ad-hoc networking environment. The novelty of the ZRP protocol is that it is applicable to large flat-routed networks. Furthermore, through the use of the zone radius parameter, the scheme exhibits adjustable hybrid behavior of proactive and reactive routing schemes. We evaluate the performance of the protocol, showing the reduction in the number of control messages, as compared with other reactive schemes, such as flooding. INTRODUCTION Recently, there has been an increased interest in ad-hoc networking [1]. In general, ad-hoc networks are network architecture that can be rapidly deployed, without preexistence of any fixed infrastructure. A special case of ad-hoc networks, the Reconfigurable Wireless Networks (RWN), was previously introduced [2,3] to emphasize a number of special characteristics of the RWN communication environment: 3⁄4 large network coverage; large network radius, net r , 3⁄4 large number of network nodes, and 3⁄4 large range of nodal velocities (from stationary to highly mobile). In particular, the topology of the RWN is quite frequently changing, while self-adapting to the connectivity and propagation conditions and to the traffic and mobility patterns. 
Examples of the use of the RWNs are: • military (tactical) communication for fast establishment of communication infrastructure during deployment of forces in a foreign (hostile) terrain • rescue missions for communication in areas without adequate wireless coverage • national security for communication in times of national crisis, when the existing communication infrastructure is non-operational due to a natural disaster or a global war • law enforcement similar to tactical communication • commercial use for setting up communication in exhibitions, conferences, or sale presentations • education for operation of virtual classrooms • sensor networks for communication between intelligent sensors (e.g., MEMS) mounted on mobile platforms. (Footnote 1: For example, the maximal nodal velocity is such that the lifetime of a link can be between hundreds of milliseconds and a few seconds only.) Basically, there are two approaches to providing ad-hoc network connectivity: flat-routed or hierarchical network architectures. An example of a flat-routed network is shown in Figure 1 and of a two-tiered hierarchical network in Figure 2. (Figure 1: A flat-routed ad-hoc network.) In flat-routed networks, all the nodes are “equal” and the packet routing is done based on peer-to-peer connections, restricted only by the propagation conditions. In hierarchical networks, there are at least two tiers; on the lower tier, nodes in geographical proximity create peer-to-peer networks. In each one of these lower-tier networks, at least one node is designated to serve as a “gateway” to the higher tier. These “gateway” nodes create the higher-tier network, which usually requires more powerful transmitters/receivers. Although routing between nodes that belong to the same lower-tier network is based on peer-to-peer routing, routing between nodes that belong to different lower-tier networks is through the gateway nodes. (Figure 2: A two-tiered ad-hoc network, showing tier-1 networks, a tier-2 network, clusters, and cluster heads.) We will omit here the comparison of the two architectures. Nevertheless, we note that the flat-routed networks are more suitable for a highly versatile communication environment such as the RWNs. The reason is that the maintenance of the hierarchies (and the associated cluster heads) is too costly in network resources when the lifetime of the links is quite short. Thus, we chose to concentrate on the flat-routed network architecture in our study of the routing protocols for the RWN. PREVIOUS AND RELATED WORK The currently available routing protocols are inadequate for the RWN. The main problem is that they either do not support a fast-changing network architecture or do not scale well with the size of the network (number of nodes). Surprisingly, these shortcomings are present even in some routing protocols that were proposed for ad-hoc networks. More specifically, the challenge stems from the fact that, on one hand, in order to route packets in a network, the network topology needs to be known to the traversed nodes. On the other hand, in a RWN, this topology may change quite often. Also, the number of nodes may be very large. Thus, the cost of updates is quite high, in contradiction with the fact that updates are expensive in the wireless communication environment.
Furthermore, as the number of network nodes may be large, the potential number of destinations is also large, requiring large and frequent exchange of data (e.g., routes, routes updates, or routing tables) between network nodes. The wired Internet uses routing protocols based on topological broadcast, such as the OSPF [4]. These protocols are not suitable for the RWN due to the relatively large bandwidth required for update messages. In the past, routing in multi-hop packet radio networks was based on shortest-path routing algorithms [5], such as Distributed Bellman-Ford (DBF) algorithm. These algorithms suffer from very slow convergence (the “counting to infinity” problem). Besides, DBF-like algorithms incur large update message penalty. Protocols that attempted to cure some of the shortcoming of DFB, such as DestinationSequenced Distance-Vector Routing (DSDV) [6], were proposed and studied. Nevertheless, synchronization problems and extra processing overhead are common in these protocols. Other protocols that rely on the information from the predecessor of the shortest path solve the slow convergence problem of DBF (e.g., [7]). However, the processing requirements of these protocols may be quite high, because of the way they process the update messages. Use of dynamic source routing protocol, which utilizes flooding to discover a route to a destination, is described in [8]. A number of optimization techniques, such as route caching are also presented that reduce the route determination/maintenance overhead. In a highly dynamic environment, such as the RWN is, this type of protocols lead to a large delay and the techniques to reduce overhead may not perform well. A query-reply based routing protocol has been introduced recently in [9]. Practical implementation of this protocol in the RWN-s can lead, however, to high communication requirements. A new distance-vector routing protocol for packet radio networks (WRP) is presented in [10]. Upon change in the network topology, WRP relies on communicating the change to its neighbors, which effectively propagates throughout the whole network. The salient advantage of WRP is the considerable reduction in the probability of loops in the calculated routes. The main disadvantage of WRP for the RWN is in the fact that routing nodes constantly maintain full routing information in each network node, which was obtained at relatively high cost in wireless resources In [11], routing is based on temporary addresses assigned to nodes. These addresses are concatenation of the node’s addresses on a physical and a virtual networks. However, routing requires full connectivity among all the physical network nodes. Furthermore, the routing may not be optimal, as it is based on addresses, which may not be related to the geographical locations, producing a long path for communication between two close-by nodes. The above routing protocols can be classified either as proactive or as reactive. Proactive protocols attempt to continuously evaluate the routes within the network, so that when a packet needs to be forwarded, the route is already known and can be immediately used. Reactive protocols, on the other hand, invoke the route determination procedures on demand only. Thus, when a route is needed, some sort of global search procedure is employed. The advantage of the proactive schemes is that, once a route is requested, there is little delay until route is determined. 
In reactive protocols, because route information may not be available at the time a routing request is received, the delay to determine a route can be quite significant. Because of this long delay, pure reactive routing protocols may not be applicable to realtime communication. However, pure proactive schemes are likewise not appropriate for the RWN environment, as they continuously use large portion of the network capacity to keep the routing information current. Since in an RWN nodes move quite fast, and as the changes may be more frequent than the routing requests, most of this routing information is never used! This results in an excessive waste of the network capacity. What is needed is a protocol that, on one hand, initiates the route-determination procedure on-demand, but with limited cost of the global search. The introduced here routing protocol, which is based on the notion of routing zones, incurs very low overhead in route determination. It requires maintaining a small amount of routing information in each node. There is no overhead of wireless resources to maintain routing information of inactive routes. Moreover, it identifies multiple routes with no looping problems. The ZONE ROUTING PROTOCOL (ZRP) Our approach to routing in the RWN is based on the notion of a routing zone, which is defined for each node and includes the nodes whose distance (e.g., in hops) is at most some predefined number. This distance is referred to here as the zone radius, zone r . Each node is required to know the topology of the network within its routing zone only and nodes are updated about topological changes only within their routing zone. Thus, even though a network can be quite large, the updates are only locally propagated. Since for radius greater than 1 the routing zones heavily overlap, the routing tends to be extremely robust. The rout", "title": "" }, { "docid": "773d02f9ba577948cde5bb837e4cffe6", "text": "A ring oscillator physical unclonable function (RO PUF) is an application-constrained hardware security primitive that can be used for authentication and key generation. PUFs depend on variability during the fabrication process to produce random outputs that are nevertheless stable across multiple measurements. Unfortunately, RO PUFs are known to be unstable especially when implemented on an Field Programmable Gate Array (FPGA). In this work, we comprehensively evaluate the RO PUF's stability on FPGAs, and we propose a phase calibration process to improve the stability of RO PUFs. The results show that the bit errors in our PUFs are reduced to less than 1%.", "title": "" }, { "docid": "c481baeab2091672c044c889b1179b1f", "text": "Our research is based on an innovative approach that integrates computational thinking and creative thinking in CS1 to improve student learning performance. Referencing Epstein's Generativity Theory, we designed and deployed a suite of creative thinking exercises with linkages to concepts in computer science and computational thinking, with the premise that students can leverage their creative thinking skills to \"unlock\" their understanding of computational thinking. In this paper, we focus on our study on differential impacts of the exercises on different student populations. For all students there was a linear \"dosage effect\" where completion of each additional exercise increased retention of course content. The impacts on course grades, however, were more nuanced. 
CS majors had a consistent increase for each exercise, while non-majors benefited more from completing at least three exercises. It was also important for freshmen to complete all four exercises. We did find differences between women and men but cannot draw conclusions.", "title": "" }, { "docid": "3fa70c2667c6dbe179a7e17e44571727", "text": "Abstract—For the past decade, many image segmentation techniques have been proposed. These segmentation techniques can be categorized into three classes: (1) characteristic feature thresholding or clustering, (2) edge detection, and (3) region extraction. This survey summarizes some of these techniques. In the area of biomedical image segmentation, most proposed techniques fall into the categories of characteristic feature thresholding or clustering and edge detection.", "title": "" }, { "docid": "24a6ad4d167290bec62a044580635aa0", "text": "We introduce HyperLex—a data set and evaluation resource that quantifies the extent of semantic category membership, that is, the type-of relation, also known as the hyponymy–hypernymy or lexical entailment (LE) relation, between 2,616 concept pairs. Cognitive psychology research has established that typicality and category/class membership are computed in human semantic memory as a gradual rather than binary relation. Nevertheless, most NLP research and existing large-scale inventories of concept category membership (WordNet, DBPedia, etc.) treat category membership and LE as binary. To address this, we asked hundreds of native English speakers to indicate typicality and strength of category membership between a diverse range of concept pairs on a crowdsourcing platform. Our results confirm that category membership and LE are indeed more gradual than binary. We then compare these human judgments with the predictions of automatic systems, which reveals a huge gap between human performance and state-of-the-art LE, distributional and representation learning models, and substantial differences between the models themselves. We discuss a pathway for improving semantic models to overcome this discrepancy, and indicate future application areas for improved graded LE systems.", "title": "" }, { "docid": "f0d55892fb927c5c5324cfb7b8380bda", "text": "The paper presents the application of data mining methods for recognizing the most significant genes and gene sequences (treated as features) stored in a dataset of gene expression microarray. The investigations are performed for autism data. A few chosen methods of feature selection have been applied and their results integrated into the final outcome. In this way we find the contents of a small set of the most important genes associated with autism. They have been applied in the classification procedure aimed at recognition of autism from reference group members. The results of numerical experiments concerning selection of the most important genes and classification of the cases on the basis of the selected genes will be discussed. The main contribution of the paper is in developing a fusion system that integrates the results of many selection approaches into the final set most closely associated with autism. We have also proposed a special procedure for estimating the number of highest-rank genes used in the classification procedure.", "title": "" }, { "docid": "d1e953d260c6bfe6f8e52417d90010fd", "text": "The 2011 UN high-level meeting on non-communicable diseases (NCDs) called for multisectoral action including with the private sector and industry.
However, through the sale and promotion of tobacco, alcohol, and ultra-processed food and drink (unhealthy commodities), transnational corporations are major drivers of global epidemics of NCDs. What role then should these industries have in NCD prevention and control? We emphasise the rise in sales of these unhealthy commodities in low-income and middle-income countries, and consider the common strategies that the transnational corporations use to undermine NCD prevention and control. We assess the effectiveness of self-regulation, public-private partnerships, and public regulation models of interaction with these industries and conclude that unhealthy commodity industries should have no role in the formation of national or international NCD policy. Despite the common reliance on industry self-regulation and public-private partnerships, there is no evidence of their effectiveness or safety. Public regulation and market intervention are the only evidence-based mechanisms to prevent harm caused by the unhealthy commodity industries.", "title": "" }, { "docid": "b148cfa9a0c03c6ca0af7aa8e007d39b", "text": "Feedforward deep neural networks (DNNs), artificial neural networks with multiple hidden layers, have recently demonstrated a record-breaking performance in multiple areas of applications in computer vision and speech processing. Following the success, DNNs have been applied to neuroimaging modalities including functional/structural magnetic resonance imaging (MRI) and positron-emission tomography data. However, no study has explicitly applied DNNs to 3D whole-brain fMRI volumes and thereby extracted hidden volumetric representations of fMRI that are discriminative for a task performed as the fMRI volume was acquired. Our study applied fully connected feedforward DNN to fMRI volumes collected in four sensorimotor tasks (i.e., left-hand clenching, right-hand clenching, auditory attention, and visual stimulus) undertaken by 12 healthy participants. Using a leave-one-subject-out cross-validation scheme, a restricted Boltzmann machine-based deep belief network was pretrained and used to initialize weights of the DNN. The pretrained DNN was fine-tuned while systematically controlling weight-sparsity levels across hidden layers. Optimal weight-sparsity levels were determined from a minimum validation error rate of fMRI volume classification. Minimum error rates (mean±standard deviation; %) of 6.9 (±3.8) were obtained from the three-layer DNN with the sparsest condition of weights across the three hidden layers. These error rates were even lower than the error rates from the single-layer network (9.4±4.6) and the two-layer network (7.4±4.1). The estimated DNN weights showed spatial patterns that are remarkably task-specific, particularly in the higher layers. The output values of the third hidden layer represented distinct patterns/codes of the 3D whole-brain fMRI volume and encoded the information of the tasks as evaluated from representational similarity analysis. Our reported findings show the ability of the DNN to classify a single fMRI volume based on the extraction of hidden representations of fMRI volumes associated with tasks across multiple hidden layers. 
Our study may be beneficial to the automatic classification/diagnosis of neuropsychiatric and neurological diseases and prediction of disease severity and recovery in (pre-) clinical settings using fMRI volumes without requiring an estimation of activation patterns or ad hoc statistical evaluation.", "title": "" }, { "docid": "ca56254b31e69745603a23ce0fc8ecb8", "text": "We propose the use of nonlocal operators to define new types of flows and functionals for image processing and elsewhere. A main advantage over classical PDE-based algorithms is the ability to handle better textures and repetitive structures. This topic can be viewed as an extension of spectral graph theory and the diffusion geometry framework to functional analysis and PDE-like evolutions. Some possible application and numerical examples are given, as is a general framework for approximating Hamilton-Jacobi equations on arbitrary grids in high demensions, e.g., for control theory.", "title": "" }, { "docid": "a02a53a7fe03bc687d841e67ee08f641", "text": "Spontaneous gestures that accompany speech are related to both verbal and spatial processes. We argue that gestures emerge from perceptual and motor simulations that underlie embodied language and mental imagery. We first review current thinking about embodied cognition, embodied language, and embodied mental imagery. We then provide evidence that gestures stem from spatial representations and mental images. We then propose the gestures-as-simulated-action framework to explain how gestures might arise from an embodied cognitive system. Finally, we compare this framework with other current models of gesture production, and we briefly outline predictions that derive from the framework.", "title": "" }, { "docid": "7f18fbfa3260c06a8d8f04a14f92a90d", "text": "Resonant Bits proposes giving digital information resonant dynamic properties, requiring skill and concerted e↵ort for interaction. This paper applies resonant interaction to musical control, exploring musical instruments that are controlled through both purposeful and subconscious resonance. We detail three exploratory prototypes, the first two illustrating the use of resonant gestures and the third focusing on the detection and use of the ideomotor (subconscious micro-movement) e↵ect.", "title": "" }, { "docid": "8439dbba880179895ab98a521b4c254f", "text": "Given the increase in demand for sustainable livelihoods for coastal villagers in developing countries and for the commercial eucheumoid Kappaphycus alvarezii (Doty) Doty, for the carrageenan industry, there is a trend towards introducing K. alvarezii to more countries in the tropical world for the purpose of cultivation. However, there is also increasing concern over the impact exotic species have on endemic ecosystems and biodiversity. Quarantine and introduction procedures were tested in northern Madagascar and are proposed for all future introductions of commercial eucheumoids (K. alvarezii, K. striatum and Eucheuma denticulatum). In addition, the impact and extent of introduction of K. alvarezii was measured on an isolated lagoon in the southern Lau group of Fiji. It is suggested that, in areas with high human population density, the overwhelming benefits to coastal ecosystems by commercial eucheumoid cultivation far outweigh potential negative impacts. However, quarantine and introduction procedures should be followed. In addition, introduction should only take place if a thorough survey has been conducted and indicates the site is appropriate. 
Subsequently, the project requires that a well designed and funded cultivation development programme, with a management plan and an assured market, is in place in order to make certain cultivation, and subsequently the introduced algae, will not be abandoned at a later date. KAPPAPHYCUS ALVAREZI", "title": "" }, { "docid": "2f7d487059a77b582c3e0a33fd5d38af", "text": "Disturbance regimes are changing rapidly, and the consequences of such changes for ecosystems and linked social-ecological systems will be profound. This paper synthesizes current understanding of disturbance with an emphasis on fundamental contributions to contemporary landscape and ecosystem ecology, then identifies future research priorities. Studies of disturbance led to insights about heterogeneity, scale, and thresholds in space and time and catalyzed new paradigms in ecology. Because they create vegetation patterns, disturbances also establish spatial patterns of many ecosystem processes on the landscape. Drivers of global change will produce new spatial patterns, altered disturbance regimes, novel trajectories of change, and surprises. Future disturbances will continue to provide valuable opportunities for studying pattern-process interactions. Changing disturbance regimes will produce acute changes in ecosystems and ecosystem services over the short (years to decades) and long-term (centuries and beyond). Future research should address questions related to (1) disturbances as catalysts of rapid ecological change, (2) interactions among disturbances, (3) relationships between disturbance and society, especially the intersection of land use and disturbance, and (4) feedbacks from disturbance to other global drivers. Ecologists should make a renewed and concerted effort to understand and anticipate the causes and consequences of changing disturbance regimes.", "title": "" }, { "docid": "e32068682c313637f97718e457914381", "text": "Optimal load shedding is a very critical issue in power systems. It plays a vital role, especially in third world countries. A sudden increase in load can affect the important parameters of the power system like voltage, frequency and phase angle. This paper presents a case study of Pakistan’s power system, where the generated power, the load demand, frequency deviation and load shedding during a 24-hour period have been provided. An artificial neural network ensemble is aimed for optimal load shedding. The objective of this paper is to maintain power system frequency stability by shedding an accurate amount of load. Due to its fast convergence and improved generalization ability, the proposed algorithm helps to deal with load shedding in an efficient manner.", "title": "" }, { "docid": "5f007e018f9abc74d1d7d188cd077fe7", "text": "Due to the intensified need for improved information security, many organisations have established information security awareness programs to ensure that their employees are informed and aware of security risks, thereby protecting themselves and their profitability. In order for a security awareness program to add value to an organisation and at the same time make a contribution to the field of information security, it is necessary to have a set of methods to study and measure its effect. The objective of this paper is to report on the development of a prototype model for measuring information security awareness in an international mining company. Following a description of the model, a brief discussion of the application results is presented. a 2006 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "a7cdf016168071998f8dd3e37cdf708e", "text": "Wireless power transfer is commonly realized by means of near-field inductive coupling and is critical to many existing and emerging applications in biomedical engineering. This paper presents a closed form analytical solution for the optimum load that achieves the maximum possible power efficiency under arbitrary input impedance conditions based on the general two-port parameters of the network. The two-port approach allows one to predict the power transfer efficiency at any frequency, any type of coil geometry and through any type of media surrounding the coils. Moreover, the results are applicable to any form of passive power transfer such as provided by inductive or capacitive coupling. Our results generalize several well-known special cases. The formulation allows the design of an optimized wireless power transfer link through biological media using readily available EM simulation software. The proposed method effectively decouples the design of the inductive coupling two-port from the problem of loading and power amplifier design. Several case studies are provided for typical applications.", "title": "" } ]
scidocsrr
0a1647ae4e390b64691e9762ad06cba8
Object tracking using firefly algorithm
[ { "docid": "d90b68b84294d0a56d71b3c5b1a5eeb7", "text": "Nature-inspired algorithms are among the most powerful algorithms for optimization. This paper intends to provide a detailed description of a new Firefly Algorithm (FA) for multimodal optimization applications. We will compare the proposed firefly algorithm with other metaheuristic algorithms such as particle swarm optimization (PSO). Simulations and results indicate that the proposed firefly algorithm is superior to existing metaheuristic algorithms. Finally we will discuss its applications and implications for further research.", "title": "" } ]
[ { "docid": "7a12a94d1d96e07b45bcf1577e4df360", "text": "one becomes available. The Master must also inform each Reduce task that the location of its input from that Map task has changed. Dealing with a failure at the node of a Reduce worker is simpler. The Master simply sets the status of its currently executing Reduce tasks to idle. These will be rescheduled on another reduce worker later. Exercise 2.2.1 : Suppose we execute the word-count map-reduce program described in this section on a large repository such as a copy of the Web. We shall use 100 Map tasks and some number of Reduce tasks. (a) Suppose we do not use a combiner at the Map tasks. Do you expect there to be significant skew in the times taken by the various reducers to process their value list? Why or why not? (b) If we combine the reducers into a small number of Reduce tasks, say 10 tasks, at random, do you expect the skew to be significant? What if we instead combine the reducers into 10,000 Reduce tasks? ! (c) Suppose we do use a combiner at the 100 Map tasks. Do you expect skew to be significant? Why or why not? 2.3 Algorithms Using Map-Reduce Map-reduce is not a solution to every problem, not even every problem that profitably can use many compute nodes operating in parallel. As we mentioned in Section 2.1.2, the entire distributed-file-system milieu makes sense only when files are very large and are rarely updated in place. Thus, we would not expect to use either a DFS or an implementation of map-reduce for managing on-line retail sales, even though a large on-line retailer such as Amazon.com uses thousands of compute nodes when processing requests over the Web. The reason is that the principal operations on Amazon data involve responding to searches for products, recording sales, and so on, processes that involve relatively little calculation and that change the database. 2 On the other hand, Amazon might use map-reduce to perform certain analytic queries on large amounts of data, such as finding for each user those users whose buying patterns were most similar. The original purpose for which the Google implementation of map-reduce was created was to execute very large matrix-vector multiplications as are needed in the calculation of PageRank (See Chapter 5). We shall see that matrix-vector and matrix-matrix calculations fit nicely into the map-reduce 2 Remember that even looking at a product …", "title": "" }, { "docid": "743aeaa668ba32e6561e9e62015e24cd", "text": "A smart city enables the effective utilization of resources and better quality of services to the citizens. To provide services such as air quality management, weather monitoring and automation of homes and buildings in a smart city, the basic parameters are temperature, humidity and CO2. This paper presents a customised design of an Internet of Things (IoT) enabled environment monitoring system to monitor temperature, humidity and CO2. In developed system, data is sent from the transmitter node to the receiver node. The data received at the receiver node is monitored and recorded in an excel sheet in a personal computer (PC) through a Graphical User Interface (GUI), made in LabVIEW. An Android application has also been developed through which data is transferred from LabVIEW to a smartphone, for monitoring data remotely. 
The results and the performance of the proposed system are discussed.", "title": "" }, { "docid": "983cae67894ae61b2301dc79713969c0", "text": "Although there is no analytical framework for assessing the organizational benefits of ERP systems, several researchers have indicated that the balanced scorecard (BSC) approach may be an appropriate technique for evaluating the performance of ERP systems. This paper fills this gap in the literature by providing a balanced-scorecard-based framework for valuing the strategic contributions of an ERP system. Using a successful SAP implementation by a major international aircraft engine manufacturing and service organization as a case study, this paper illustrates that an ERP system does indeed impact the business objectives of the firm and derives a new innovative ERP framework for valuing the strategic impacts of ERP systems. The ERP valuation framework, called here an ERP scorecard, integrates the four Kaplan and Norton balanced scorecard dimensions with Zuboff's automate, informate and transformate goals of information systems to provide a practical approach for measuring the contributions and impacts of ERP systems on the strategic goals of the company.", "title": "" }, { "docid": "ba900055b2038495e983dd60f7e50ea0", "text": "In the genetic code, UGA serves as a stop signal and a selenocysteine codon, but no computational methods for identifying its coding function are available. Consequently, most selenoprotein genes are misannotated. We identified selenoprotein genes in sequenced mammalian genomes by methods that rely on identification of selenocysteine insertion RNA structures, the coding potential of UGA codons, and the presence of cysteine-containing homologs. The human selenoproteome consists of 25 selenoproteins.", "title": "" }, { "docid": "43085c5afcf3a576a3f2169de4402645", "text": "In this study, we systematically investigate the impact of class imbalance on classification performance of convolutional neural networks (CNNs) and compare frequently used methods to address the issue. Class imbalance is a common problem that has been comprehensively studied in classical machine learning, yet very limited systematic research is available in the context of deep learning. In our study, we use three benchmark datasets of increasing complexity, MNIST, CIFAR-10 and ImageNet, to investigate the effects of imbalance on classification and perform an extensive comparison of several methods to address the issue: oversampling, undersampling, two-phase training, and thresholding that compensates for prior class probabilities. Our main evaluation metric is area under the receiver operating characteristic curve (ROC AUC) adjusted to multi-class tasks since the overall accuracy metric is associated with notable difficulties in the context of imbalanced data.
Based on results from our experiments we conclude that (i) the effect of class imbalance on classification performance is detrimental; (ii) the method of addressing class imbalance that emerged as dominant in almost all analyzed scenarios was oversampling; (iii) oversampling should be applied to the level that completely eliminates the imbalance, whereas the optimal undersampling ratio depends on the extent of imbalance; (iv) as opposed to some classical machine learning models, oversampling does not cause overfitting of CNNs; (v) thresholding should be applied to compensate for prior class probabilities when overall number of properly classified cases is of interest.", "title": "" }, { "docid": "d0a4bc15208b12b1647eb21e7ca9cc6c", "text": "The investment in an automated fabric defect detection system is more than economical when reduction in labor cost and associated benefits are considered. The development of a fully automated web inspection system requires robust and efficient fabric defect detection algorithms. The inspection of real fabric defects is particularly challenging due to the large number of fabric defect classes, which are characterized by their vagueness and ambiguity. Numerous techniques have been developed to detect fabric defects and the purpose of this paper is to categorize and/or describe these algorithms. This paper attempts to present the first survey on fabric defect detection techniques presented in about 160 references. Categorization of fabric defect detection techniques is useful in evaluating the qualities of identified features. The characterization of real fabric surfaces using their structure and primitive set has not yet been successful. Therefore, on the basis of the nature of features from the fabric surfaces, the proposed approaches have been characterized into three categories; statistical, spectral and model-based. In order to evaluate the state-of-the-art, the limitations of several promising techniques are identified and performances are analyzed in the context of their demonstrated results and intended application. The conclusions from this paper also suggest that the combination of statistical, spectral and model-based approaches can give better results than any single approach, and is suggested for further research.", "title": "" }, { "docid": "c1a8e30586aad77395e429556545675c", "text": "We investigate techniques for analysis and retrieval of object trajectories in a two or three dimensional space. Such kind of data usually contain a great amount of noise, that makes all previously used metrics fail. Therefore, here we formalize non-metric similarity functions based on the Longest Common Subsequence (LCSS), which are very robust to noise and furthermore provide an intuitive notion of similarity between trajectories by giving more weight to the similar portions of the sequences. Stretching of sequences in time is allowed, as well as global translating of the sequences in space. Efficient approximate algorithms that compute these similarity measures are also provided. We compare these new methods to the widely used Euclidean and Time Warping distance functions (for real and synthetic data) and show the superiority of our approach, especially under the strong presence of noise. We prove a weaker version of the triangle inequality and employ it in an indexing structure to answer nearest neighbor queries. 
Finally, we present experimental results that validate the accuracy and efficiency of our approach.", "title": "" }, { "docid": "8b7aab188ac4b6e4e777dfd1c670fab3", "text": "In this paper, we have designed a newly shaped narrowband microstrip antenna operating at nearly 2.45 GHz based on the transmission-line model. We have created a reversed 'Arrow' shaped slot at the edge of the opposite side of the microstrip line feed to improve return loss and minimize VSWR, which are required for better impedance matching. After simulating the design, we have got higher return loss (approximately -41 dB) and lower VSWR (approximately 1.02:1) at 2.442 GHz. The radiation pattern of the antenna is unidirectional, which is suitable for both fixed RFID tag and reader. The gain of this antenna is 9.67 dB. The design has been simulated in CST Microwave Studio 2011.", "title": "" }, { "docid": "a55a5785375031a7a967b0d65a2afd4e", "text": "Successful negotiation of everyday life would seem to require people to possess insight about deficiencies in their intellectual and social skills. However, people tend to be blissfully unaware of their incompetence. This lack of awareness arises because poor performers are doubly cursed: Their lack of skill deprives them not only of the ability to produce correct responses, but also of the expertise necessary to surmise that they are not producing them. People base their perceptions of performance, in part, on their preconceived notions about their skills. Because these notions often do not correlate with objective performance, they can lead people to make judgments about their performance that have little to do with actual accomplishment.", "title": "" }, { "docid": "2c39eafa87d34806dd1897335fdfe41c", "text": "One of the issues facing credit card fraud detection systems is that a significant percentage of transactions labeled as fraudulent are in fact legitimate. These “false alarms” delay the detection of fraudulent transactions and can cause unnecessary concerns for customers. In this study, over 1 million unique credit card transactions from 11 months of data from a large Canadian bank were analyzed. A meta-classifier model was applied to the transactions after being analyzed by the Bank’s existing neural network based fraud detection algorithm. This meta-classifier model consists of 3 base classifiers constructed using the decision tree, naïve Bayesian, and k-nearest neighbour algorithms. The naïve Bayesian algorithm was also used as the meta-level algorithm to combine the base classifier predictions to produce the final classifier. Results from the research show that when a meta-classifier was deployed in series with the Bank’s existing fraud detection algorithm improvements of up to 28% to their existing system can be achieved.", "title": "" }, { "docid": "c716b38ed5f8172cedc7310ff1a9eb1a", "text": "Spam is considered an invasion of privacy. Its changeable structures and variability raise the need for new spam classification techniques. The present study proposes using Bayesian additive regression trees (BART) for spam classification and evaluates its performance against other classification methods, including logistic regression, support vector machines, classification and regression trees, neural networks, random forests, and naive Bayes. BART in its original form is not designed for such problems, hence we modify BART and make it applicable to classification problems.
We evaluate the classifiers using three spam datasets; Ling-Spam, PU1, and Spambase to determine the predictive accuracy and the false positive rate.", "title": "" }, { "docid": "14360f8801fcff22b7a0059b322ebf9a", "text": "Supplying realistically textured 3D city models at ground level promises to be useful for pre-visualizing upcoming traffic situations in car navigation systems. Because this pre-visualization can be rendered from the expected future viewpoints of the driver, the required maneuver will be more easily understandable. 3D city models can be reconstructed from the imagery recorded by surveying vehicles. The vastness of image material gathered by these vehicles, however, puts extreme demands on vision algorithms to ensure their practical usability. Algorithms need to be as fast as possible and should result in compact, memory efficient 3D city models for future ease of distribution and visualization. For the considered application, these are not contradictory demands. Simplified geometry assumptions can speed up vision algorithms while automatically guaranteeing compact geometry models. In this paper, we present a novel city modeling framework which builds upon this philosophy to create 3D content at high speed. Objects in the environment, such as cars and pedestrians, may however disturb the reconstruction, as they violate the simplified geometry assumptions, leading to visually unpleasant artifacts and degrading the visual realism of the resulting 3D city model. Unfortunately, such objects are prevalent in urban scenes. We therefore extend the reconstruction framework by integrating it with an object recognition module that automatically detects cars in the input video streams and localizes them in 3D. The two components of our system are tightly integrated and benefit from each other’s continuous input. 3D reconstruction delivers geometric scene context, which greatly helps improve detection precision. The detected car locations, on the other hand, are used to instantiate virtual placeholder models which augment the visual realism of the reconstructed city model.", "title": "" }, { "docid": "df6c7f13814178d7b34703757899d6b1", "text": "Regression testing of natural language systems is problematic for two main reasons: component input and output is complex, and system behaviour is context-dependent. We have developed a generic approach which solves both of these issues. We describe our regression tool, CONTEST, which supports context-dependent testing of dialogue system components, and discuss the regression test sets we developed, designed to effectively isolate components from changes and problems earlier in the pipeline. We believe that the same approach can be used in regression testing for other dialogue systems, as well as in testing any complex NLP system containing multiple components.", "title": "" }, { "docid": "8760b523ca90dccf7a9a197622bda043", "text": "The increasing need for better performance, protection, and reliability in shipboard power distribution systems, and the increasing availability of power semiconductors is generating the potential for solid state circuit breakers to replace traditional electromechanical circuit breakers. This paper reviews various solid state circuit breaker topologies that are suitable for low and medium voltage shipboard system protection. 
Depending on the application solid state circuit breakers can have different main circuit topologies, fault detection methods, commutation methods of power semiconductor devices, and steady state operation after tripping. This paper provides recommendations on the solid state circuit breaker topologies that provides the best performance-cost tradeoff based on the application.", "title": "" }, { "docid": "3a092c071129e2ffced1800f2b4d519c", "text": "Human actions captured in video sequences are threedimensional signals characterizing visual appearance and motion dynamics. To learn action patterns, existing methods adopt Convolutional and/or Recurrent Neural Networks (CNNs and RNNs). CNN based methods are effective in learning spatial appearances, but are limited in modeling long-term motion dynamics. RNNs, especially Long Short- Term Memory (LSTM), are able to learn temporal motion dynamics. However, naively applying RNNs to video sequences in a convolutional manner implicitly assumes that motions in videos are stationary across different spatial locations. This assumption is valid for short-term motions but invalid when the duration of the motion is long.,,In this work, we propose Lattice-LSTM (L2STM), which extends LSTM by learning independent hidden state transitions of memory cells for individual spatial locations. This method effectively enhances the ability to model dynamics across time and addresses the non-stationary issue of long-term motion dynamics without significantly increasing the model complexity. Additionally, we introduce a novel multi-modal training procedure for training our network. Unlike traditional two-stream architectures which use RGB and optical flow information as input, our two-stream model leverages both modalities to jointly train both input gates and both forget gates in the network rather than treating the two streams as separate entities with no information about the other. We apply this end-to-end system to benchmark datasets (UCF-101 and HMDB-51) of human action recognition. Experiments show that on both datasets, our proposed method outperforms all existing ones that are based on LSTM and/or CNNs of similar model complexities.", "title": "" }, { "docid": "b448ea63495d08866ba7759a4ede6895", "text": "Heterogeneous sensor data fusion is a challenging field that has gathered significant interest in recent years. Two of these challenges are learning from data with missing values, and finding shared representations for multimodal data to improve inference and prediction. In this paper, we propose a multimodal data fusion framework, the deep multimodal encoder (DME), based on deep learning techniques for sensor data compression, missing data imputation, and new modality prediction under multimodal scenarios. While traditional methods capture only the intramodal correlations, DME is able to mine both the intramodal correlations in the initial layers and the enhanced intermodal correlations in the deeper layers. In this way, the statistical structure of sensor data may be better exploited for data compression. By incorporating our new objective function, DME shows remarkable ability for missing data imputation tasks in sensor data. The shared multimodal representation learned by DME may be used directly for predicting new modalities. 
In experiments with a real-world dataset collected from a 40-node agriculture sensor network which contains three modalities, DME can achieve a root mean square error (RMSE) of missing data imputation which is only 20% of the traditional methods like K-nearest neighbors and sparse principal component analysis and the performance is robust to different missing rates. It can also reconstruct temperature modality from humidity and illuminance with an RMSE of $7\\; {}^{\\circ }$C, directly from a highly compressed (2.1%) shared representation that was learned from incomplete (80% missing) data.", "title": "" }, { "docid": "29f1c91fccfbeaa7ec352bdbe1c300c6", "text": "Absorption in the stellar Lyman-alpha (Lyalpha) line observed during the transit of the extrasolar planet HD 209458b in front of its host star reveals high-velocity atomic hydrogen at great distances from the planet. This has been interpreted as hydrogen atoms escaping from the planet's exosphere, possibly undergoing hydrodynamic blow-off, and being accelerated by stellar radiation pressure. Energetic neutral atoms around Solar System planets have been observed to form from charge exchange between solar wind protons and neutral hydrogen from the planetary exospheres, however, and this process also should occur around extrasolar planets. Here we show that the measured transit-associated Lyalpha absorption can be explained by the interaction between the exosphere of HD 209458b and the stellar wind, and that radiation pressure alone cannot explain the observations. As the stellar wind protons are the source of the observed energetic neutral atoms, this provides a way of probing stellar wind conditions, and our model suggests a slow and hot stellar wind near HD 209458b at the time of the observations.", "title": "" }, { "docid": "f57fddbff1acaf3c4c58f269b6221cf7", "text": "PURPOSE OF REVIEW\nCry-fuss problems are among the most common clinical presentations in the first few months of life and are associated with adverse outcomes for some mothers and babies. Cry-fuss behaviour emerges out of a complex interplay of cultural, psychosocial, environmental and biologic factors, with organic disturbance implicated in only 5% of cases. A simplistic approach can have unintended consequences. This article reviews recent evidence in order to update clinical management.\n\n\nRECENT FINDINGS\nNew research is considered in the domains of organic disturbance, feed management, maternal health, sleep management, and sensorimotor integration. This transdisciplinary approach takes into account the variable neurodevelopmental needs of healthy infants, the effects of feeding management on the highly plastic neonatal brain, and the bi-directional brain-gut-enteric microbiota axis. An individually tailored, mother-centred and family-centred approach is recommended.\n\n\nSUMMARY\nThe family of the crying baby requires early intervention to assess for and manage potentially treatable problems. Cross-disciplinary collaboration is often necessary if outcomes are to be optimized.", "title": "" }, { "docid": "b60850caccf9be627b15c7c83fb3938e", "text": "Research and development of hip stem implants started centuries ago. However, there is still no yet an optimum design that fulfills all the requirements of the patient. New manufacturing technologies have opened up new possibilities for complicated theoretical designs to become tangible reality. 
Current trends in the development of hip stems focus on applying porous structures to improve osseointegration and reduce stem stiffness in order to approach the stiffness of the natural human bone. In this field, modern additive manufacturing machines offer unique flexibility in manufacturing parts that combine variable-density mesh structures with solid and porous metal in a single manufacturing process. Furthermore, additive manufacturing machines have become powerful competitors in the economical mass production of hip implants, owing to their ability to manufacture several parts with different geometries in a single setup and with minimum material consumption. This paper reviews the application of additive manufacturing (AM) techniques in the production of innovative porous femoral hip stem designs.", "title": "" } ]
scidocsrr
e981811cd59cecbef0fe719bccc6914a
On the Algorithmic Implementation of Stochastic Discrimination
[ { "docid": "8bb5acdafefc35f6c1adf00cfa47ac2c", "text": "A general method is introduced for separating points in multidimensional spaces through the use of stochastic processes. This technique is called stochastic discrimination.", "title": "" } ]
[ { "docid": "c2b0dfb06f82541fca0d2700969cf0d9", "text": "Magnetic resonance is an exceptionally powerful and versatile measurement technique. The basic structure of a magnetic resonance experiment has remained largely unchanged for almost 50 years, being mainly restricted to the qualitative probing of only a limited set of the properties that can in principle be accessed by this technique. Here we introduce an approach to data acquisition, post-processing and visualization—which we term ‘magnetic resonance fingerprinting’ (MRF)—that permits the simultaneous non-invasive quantification of multiple important properties of a material or tissue. MRF thus provides an alternative way to quantitatively detect and analyse complex changes that can represent physical alterations of a substance or early indicators of disease. MRF can also be used to identify the presence of a specific target material or tissue, which will increase the sensitivity, specificity and speed of a magnetic resonance study, and potentially lead to new diagnostic testing methodologies. When paired with an appropriate pattern-recognition algorithm, MRF inherently suppresses measurement errors and can thus improve measurement accuracy.", "title": "" }, { "docid": "f18dffe56c54c537bae8862a85132a32", "text": "A vast territory for research is open from mimicking the behaviour of microorganisms to defend themselves from competitors. Antibiotics secreted by bacteria or fungi can be copied to yield efficient molecules which are active against infectious diseases. On the other hand, nanotechnology provides novel techniques to probe and manipulate single atoms and molecules. Nanoparticles are finding a large variety of biomedical and pharmaceutical applications, since their size scale can be similar to that of biological molecules (e.g. proteins, DNA) and structures (e.g. viruses and bacteria). They are currently being used in imaging (El-Sayed et al., 2005), biosensing (Medintz et al.,2005), biomolecules immobilization (Carmona-Ribeiro, 2010a), gene and drug delivery (Carmona-Ribeiro, 2003; CarmonaRibeiro, 2010b) and vaccines (O ́Hagan et al., 2000; Lincopan & Carmona-Ribeiro, 2009; Lincopan et al., 2009). They can also incorporate antimicrobial agents (antibiotics, metals, peptides, surfactants and lipids), can be the antimicrobial agent or used to produce antimicrobial devices. Antimicrobial agents found in Nature can sucessfully be copied for synthesis of novel biomimetic but synthetic compounds. In this review, synthetic cationic surfactants and lipids, natural and synthetic peptides or particles, and hybrid antimicrobial films are overviewed unraveling novel antimicrobial approaches against infectious diseases.", "title": "" }, { "docid": "ecb146ae27419d9ca1911dc4f13214c1", "text": "In this paper, a simple mix integer programming for distribution center location is proposed. Based on this simple model, we introduce two important factors, transport mode and carbon emission, and extend it a model to describe the location problem for green supply chain. Sequently, IBM Watson implosion technologh (WIT) tool was introduced to describe them and solve them. By changing the price of crude oil, we illustrate the its impact on distribution center locations and transportation mode option for green supply chain. 
From the case studies, we find that, as the crude oil price increases, the profit of the whole supply chain decreases and carbon emissions also decrease to some degree, while the number of opened distribution centers increases.", "title": "" }, { "docid": "c8a7330443596d17fefe9f081b7ea5a4", "text": "The advance of technologies in several areas has allowed the development of smart city applications, which can improve the way of life in modern cities. When employing visual sensors in that scenario, still images and video streams may be retrieved from monitored areas, potentially providing valuable data for many applications. Actually, visual sensor networks may need to be highly dynamic, reflecting the changing of parameters in smart cities. In this context, characteristics of visual sensors and conditions of the monitored environment, as well as the status of other concurrent monitoring systems, may affect how visual sensors collect, encode and transmit information. This paper proposes a fuzzy-based approach to dynamically configure the way visual sensors will operate concerning sensing, coding and transmission patterns, exploiting different types of reference parameters. This innovative approach can be considered as the basis for multi-systems smart city applications based on visual monitoring, potentially bringing significant results for this research field.", "title": "" }, { "docid": "4f846635e4f23b7630d0c853559f71dc", "text": "Parkinson's disease, known also as striatal dopamine deficiency syndrome, is a degenerative disorder of the central nervous system characterized by akinesia, muscular rigidity, tremor at rest, and postural abnormalities. In early stages of parkinsonism, there appears to be a compensatory increase in the number of dopamine receptors to accommodate the initial loss of dopamine neurons. As the disease progresses, the number of dopamine receptors decreases, apparently due to the concomitant degeneration of dopamine target sites on striatal neurons. The loss of dopaminergic neurons in Parkinson's disease results in enhanced metabolism of dopamine, augmenting the formation of H2O2, thus leading to generation of highly neurotoxic hydroxyl radicals (OH.). The generation of free radicals can also be produced by 6-hydroxydopamine or MPTP which destroys striatal dopaminergic neurons causing parkinsonism in experimental animals as well as human beings. Studies of the substantia nigra after death in Parkinson's disease have suggested the presence of oxidative stress and depletion of reduced glutathione; a high level of total iron with reduced level of ferritin; and deficiency of mitochondrial complex I. New approaches designed to attenuate the effects of oxidative stress and to provide neuroprotection of striatal dopaminergic neurons in Parkinson's disease include blocking dopamine transporter by mazindol, blocking NMDA receptors by dizocilpine maleate, enhancing the survival of neurons by giving brain-derived neurotrophic factors, providing antioxidants such as vitamin E, or inhibiting monoamine oxidase B (MAO-B) by selegiline. Among all of these experimental therapeutic refinements, the use of selegiline has been most successful in that it has been shown that selegiline may have a neurotrophic factor-like action rescuing striatal neurons and prolonging the survival of patients with Parkinson's disease.", "title": "" }, { "docid": "e6107ac6d0450bb1ce4dab713e6dcffa", "text": "Enterprises collect a large amount of personal data about their customers. 
Even though enterprises promise privacy to their customers using privacy statements or P3P, there is no methodology to enforce these promises throughout and across multiple enterprises. This article describes the Platform for Enterprise Privacy Practices (E-P3P), which defines technology for privacy-enabled management and exchange of customer data. Its comprehensive privacy-specific access control language expresses restrictions on the access to personal data, possibly shared between multiple enterprises. E-P3P separates the enterprise-specific deployment policy from the privacy policy that covers the complete life cycle of collected data. E-P3P introduces a viable separation of duty between the three “administrators” of a privacy system: The privacy officer designs and deploys privacy policies, the security officer designs access control policies, and the customers can give consent while selecting opt-in and opt-out choices. To appear in2nd Workshop on Privacy Enhancing Technologies , Lecture Notes in Computer Science. Springer Verlag, 2002. Copyright c © Springer", "title": "" }, { "docid": "cf998ec01aefef7cd80d2fdd25e872e1", "text": "Shunting inhibition, a conductance increase with a reversal potential close to the resting potential of the cell, has been shown to have a divisive effect on subthreshold excitatory postsynaptic potential amplitudes. It has therefore been assumed to have the same divisive effect on firing rates. We show that shunting inhibition actually has a subtractive effecton the firing rate in most circumstances. Averaged over several interspike intervals, the spiking mechanism effectively clamps the somatic membrane potential to a value significantly above the resting potential, so that the current through the shunting conductance is approximately independent of the firing rate. This leads to a subtractive rather than a divisive effect. In addition, at distal synapses, shunting inhibition will also have an approximately subtractive effect if the excitatory conductance is not small compared to the inhibitory conductance. Therefore regulating a cell's passive membrane conductancefor instance, via massive feedbackis not an adequate mechanism for normalizing or scaling its output.", "title": "" }, { "docid": "d7e2654767d1178871f3f787f7616a94", "text": "We propose a nonparametric, probabilistic model for the automatic segmentation of medical images, given a training set of images and corresponding label maps. The resulting inference algorithms rely on pairwise registrations between the test image and individual training images. The training labels are then transferred to the test image and fused to compute the final segmentation of the test subject. Such label fusion methods have been shown to yield accurate segmentation, since the use of multiple registrations captures greater inter-subject anatomical variability and improves robustness against occasional registration failures. To the best of our knowledge, this manuscript presents the first comprehensive probabilistic framework that rigorously motivates label fusion as a segmentation approach. The proposed framework allows us to compare different label fusion algorithms theoretically and practically. In particular, recent label fusion or multiatlas segmentation algorithms are interpreted as special cases of our framework. We conduct two sets of experiments to validate the proposed methods. 
In the first set of experiments, we use 39 brain MRI scans - with manually segmented white matter, cerebral cortex, ventricles and subcortical structures - to compare different label fusion algorithms and the widely-used FreeSurfer whole-brain segmentation tool. Our results indicate that the proposed framework yields more accurate segmentation than FreeSurfer and previous label fusion algorithms. In a second experiment, we use brain MRI scans of 282 subjects to demonstrate that the proposed segmentation tool is sufficiently sensitive to robustly detect hippocampal volume changes in a study of aging and Alzheimer's Disease.", "title": "" }, { "docid": "be3640467394a0e0b5a5035749b442e9", "text": "Data pre-processing is an important and critical step in the data mining process and it has a huge impact on the success of a data mining project.[1](3) Data pre-processing is a step of the Knowledge discovery in databases (KDD) process that reduces the complexity of the data and offers better conditions to subsequent analysis. Through this the nature of the data is better understood and the data analysis is performed more accurately and efficiently. Data pre-processing is challenging as it involves extensive manual effort and time in developing the data operation scripts. There are a number of different tools and methods used for pre-processing, including: sampling, which selects a representative subset from a large population of data; transformation, which manipulates raw data to produce a single input; denoising, which removes noise from data; normalization, which organizes data for more efficient access; and feature extraction, which pulls out specified data that is significant in some particular context. Pre-processing technique is also useful for association rules algo. LikeAprior, Partitioned, Princer-search algo. and many more algos.", "title": "" }, { "docid": "08aa9d795464d444095bbb73c067c2a9", "text": "Next-generation sequencing (NGS) is a rapidly evolving set of technologies that can be used to determine the sequence of an individual's genome​ 1​ by calling genetic variants present in an individual using billions of short, errorful sequence reads​ 2​ . Despite more than a decade of effort and thousands of dedicated researchers, the hand-crafted and parameterized statistical models used for variant calling still produce thousands of errors and missed variants in each genome​ 3,4​ . Here we show that a deep convolutional neural network​ 5​ can call genetic variation in aligned next-generation sequencing read data by learning statistical relationships (likelihoods) between images of read pileups around putative variant sites and ground-truth genotype calls. This approach, called DeepVariant, outperforms existing tools, even winning the \"highest performance\" award for SNPs in a FDA-administered variant calling challenge. The learned model generalizes across genome builds and even to other species, allowing non-human sequencing projects to benefit from the wealth of human ground truth data. We further show that, unlike existing tools which perform well on only a specific technology, DeepVariant can learn to call variants in a variety of sequencing technologies and experimental designs, from deep whole genomes from 10X Genomics to Ion Ampliseq exomes. DeepVariant represents a significant step from expert-driven statistical modeling towards more automatic deep learning approaches for developing software to interpret biological instrumentation data. 
Main Text: Calling genetic variants from NGS data has proven challenging because NGS reads are not only errorful (with rates from ~0.1-10%) but result from a complex error process that depends on properties of the instrument, preceding data processing tools, and the genome sequence itself. State-of-the-art variant callers use a variety of statistical techniques to model these error processes and thereby accurately identify differences between the reads and the reference genome caused by real genetic variants and those arising from errors in the reads. For example, the widely-used GATK uses logistic regression to model base errors, hidden Markov models to compute read likelihoods, and naive Bayes classification to identify variants, which are then filtered to remove likely false positives using a Gaussian mixture model with hand-crafted features capturing common error modes [6]. These techniques allow the GATK to achieve high but still imperfect accuracy on the Illumina sequencing platform. Generalizing these models to other sequencing technologies has proven difficult due to the need for manual retuning or extending these statistical models (see e.g. Ion Torrent [8,9]), a major problem in an area with such rapid technological progress [1]. Here we describe a variant caller for NGS data that replaces the assortment of statistical modeling components with a single, deep learning model. Deep learning is a revolutionary machine learning technique applicable to a variety of domains, including image classification [10], translation, gaming, and the life sciences [14-17]. This toolchain, which we call DeepVariant (Figure 1), begins by finding candidate SNPs and indels in reads aligned to the reference genome with high-sensitivity but low specificity. The deep learning model, using the Inception-v2 architecture, emits probabilities for each of the three diploid genotypes at a locus using a pileup image of the reference and read data around each candidate variant (Figure 1). The model is trained using labeled true genotypes, after which it is frozen and can then be applied to novel sites or samples. Throughout the following experiments, DeepVariant was trained on an independent set of samples or variants to those being evaluated. This deep learning model has no specialized knowledge about genomics or next-generation sequencing, and yet can learn to call genetic variants more accurately than state-of-the-art methods. When applied to the Platinum Genomes Project NA12878 data [18], DeepVariant produces a callset with better performance than the GATK when evaluated on the held-out chromosomes of the Genome in a Bottle ground truth set (Figure 2A). For further validation, we sequenced 35 replicates of NA12878 using a standard whole-genome sequencing protocol and called variants on 27 replicates using a GATK best-practices pipeline and DeepVariant using a model trained on the other eight replicates (see methods). Not only does DeepVariant produce more accurate results but it does so with greater consistency across a variety of quality metrics (Figure 2B). 
To further confirm the performance of DeepVariant, we submitted variant calls for a blinded sample, NA24385, to the Food and Drug Administration-sponsored variant calling Truth Challenge in May 2016 and won the \"highest performance\" award for SNPs by an independent team using a different evaluation methodology. Like many variant calling algorithms, GATK relies on a model that assumes read errors are independent. Though long-recognized as an invalid assumption [2], the true likelihood function that models multiple reads simultaneously is unknown [6,19,20]. Because DeepVariant presents an image of all of the reads relevant for a putative variant together, the convolutional neural network (CNN) is able to account for the complex dependence among the reads by virtue of being a universal approximator [21]. This manifests itself as a tight concordance between the estimated probability of error from the likelihood function and the observed error rate, as seen in Figure 2C where DeepVariant's CNN is well calibrated, strikingly more so than the GATK. That the CNN has approximated this true, but unknown, inter-dependent likelihood function is the essential technical advance enabling us to replace the hand-crafted statistical models in other approaches with a single deep learning model and still achieve such high performance in variant calling. We further explored how well DeepVariant's CNN generalizes beyond its training data. First, a model trained with read data aligned to human genome build GRCh37 and applied to reads aligned to GRCh38 has similar performance (overall F1 = 99.45%) to one trained on GRCh38 and then applied to GRCh38 (overall F1 = 99.53%), thereby demonstrating that a model learned from one version of the human genome reference can be applied to other versions with effectively no loss in accuracy (Table S1). Second, models learned using human reads and ground truth data achieve high accuracy when applied to a mouse dataset [22] (F1 = 98.29%), out-performing training on the mouse data itself (F1 = 97.84%, Table S4). This last experiment is especially demanding as not only do the species differ but nearly all of the sequencing parameters do as well: 50x 2x148bp from an Illumina TruSeq prep sequenced on a HiSeq 2500 for the human sample and 27x 2x100bp reads from a custom sequencing preparation run on an Illumina Genome Analyzer II for mouse. Thus, DeepVariant is robust to changes in sequencing depth, preparation protocol, instrument type, genome build, and even species. The practical benefits of this capability are substantial, as DeepVariant enables resequencing projects in non-human species, which often have no ground truth data to guide their efforts, to leverage the large and growing ground truth data in humans. To further assess its capabilities, we trained DeepVariant to call variants in eight datasets from Genome in a Bottle [24] that span a variety of sequencing instruments and protocols, including whole genome and exome sequencing technologies, with read lengths from fifty to many thousands of basepairs (Table 1 and S6). 
We used the already processed BAM files to introduce additional variability as these BAMs differ in their alignment and cleaning steps. The results of this experiment all exhibit a characteristic pattern: the candidate variants have the highest sensitivity but a low PPV (mean 57.6%), which varies significantly by dataset. After retraining, all of the callsets achieve high PPVs (mean of 99.3%) while largely preserving the candidate callset sensitivity (mean loss of 2.3%). The high PPVs and low loss of sensitivity indicate that DeepVariant can learn a model that captures the technology-specific error processes in sufficient detail to separate real variation from false positives with high fidelity for many different sequencing technologies. As we have already shown that DeepVariant performs well on Illumina WGS data, we analyze here its behavior on two non-Illumina WGS datasets and two exome datasets from Illumina and Ion Torrent. The SOLID and Pacific Biosciences (PacBio) WGS datasets have high error rates in the candidate callsets. SOLID (13.9% PPV for SNPs, 96.2% for indels, and 14.3% overall) has many SNP artifacts from mapping the short, color-space reads. The PacBio dataset is the opposite, with many false indels (79.8% PPV for SNPs, 1.4% for indels, and 22.1% overall) due to this technology's high indel error rate. Training DeepVariant to call variants in an exome is likely to be particularly challenging. Exomes have far fewer variants (~20-30k) than found in a whole-genome (~4-5M) [26].", "title": "" }, { "docid": "1490331d46b8c19fce0a94e072bff502", "text": "We explore the reliability and validity of a self-report measure of procrastination and conscientiousness designed for use with third- to fifth-grade students. The responses of 120 students are compared with teacher and parent ratings of the student. Confirmatory and exploratory factor analyses were also used to examine the structure of the scale. Procrastination and conscientiousness are highly correlated (inversely); evidence suggests that procrastination and conscientiousness are aspects of the same construct. Procrastination and conscientiousness are correlated with the Physiological Anxiety subscale of the Revised Children's Manifest Anxiety Scale, and with the Task (Mastery) and Avoidance (Task Aversiveness) subscales of Skaalvik's (1997) Goal Orientation Scales. Both theoretical implications and implications for interventions are discussed.", "title": "" }, { "docid": "797ab17a7621f4eaa870a8eb24f8b94d", "text": "A single-photon avalanche diode (SPAD) with enhanced near-infrared (NIR) sensitivity has been developed, based on 0.18 μm CMOS technology, for use in future automotive light detection and ranging (LIDAR) systems. The newly proposed SPAD operating in Geiger mode achieves a high NIR photon detection efficiency (PDE) without compromising the fill factor (FF) and a low breakdown voltage of approximately 20.5 V. These properties are obtained by employing two custom layers that are designed to provide a full-depletion layer with a high electric field profile. Experimental evaluation of the proposed SPAD reveals an FF of 33.1% and a PDE of 19.4% at 870 nm, which is the laser wavelength of our LIDAR system. The dark count rate (DCR) measurements show that DCR levels of the proposed SPAD have a small effect on the ranging performance, even if the worst DCR (12.7 kcps) SPAD among the test samples is used. 
Furthermore, with an eye toward vehicle installations, the DCR is measured over a wide temperature range of 25-132 °C. The ranging experiment demonstrates that target distances are successfully measured in the distance range of 50-180 cm.", "title": "" }, { "docid": "a306ea0a425a00819b81ea7f52544cfb", "text": "Early research in electronic markets seemed to suggest that E-Commerce transactions would result in decreased costs for buyers and sellers alike, and would therefore ultimately lead to the elimination of intermediaries from electronic value chains. However, a careful analysis of the structure and functions of electronic marketplaces reveals a different picture. Intermediaries provide many value-adding functions that cannot be easily substituted or ‘internalised’ through direct supplier-buyer dealings, and hence mediating parties may continue to play a significant role in the E-Commerce world. In this paper we provide an analysis of the potential roles of intermediaries in electronic markets and we articulate a number of hypotheses for the future of intermediation in such markets. Three main scenarios are discussed: the disintermediation scenario where market dynamics will favour direct buyer-seller transactions, the reintermediation scenario where traditional intermediaries will be forced to differentiate themselves and reemerge in the electronic marketplace, and the cybermediation scenario where wholly new markets for intermediaries will be created. The analysis suggests that the likelihood of each scenario dominating a given market is primarily dependent on the exact functions that intermediaries play in each case. A detailed discussion of such functions is presented in the paper, together with an analysis of likely outcomes in the form of a contingency model for intermediation in electronic markets.", "title": "" }, { "docid": "8e521a935f4cc2008146e4153a2bc3b5", "text": "The research work on supply-chain management has primarily focused on the study of materials flow and very little work has been done on the study of upstream flow of money. In this paper we study the flow of money in a supply chain from the viewpoint of a supply chain partner who receives money from the downstream partners and makes payments to the upstream partners. The objective is to schedule all payments within the constraints of the receipt of the money. A penalty is to be paid if payments are not made within the specified time. Any unused money in a given period can be invested to earn an interest. The problem is computationally complex and non-intuitive because of its dynamic nature. The incoming and outgoing monetary flows never stop and are sometimes unpredictable. For tractability purposes we first develop an integer programming model to represent the static problem, where monetary in-flows and out-flows are known before hand. We demonstrate that even the static problem is NP-Complete. First we develop a heuristic to solve this static problem. Next, the insights derived from the static problem analysis are used to develop two heuristics to solve the various level of dynamism of the problem. The performances of all these heuristics are measured and presented. 2010 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "9547b04b76e653c8b4854ae193b4319f", "text": "© 2017 Western Digital Corporation or its affiliates. 
All rights reserved Emerging fast byte-addressable non-volatile memory (eNVM) technologies such as ReRAM and 3D Xpoint are projected to offer two orders of magnitude higher performance than flash. However, the existing solid-state drive (SSD) architecture optimizes for flash characteristics and is not adequate to exploit the full potential of eNVMs due to architectural and I/O interface (e.g., PCIe, SATA) limitations. To improve the storage performance and reduce the host main memory requirement for KVS, we propose a novel SSD architecture that extends the semantic of SSD with the KVS features and implements indexing capability inside SSD. It has in-storage processing engine that implements key-value operations such as get, put and delete to efficiently operate on KV datasets. The proposed system introduces a compute channel interface to offload key-value operations down to the SSD that significantly reduces the operating system, file system and other software overhead. This SSD achieves 4.96 Mops/sec get and 3.44 Mops/sec put operations and shows better scalability with increasing number of keyvalue pairs as compared to flash-based NVMe (flash-NVMe) and DRAMbased NVMe (DRAM-NVMe) devices. With decreasing DRAM size by 75%, its performance decreases gradually, achieving speedup of 3.23x as compared to DRAM-NVMe. This SSD significantly improves performance and reduces memory by exploiting the fine grain parallelism within a controller and keeping data movement local to effectively utilize eNVM bandwidth and eliminating the superfluous data movement between the host and the SSD. Abstract", "title": "" }, { "docid": "f480c08eea346215ccd01e21e9acfe81", "text": "In the era of big data, recommender system (RS) has become an effective information filtering tool that alleviates information overload for Web users. Collaborative filtering (CF), as one of the most successful recommendation techniques, has been widely studied by various research institutions and industries and has been applied in practice. CF makes recommendations for the current active user using lots of users’ historical rating information without analyzing the content of the information resource. However, in recent years, data sparsity and high dimensionality brought by big data have negatively affected the efficiency of the traditional CF-based recommendation approaches. In CF, the context information, such as time information and trust relationships among the friends, is introduced into RS to construct a training model to further improve the recommendation accuracy and user’s satisfaction, and therefore, a variety of hybrid CF-based recommendation algorithms have emerged. In this paper, we mainly review and summarize the traditional CF-based approaches and techniques used in RS and study some recent hybrid CF-based recommendation approaches and techniques, including the latest hybrid memory-based and model-based CF recommendation algorithms. Finally, we discuss the potential impact that may improve the RS and future direction. 
In this paper, we aim at introducing the recent hybrid CF-based recommendation techniques fusing social networks to solve data sparsity and high dimensionality and provide a novel point of view to improve the performance of RS, thereby presenting a useful resource in the state-of-the-art research result for future researchers.", "title": "" }, { "docid": "7eac260700c56178533ec687159ac244", "text": "Chat robot, a computer program that simulates human conversation, or chat, through artificial intelligence an intelligence chat bot will be used to give information or answers to any question asked by user related to bank. It is more like a virtual assistant, people feel like they are talking with real person. They speak the same language we do, can answer questions. In banks, at user care centres and enquiry desks, human is insufficient and usually takes long time to process the single request which results in wastage of time and also reduce quality of user service. The primary goal of this chat bot is user can interact with mentioning their queries in plain English and the chat bot can resolve their queries with appropriate response in return The proposed system would help duplicate the user utility experience with one difference that employee and yet get the queries attended and resolved. It can extend daily life, by providing solutions to help desks, telephone answering systems, user care centers. This paper defines the dataset that we have prepared from FAQs of bank websites, architecture and methodology used for developing such chatbot. Also this paper discusses the comparison of seven ML classification algorithm used for getting the class of input to chat bot.", "title": "" }, { "docid": "21cbea6b83aa89b61d8dab91abcf1b99", "text": "We present Spline-based Convolutional Neural Networks (SplineCNNs), a variant of deep neural networks for irregular structured and geometric input, e.g., graphs or meshes. Our main contribution is a novel convolution operator based on B-splines, that makes the computation time independent from the kernel size due to the local support property of the B-spline basis functions. As a result, we obtain a generalization of the traditional CNN convolution operator by using continuous kernel functions parametrized by a fixed number of trainable weights. In contrast to related approaches that filter in the spectral domain, the proposed method aggregates features purely in the spatial domain. In addition, SplineCNN allows entire end-to-end training of deep architectures, using only the geometric structure as input, instead of handcrafted feature descriptors. For validation, we apply our method on tasks from the fields of image graph classification, shape correspondence and graph node classification, and show that it outperforms or pars state-of-the-art approaches while being significantly faster and having favorable properties like domain-independence. Our source code is available on GitHub1.", "title": "" }, { "docid": "241609f10f9f5afbf6a939833b642a69", "text": "Heterogeneous or co-processor architectures are becoming an important component of high productivity computing systems (HPCS). In this work the performance of a GPU based HPCS is compared with the performance of a commercially available FPGA based HPC. Contrary to previous approaches that focussed on specific examples, a broader analysis is performed by considering processes at an architectural level. 
A set of benchmarks is employed that use different process architectures in order to exploit the benefits of each technology. These include the asynchronous pipelines common to \"map\" tasks, a partially synchronous tree common to \"reduce\" tasks and a fully synchronous, fully connected mesh. We show that the GPU is more productive than the FPGA architecture for most of the benchmarks and conclude that FPGA-based HPCS is being marginalised by GPUs.", "title": "" } ]
scidocsrr
002fba58f96c79a98229f37567fa4363
Pretty as a Princess: Longitudinal Effects of Engagement With Disney Princesses on Gender Stereotypes, Body Esteem, and Prosocial Behavior in Children.
[ { "docid": "b4dcc5c36c86f9b1fef32839d3a1484d", "text": "The popular Disney Princess line includes nine films (e.g., Snow White, Beauty and the Beast) and over 25,000 marketable products. Gender role depictions of the prince and princess characters were examined with a focus on their behavioral characteristics and climactic outcomes in the films. Results suggest that the prince and princess characters differ in their portrayal of traditionally masculine and feminine characteristics, these gender role portrayals are complex, and trends towards egalitarian gender roles are not linear over time. Content coding analyses demonstrate that all of the movies portray some stereotypical representations of gender, including the most recent film, The Princess and the Frog. Although both the male and female roles have changed over time in the Disney Princess line, the male characters exhibit more androgyny throughout and less change in their gender role portrayals.", "title": "" }, { "docid": "3d7fabdd5f56c683de20640abccafc44", "text": "The capacity to exercise control over the nature and quality of one's life is the essence of humanness. Human agency is characterized by a number of core features that operate through phenomenal and functional consciousness. These include the temporal extension of agency through intentionality and forethought, self-regulation by self-reactive influence, and self-reflectiveness about one's capabilities, quality of functioning, and the meaning and purpose of one's life pursuits. Personal agency operates within a broad network of sociostructural influences. In these agentic transactions, people are producers as well as products of social systems. Social cognitive theory distinguishes among three modes of agency: direct personal agency, proxy agency that relies on others to act on one's behest to secure desired outcomes, and collective agency exercised through socially coordinative and interdependent effort. Growing transnational embeddedness and interdependence are placing a premium on collective efficacy to exercise control over personal destinies and national life.", "title": "" } ]
[ { "docid": "761be34401cc6ef1d8eea56465effca9", "text": "Résumé: Dans cet article, nous proposons une nouvelle approche pour le résumé automatique de textes utilisant un algorithme d'apprentissage numérique spécifique à la tâche d'ordonnancement. L'objectif est d'extraire les phrases d'un document qui sont les plus représentatives de son contenu. Pour se faire, chaque phrase d'un document est représentée par un vecteur de scores de pertinence, où chaque score est un score de similarité entre une requête particulière et la phrase considérée. L'algorithme d'ordonnancement effectue alors une combinaison linéaire de ces scores, avec pour but d'affecter aux phrases pertinentes d'un document des scores supérieurs à ceux des phrases non pertinentes du même document. Les algorithmes d'ordonnancement ont montré leur efficacité en particulier dans le domaine de la méta-recherche, et leur utilisation pour le résumé est motivée par une analogie peut être faite entre la méta-recherche et le résumé automatique qui consiste, dans notre cas, à considérer les similarités des phrases avec les différentes requêtes comme étant des sorties de différents moteurs de recherche. Nous montrons empiriquement que l'algorithme d'ordonnancement a de meilleures performances qu'une approche utilisant un algorithme de classification sur deux corpus distincts.", "title": "" }, { "docid": "c7daf28d656a9e51e5a738e70beeadcf", "text": "We present a taxonomy for Information Visualization (IV) that characterizes it in terms of data, task, skill and context, as well as a number of dimensions that relate to the input and output hardware, the software tools, as well as user interactions and human perceptual abil ities. We il lustrate the utilit y of the taxonomy by focusing particularly on the information retrieval task and the importance of taking into account human perceptual capabiliti es and limitations. Although the relevance of Psychology to IV is often recognised, we have seen relatively littl e translation of psychological results and theory to practical IV applications. This paper targets the better development of information visualizations through the introduction of a framework delineating the major factors in interface development. We believe that higher quality visualizations will result from structured developments that take into account these considerations and that the framework will also serve to assist the development of effective evaluation and assessment processes.", "title": "" }, { "docid": "a76826da7f077cf41aaa7c8eca9be3fe", "text": "In this paper we present an open-source design for the development of low-complexity, anthropomorphic, underactuated robot hands with a selectively lockable differential mechanism. The differential mechanism used is a variation of the whiffletree (or seesaw) mechanism, which introduces a set of locking buttons that can block the motion of each finger. The proposed design is unique since with a single motor and the proposed differential mechanism the user is able to control each finger independently and switch between different grasping postures in an intuitive manner. Anthropomorphism of robot structure and motion is achieved by employing in the design process an index of anthropomorphism. The proposed robot hands can be easily fabricated using low-cost, off-the-shelf materials and rapid prototyping techniques. 
The efficacy of the proposed design is validated through different experimental paradigms involving grasping of everyday life objects and execution of daily life activities. The proposed hands can be used as affordable prostheses, helping amputees regain their lost dexterity.", "title": "" }, { "docid": "5a2649736269f7be88886c2a45243492", "text": "Modern computer displays tend to be in fixed size, rigid, and rectilinear rendering them insensitive to the visual area demands of an application or the desires of the user. Foldable displays offer the ability to reshape and resize the interactive surface at our convenience and even permit us to carry a very large display surface in a small volume. In this paper, we implement four interactive foldable display designs using image projection with low-cost tracking and explore display behaviors using orientation sensitivity.", "title": "" }, { "docid": "7f0dd680faf446e74aff177dc97b5268", "text": "Vehicle Ad-Hoc Networks (VANET) enable all components in intelligent transportation systems to be connected so as to improve transport safety, relieve traffic congestion, reduce air pollution, and enhance driving comfort. The vision of all vehicles connected poses a significant challenge to the collection, storage, and analysis of big traffic-related data. Vehicular cloud computing, which incorporates cloud computing into vehicular networks, emerges as a promising solution. Different from conventional cloud computing platform, the vehicle mobility poses new challenges to the allocation and management of cloud resources in roadside cloudlet. In this paper, we study a virtual machine (VM) migration problem in roadside cloudletbased vehicular network and unfold that (1) whether a VM shall be migrated or not along with the vehicle moving and (2) where a VM shall be migrated, in order to minimize the overall network cost for both VM migration and normal data traffic. We first treat the problem as a static off-line VM placement problem and formulate it into a mixed-integer quadratic programming problem. A heuristic algorithm with polynomial time is then proposed to tackle the complexity of solving mixed-integer quadratic programming. Extensive simulation results show that it produces near-optimal performance and outperforms other related algorithms significantly. Copyright © 2015 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "367c6ce6d83baff7de78e9d128123ce8", "text": "Supporting smooth movement of mobile clients is important when offloading services on an edge computing platform. Interruption-free client mobility demands seamless migration of the offloading service to nearby edge servers. However, fast migration of offloading services across edge servers in a WAN environment poses significant challenges to the handoff service design. In this paper, we present a novel service handoff system which seamlessly migrates offloading services to the nearest edge server, while the mobile client is moving. Service handoff is achieved via container migration. We identify an important performance problem during Docker container migration. Based on our systematic study of container layer management and image stacking, we propose a migration method which leverages the layered storage system to reduce file system synchronization overhead, without dependence on the distributed file system. We implement a prototype system and conduct experiments using real world product applications. 
Evaluation results reveal that compared to state-of-the-art service handoff systems designed for edge computing platforms, our system reduces the total duration of service handoff time by 80%(56%) with network bandwidth 5Mbps(20Mbps).", "title": "" }, { "docid": "20af5209de71897158820f935018d877", "text": "This paper presents a new bag-of-entities representation for document ranking, with the help of modern knowledge bases and automatic entity linking. Our system represents query and documents by bag-of-entities vectors constructed from their entity annotations, and ranks documents by their matches with the query in the entity space. Our experiments with Freebase on TREC Web Track datasets demonstrate that current entity linking systems can provide sufficient coverage of the general domain search task, and that bag-of-entities representations outperform bag-of-words by as much as 18% in standard document ranking tasks.", "title": "" }, { "docid": "ee9bccbfecd58151569449911c624221", "text": "Hand motion capture is a popular research field, recently gaining more attention due to the ubiquity of RGB-D sensors. However, even most recent approaches focus on the case of a single isolated hand. In this work, we focus on hands that interact with other hands or objects and present a framework that successfully captures motion in such interaction scenarios for both rigid and articulated objects. Our framework combines a generative model with discriminatively trained salient points to achieve a low tracking error and with collision detection and physics simulation to achieve physically plausible estimates even in case of occlusions and missing visual data. Since all components are unified in a single objective function which is almost everywhere differentiable, it can be optimized with standard optimization techniques. Our approach works for monocular RGB-D sequences as well as setups with multiple synchronized RGB cameras. For a qualitative and quantitative evaluation, we captured 29 sequences with a large variety of interactions and up to 150 degrees of freedom.", "title": "" }, { "docid": "7cfeadc550f412bb92df4f265bf99de0", "text": "AIM\nCorrective image reconstruction methods which produce reconstructed images with improved spatial resolution and decreased noise level became recently commercially available. In this work, we tested the performance of three new software packages with reconstruction schemes recommended by the manufacturers using physical phantoms simulating realistic clinical settings.\n\n\nMETHODS\nA specially designed resolution phantom containing three (99m)Tc lines sources and the NEMA NU-2 image quality phantom were acquired on three different SPECT/CT systems (General Electrics Infinia, Philips BrightView and Siemens Symbia T6). Measurement of both phantoms was done with the trunk filled with a (99m)Tc-water solution. The projection data were reconstructed using the GE's Evolution for Bone(®), Philips Astonish(®) and Siemens Flash3D(®) software. The reconstruction parameters employed (number of iterations and subsets, the choice of post-filtering) followed theses recommendations of each vendor. These results were compared with reference reconstructions using the ordered subset expectation maximization (OSEM) reconstruction scheme.\n\n\nRESULTS\nThe best results (smallest value for resolution, highest percent contrast values) for all three packages were found for the scatter corrected data without applying any post-filtering. 
The advanced reconstruction methods improve the full width at half maximum (FWHM) of the line sources from 11.4 to 9.5mm (GE), from 9.1 to 6.4mm (Philips), and from 12.1 to 8.9 mm (Siemens) if no additional post filter was applied. The total image quality control index measured for a concentration ratio of 8:1 improves for GE from 147 to 189, from 179. to 325 for Philips and from 217 to 320 for Siemens using the reference method for comparison. The same trends can be observed for the 4:1 concentration ratio. The use of a post-filter reduces the background variability approximately by a factor of two, but deteriorates significantly the spatial resolution.\n\n\nCONCLUSIONS\nUsing advanced reconstruction algorithms the largest improvement in image resolution and contrast is found for the scatter corrected slices without applying post-filtering. The user has to choose whether noise reduction by post-filtering or improved image resolution fits better a particular imaging procedure.", "title": "" }, { "docid": "5545d32ccfd1459c8c7e918c8b324eb5", "text": "Sequence generative adversarial networks SeqGAN have been used to improve conditional sequence generation tasks, for example, chit-chat dialogue generation. To stabilize the training of SeqGAN, Monte Carlo tree search MCTS or reward at every generation step REGS is used to evaluate the goodness of a generated subsequence. MCTS is computationally intensive, but the performance of REGS is worse than MCTS. In this paper, we propose stepwise GAN StepGAN, in which the discriminator is modified to automatically assign scores quantifying the goodness of each subsequence at every generation step. StepGAN has significantly less computational costs than MCTS. We demonstrate that StepGAN outperforms previous GAN-based methods on both synthetic experiment and chit-chat dialogue generation.", "title": "" }, { "docid": "94640a4ad3b32a307658ca2028dbd589", "text": "In this paper, we investigate the diversity aspect of paraphrase generation. Prior deep learning models employ either decoding methods or add random input noise for varying outputs. We propose a simple method Diverse Paraphrase Generation (D-PAGE), which extends neural machine translation (NMT) models to support the generation of diverse paraphrases with implicit rewriting patterns. Our experimental results on two real-world benchmark datasets demonstrate that our model generates at least one order of magnitude more diverse outputs than the baselines in terms of a new evaluation metric Jeffrey’s Divergence. We have also conducted extensive experiments to understand various properties of our model with a focus on diversity.", "title": "" }, { "docid": "1608c56c79af07858527473b2b0262de", "text": "The field weakening control strategy of interior permanent magnet synchronous motor for electric vehicles was studied in the paper. A field weakening control method based on gradient descent of voltage limit according to the ellipse and modified current setting were proposed. The field weakening region was determined by the angle between the constant torque direction and the voltage limited ellipse decreasing direction. The direction of voltage limited ellipse decreasing was calculated by using the gradient descent method. The current reference was modified by the field weakening direction and the magnitude of the voltage error according to the field weakening region. 
A simulation model was also built in MATLAB/Simulink, and the validity of the proposed strategy was verified by the simulation results.", "title": "" }, { "docid": "ec8847a65f015a52ce90bdd304103658", "text": "This study investigates the adoption of online games technologies among adolescents and their behavior in playing online games. The findings showed that half of them had ten months or less of experience in playing online games, with ten hours or less of playing per week. Nearly fifty-four percent played up to five times each week, while sixty-six percent played two hours or less. Behavioral Intention correlates significantly with the model variables, namely Perceived Enjoyment, Flow Experience, Performance Expectancy, Effort Expectancy, Social Influence, and Facilitating Conditions; Experience; and the number and duration of game sessions. Performance Expectancy and Facilitating Conditions had a positive, medium, and statistically significant direct effect on Behavioral Intention. The four other variables, Perceived Enjoyment, Flow Experience, Effort Expectancy, and Social Influence, had positive or negative, medium or small, and not statistically significant direct effects on Behavioral Intention. Additionally, Flow Experience and Social Influence showed no significant difference in mean values between males and females. The other variables differed significantly by gender, with the mean value for males significantly greater than for females, except for Age. The practical implications of this study are relevant to groups with an interest in either enhancing or decreasing the adoption of online games technologies. Those seeking to enhance the adoption of online games technologies must: preserve Performance Expectancy and Facilitating Conditions; enhance Flow Experience, Perceived Enjoyment, Effort Expectancy, and Social Influence; and engage with adolescents' online games behavior, specifically by supporting them in playing games longer and in enhancing their experience. The opposite of these proposed actions can be considered to decrease the adoption.", "title": "" }, { "docid": "04eb3cb8f83277b552d9cb80d990cce0", "text": "The growing momentum of the Internet of Things (IoT) has shown an increase in attack vectors within the security research community. We propose adapting a recent approach of frequently changing IPv6 address assignment to add an additional layer of security to the Internet of Things. We examine implementing Moving Target IPv6 Defense (MT6D) in IPv6 over Low-Powered Wireless Personal Area Networks (6LoWPAN), a protocol that is being used in wireless sensors found in home automation systems and smart meters. 6LoWPAN allows the Internet of Things to extend into the world of wireless sensor networks. We propose adapting Moving-Target IPv6 Defense for use with 6LoWPAN in order to defend against network-side attacks such as Denial-of-Service and Man-In-The-Middle while maintaining anonymity of client-server communications. This research aims at providing a moving-target defense for wireless sensor networks while maintaining power efficiency within the network.", "title": "" }, { "docid": "6ca68f39cd15b3e698d8df8c99e160a6", "text": "This paper proposes a novel isolated bidirectional flyback converter integrated with two non-dissipative LC snubbers. In the proposed topology, the main flyback transformer and the LC snubbers are cross-coupled to reduce current circulation and recycle the leakage energy.
The proposed isolated bidirectional flyback converter can step up the voltage of the battery (Vbat = 12V) to a high voltage side (VHV = 200V) for the load demand and vice versa. The main goal of this paper is to demonstrate the performance of this topology in achieving high voltage gain with lower switching losses and reduced component stresses. The circuit analysis is presented in detail for Continuous Conduction Mode (CCM). Lastly, a laboratory prototype was constructed to compare with the simulation results.", "title": "" }, { "docid": "611c8ce42410f8f678aa5cb5c0de535b", "text": "User simulators are a principal offline method for training and evaluating human-computer dialog systems. In this paper, we examine simple sequence-to-sequence neural network architectures for training end-to-end, natural language to natural language, user simulators, using only raw logs of previous interactions without any additional human labelling. We compare the neural network-based simulators with a language model (LM)-based approach for creating natural language user simulators. Using both an automatic evaluation based on LM perplexity and a human evaluation, we demonstrate that the sequence-to-sequence approaches outperform the LM-based method. We show a correlation between LM perplexity and the human evaluation on this task, and discuss the benefits of different neural network architecture variations.", "title": "" }, { "docid": "69944e5a5a23abf66be23fe6a56d53cc", "text": "A 71-76 GHz high dynamic range CMOS RF variable gain amplifier (VGA) is presented. Variable gain is achieved using two current-steering trans-conductance stages, which provide high linearity with relatively low power consumption. The circuit is fabricated in an MS/RF 90-nm CMOS technology and consumes 18-mA total current from a 2-V supply. This VGA achieves a 14-dB maximum gain, a 30-dB gain-control range, and a 4-dBm output saturation power. To the authors' knowledge, this VGA demonstrates the highest operation frequency among the reported CMOS VGAs.", "title": "" }, { "docid": "bf1b556a1617674ca7b560aa48731f76", "text": "The increasing complexity of configuring cellular networks suggests that machine learning (ML) can effectively improve 5G technologies. Deep learning has proven successful in ML tasks such as speech processing and computational vision, with a performance that scales with the amount of available data. The lack of large datasets inhibits the flourishing of deep learning applications in wireless communications. This paper presents a methodology that combines a vehicle traffic simulator with a ray-tracing simulator to generate channel realizations representing 5G scenarios with mobility of both transceivers and objects. The paper then describes a specific dataset for investigating beam-selection techniques on vehicle-to-infrastructure using millimeter waves. Experiments using deep learning in classification, regression and reinforcement learning problems illustrate the use of datasets generated with the proposed methodology.", "title": "" }, { "docid": "27f001247d02f075c9279b37acaa49b3", "text": "A Zadoff–Chu (ZC) sequence is uncorrelated with a non-zero cyclically shifted version of itself. However, this alone is insufficient to mitigate inter-code interference in LTE initial uplink synchronization. The performance of the state-of-the-art algorithms varies widely depending on the specific ZC sequences employed. We develop a systematic procedure to choose the ZC sequences that yield the optimum performance.
It turns out that the procedure for ZC code selection in LTE standard is suboptimal when the carrier frequency offset is not small.", "title": "" }, { "docid": "bd9f01cad764a03f1e6cded149b9adbd", "text": "Psycholinguistic research has shown that the influence of abstract syntactic knowledge on performance is shaped by particular sentences that have been experienced. To explore this idea, the authors applied a connectionist model of sentence production to the development and use of abstract syntax. The model makes use of (a) error-based learning to acquire and adapt sequencing mechanisms and (b) meaning-form mappings to derive syntactic representations. The model is able to account for most of what is known about structural priming in adult speakers, as well as key findings in preferential looking and elicited production studies of language acquisition. The model suggests how abstract knowledge and concrete experience are balanced in the development and use of syntax.", "title": "" } ]
scidocsrr
f2fa921143776e7508b96f6146d7ab80
SNIF: a simple nude image finder
[ { "docid": "203359248f9d54f837540bdd7f717ccb", "text": "This paper presents \\bic (Border/Interior pixel Classification), a compact and efficient CBIR approach suitable for broad image domains. It has three main components: (1) a simple and powerful image analysis algorithm that classifies image pixels as either border or interior, (2) a new logarithmic distance (dLog) for comparing histograms, and (3) a compact representation for the visual features extracted from images. Experimental results show that the BIC approach is consistently more compact, more efficient and more effective than state-of-the-art CBIR approaches based on sophisticated image analysis algorithms and complex distance functions. It was also observed that the dLog distance function has two main advantages over vectorial distances (e.g., L1): (1) it is able to increase substantially the effectiveness of (several) histogram-based CBIR approaches and, at the same time, (2) it reduces by 50% the space requirement to represent a histogram.", "title": "" }, { "docid": "84a187b1e5331c4e7eb349c8b1358f14", "text": "We describe the maximum-likelihood parameter estimation problem and how the ExpectationMaximization (EM) algorithm can be used for its solution. We first describe the abstract form of the EM algorithm as it is often given in the literature. We then develop the EM parameter estimation procedure for two applications: 1) finding the parameters of a mixture of Gaussian densities, and 2) finding the parameters of a hidden Markov model (HMM) (i.e., the Baum-Welch algorithm) for both discrete and Gaussian mixture observation models. We derive the update equations in fairly explicit detail but we do not prove any convergence properties. We try to emphasize intuition rather than mathematical rigor.", "title": "" } ]
[ { "docid": "b3c203dabe2c19764634fbc3a6717381", "text": "This work complements existing research regarding the forgiveness process by highlighting the role of commitment in motivating forgiveness. On the basis of an interdependence-theoretic analysis, the authors suggest that (a) victims' self-oriented reactions to betrayal are antithetical to forgiveness, favoring impulses such as grudge and vengeance, and (b) forgiveness rests on prorelationship motivation, one cause of which is strong commitment. A priming experiment, a cross-sectional survey study, and an interaction record study revealed evidence of associations (or causal effects) of commitment with forgiveness. The commitment-forgiveness association appeared to rest on intent to persist rather than long-term orientation or psychological attachment. In addition, the commitment-forgiveness association was mediated by cognitive interpretations of betrayal incidents; evidence for mediation by emotional reactions was inconsistent.", "title": "" }, { "docid": "a44b74738723580f4056310d6856bb74", "text": "This book covers the theory and principles of core avionic systems in civil and military aircraft, including displays, data entry and control systems, fly by wire control systems, inertial sensor and air data systems, navigation, autopilot systems an... Use the latest data mining best practices to enable timely, actionable, evidence-based decision making throughout your organization! Real-World Data Mining demystifies current best practices, showing how to use data mining to uncover hidden patterns ... Data Warehousing in the Age of the Big Data will help you and your organization make the most of unstructured data with your existing data warehouse. As Big Data continues to revolutionize how we use data, it doesn't have to create more confusion. Ex... This book explores the concepts of data mining and data warehousing, a promising and flourishing frontier in data base systems and new data base applications and is also designed to give a broad, yet ....", "title": "" }, { "docid": "05a77d687230dc28697ca1751586f660", "text": "In recent years, there has been a huge increase in the number of bots online, varying from Web crawlers for search engines, to chatbots for online customer service, spambots on social media, and content-editing bots in online collaboration communities. The online world has turned into an ecosystem of bots. However, our knowledge of how these automated agents are interacting with each other is rather poor. Bots are predictable automatons that do not have the capacity for emotions, meaning-making, creativity, and sociality and it is hence natural to expect interactions between bots to be relatively predictable and uneventful. In this article, we analyze the interactions between bots that edit articles on Wikipedia. We track the extent to which bots undid each other's edits over the period 2001-2010, model how pairs of bots interact over time, and identify different types of interaction trajectories. We find that, although Wikipedia bots are intended to support the encyclopedia, they often undo each other's edits and these sterile \"fights\" may sometimes continue for years. Unlike humans on Wikipedia, bots' interactions tend to occur over longer periods of time and to be more reciprocated. Yet, just like humans, bots in different cultural environments may behave differently. 
Our research suggests that even relatively \"dumb\" bots may give rise to complex interactions, and this carries important implications for Artificial Intelligence research. Understanding what affects bot-bot interactions is crucial for managing social media well, providing adequate cyber-security, and designing well functioning autonomous vehicles.", "title": "" }, { "docid": "d792928284e2d7d9c54621974a4e3e9b", "text": "This paper presents a new fuzzy controller for semi-active vehicle suspension systems, which has a significantly fewer number of rules in comparison to existing fuzzy controllers. The proposed fuzzy controller has only nine fuzzy rules, whose performance is equivalent to the existing fuzzy controller with 49 fuzzy rules. The proposed controller with less number of fuzzy rules will be more feasible and cost-efficient in hardware implementation. For comparison, a linear quadratic regulator controlled semi-active suspension, and a passive suspension are also implemented and simulated. Simulation results show that the ride comfort and road holding are improved by 28% and 31%, respectively, with the fuzzy controlled semi-active suspension system, in comparison to the linear quadratic regulator controlled semi-active suspension.", "title": "" }, { "docid": "a1fef597312118f53e6b1468084a9300", "text": "The design of highly emissive and stable blue emitters for organic light emitting diodes (OLEDs) is still a challenge, justifying the intense research activity of the scientific community in this field. Recently, a great deal of interest has been devoted to the elaboration of emitters exhibiting a thermally activated delayed fluorescence (TADF). By a specific molecular design consisting into a minimal overlap between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) due to a spatial separation of the electron-donating and the electron-releasing parts, luminescent materials exhibiting small S1-T1 energy splitting could be obtained, enabling to thermally upconvert the electrons from the triplet to the singlet excited states by reverse intersystem crossing (RISC). By harvesting both singlet and triplet excitons for light emission, OLEDs competing and sometimes overcoming the performance of phosphorescence-based OLEDs could be fabricated, justifying the interest for this new family of materials massively popularized by Chihaya Adachi since 2012. In this review, we proposed to focus on the recent advances in the molecular design of blue TADF emitters for OLEDs during the last few years.", "title": "" }, { "docid": "a08e1710d15b69ea23980daa722ace0d", "text": "Olympic combat sports separate athletes into weight divisions, in an attempt to reduce size, strength, range and/or leverage disparities between competitors. Official weigh-ins are conducted anywhere from 3 and up to 24 h prior to competition ensuring athletes meet weight requirements (i.e. have 'made weight'). Fighters commonly aim to compete in weight divisions lower than their day-to-day weight, achieved via chronic and acute manipulations of body mass (BM). Although these manipulations may impair health and absolute performance, their strategic use can improve competitive success. Key considerations are the acute manipulations around weigh-in, which differ in importance, magnitude and methods depending on the requirements of the individual combat sport and the weigh-in regulations. 
In particular, the time available for recovery following weigh-in/before competition will determine what degree of acute BM loss can be implemented and reversed. Increased exercise and restricted food and fluid intake are undertaken to decrease body water and gut contents reducing BM. When taken to the extreme, severe weight-making practices can be hazardous, and efforts have been made to reduce their prevalence. Indeed some have called for the abolition of these practices altogether. In lieu of adequate strategies to achieve this, and the pragmatic recognition of the likely continuation of these practices as long as regulations allow, this review summarises guidelines for athletes and coaches for manipulating BM and optimising post weigh-in recovery, to achieve better health and performance outcomes across the different Olympic combat sports.", "title": "" }, { "docid": "c157b149d334b2cc1f718d70ef85e75e", "text": "The large inter-individual variability within the normal population, the limited reproducibility due to habituation or fatigue, and the impact of instruction and the subject's motivation, all constitute a major problem in posturography. These aspects hinder reliable evaluation of the changes in balance control in the case of disease and complicate objectivation of the impact of therapy and sensory input on balance control. In this study, we examine whether measurement of balance control near individualized limits of stability and under very challenging sensory conditions might reduce inter- and intra-individual variability compared to the well-known Sensory Organization Test (SOT). To do so, subjects balance on a platform on which instability increases automatically until body orientation or body sway velocity surpasses a safety limit. The maximum tolerated platform instability is then used as a measure for balance control under 10 different sensory conditions. Ninety-seven healthy subjects and 107 patients suffering from chronic dizziness (whiplash syndrome (n = 25), Meniere's disease (n = 28), acute (n = 28) or gradual (n = 26) peripheral function loss) were tested. In both healthy subjects and patients this approach resulted in a low intra-individual variability (< 14.5(%). In healthy subjects and patients, balance control was maximally affected by closure of the eyes and by vibration of the Achilles' tendons. The other perturbation techniques applied (sway referenced vision or platform, cooling of the foot soles) were less effective. Combining perturbation techniques reduced balance control even more, but the effect was less than the linear summation of the effect induced by the techniques applied separately. The group averages of healthy subjects show that vision contributed maximum 37%, propriocepsis minimum 26%, and labyrinths maximum 44% to balance control in healthy subjects. However, a large inter-individual variability was observed. Balance control of each patient group was less than in healthy subjects in all sensory conditions. Similar to healthy subjects, patients also show a large inter-individual variability, which results in a low sensitivity of the test. With the exception of some minor differences between Whiplash and Meniere patients, balance control did not differ between the four patient groups. This points to a low specificity of the test. Balance control was not correlated with the outcome of the standard vestibular examination. 
This study strengthens our notion that the contribution of the sensory inputs to balance control differs considerably per individual and may simply be due to differences in the vestibular function related to the specific pathology, but also to differences in motor learning strategies in relation to daily life requirements. It is difficult to provide clinically relevant normative data. We conclude that, like the SOT, the current test is merely a functional test of balance with limited diagnostic value.", "title": "" }, { "docid": "dbd06c81892bc0535e2648ee21cb00b4", "text": "This paper examines the causes of conflict in Burundi and discusses strategies for building peace. The analysis of the complex relationships between distribution and group dynamics reveals that these relationships are reciprocal, implying that distribution and group dynamics are endogenous. The nature of endogenously generated group dynamics determines the type of preferences (altruistic or exclusionist), which in turn determines the type of allocative institutions and policies that prevail in the political and economic system. While unequal distribution of resources may be socially inefficient, it nonetheless can be rational from the perspective of the ruling elite, especially because inequality perpetuates dominance. However, as the unequal distribution of resources generates conflict, maintaining a system based on inequality is difficult because it requires ever increasing investments in repression. It is therefore clear that if the new Burundian leadership is serious about building peace, it must engineer institutions that uproot the legacy of discrimination and promote equal opportunity for social mobility for all members of ethnic groups and regions.", "title": "" }, { "docid": "7c09cb7f935e2fb20a4d2e56a5471e61", "text": "This paper proposes and evaluates an approach to the parallelization, deployment and management of bioinformatics applications that integrates several emerging technologies for distributed computing. The proposed approach uses the MapReduce paradigm to parallelize tools and manage their execution, machine virtualization to encapsulate their execution environments and commonly used data sets into flexibly deployable virtual machines, and network virtualization to connect resources behind firewalls/NATs while preserving the necessary performance and the communication environment. An implementation of this approach is described and used to demonstrate and evaluate the proposed approach. The implementation integrates Hadoop, Virtual Workspaces, and ViNe as the MapReduce, virtual machine and virtual network technologies, respectively, to deploy the commonly used bioinformatics tool NCBI BLAST on a WAN-based test bed consisting of clusters at two distinct locations, the University of Florida and the University of Chicago. This WAN-based implementation, called CloudBLAST, was evaluated against both non-virtualized and LAN-based implementations in order to assess the overheads of machine and network virtualization, which were shown to be insignificant. To compare the proposed approach against an MPI-based solution, CloudBLAST performance was experimentally contrasted against the publicly available mpiBLAST on the same WAN-based test bed. Both versions demonstrated performance gains as the number of available processors increased, with CloudBLAST delivering speedups of 57 against 52.4 of MPI version, when 64 processors on 2 sites were used. 
The results encourage the use of the proposed approach for the execution of large-scale bioinformatics applications on emerging distributed environments that provide access to computing resources as a service.", "title": "" }, { "docid": "b266069e91c24120b1732c5576087a90", "text": "Reactions of organic molecules on Montmorillonite clay mineral have been investigated from various aspects.
These include catalytic reactions for organic synthesis, chemical evolution, the mechanism of humus formation, and environmental problems. Catalysis by clay minerals has attracted much interest recently, and many reports, including catalysis by synthetic or modified clays, have been published. In this review, we will limit the review to organic reactions using Montmorillonite clay as catalyst.", "title": "" }, { "docid": "9651fa86b37b6de23956e76459e127fc", "text": "This corrects the article DOI: 10.1038/nature12346", "title": "" }, { "docid": "05ab4fa15696ee8b47e017ebbbc83f2c", "text": "Vertically aligned rutile TiO2 nanowire arrays (NWAs) with lengths of ∼44 μm have been successfully synthesized on transparent, conductive fluorine-doped tin oxide (FTO) glass by a facile one-step solvothermal method. The length and wire-to-wire distance of NWAs can be controlled by adjusting the ethanol content in the reaction solution. By employing optimized rutile TiO2 NWAs for dye-sensitized solar cells (DSCs), a remarkable power conversion efficiency (PCE) of 8.9% is achieved. Moreover, in combination with a light-scattering layer, the performance of a rutile TiO2 NWAs based DSC can be further enhanced, reaching an impressive PCE of 9.6%, which is the highest efficiency for rutile TiO2 NWA based DSCs so far.", "title": "" }, { "docid": "a9a3d46bd6f5df951957ddc57d3d390d", "text": "In this paper, we propose a low-power level shifter (LS) capable of converting an extremely low input voltage into a high output voltage. The proposed LS consists of a pre-amplifier with a logic error correction circuit and an output latch stage. The pre-amplifier generates complementary amplified signals, and the latch stage converts them into full-swing output signals. Simulated results demonstrated that the proposed LS in a 0.18-μm CMOS process can convert a 0.19-V input into 1.8-V output correctly. The energy and the delay time of the proposed LS were 0.24 pJ and 21.4 ns when the low supply voltage, high supply voltage, and the input pulse frequency were 0.4 V, 1.8 V, and 100 kHz, respectively.", "title": "" }, { "docid": "4318041c3cf82ce72da5983f20c6d6c4", "text": "In line with cloud computing's emergence as the dominant enterprise computing paradigm, our conceptualization of the cloud computing reference architecture and service construction has also evolved. For example, to address the need for cost reduction and rapid provisioning, virtualization has moved beyond hardware to containers. More recently, serverless computing or Function-as-a-Service has been presented as a means to introduce further cost-efficiencies, reduce configuration and management overheads, and rapidly increase an application's ability to speed up, scale up and scale down in the cloud. The potential of this new computation model is reflected in the introduction of serverless computing platforms by the main hyperscale cloud service providers. This paper provides an overview and multi-level feature analysis of seven enterprise serverless computing platforms. It reviews extant research on these platforms and identifies the emergence of AWS Lambda as a de facto base platform for research on enterprise serverless cloud computing. The paper concludes with a summary of avenues for further research.", "title": "" }, { "docid": "172567417be706a47c94d35d90c24400", "text": "This work presents a novel semi-supervised learning approach for data-driven modeling of asset failures when health status is only partially known in historical data.
We combine a generative model parameterized by deep neural networks with non-linear embedding technique. It allows us to build prognostic models with the limited amount of health status information for the precise prediction of future asset reliability. The proposed method is evaluated on a publicly available dataset for remaining useful life (RUL) estimation, which shows significant improvement even when a fraction of the data with known health status is as sparse as 1% of the total. Our study suggests that the non-linear embedding based on a deep generative model can efficiently regularize a complex model with deep architectures while achieving high prediction accuracy that is far less sensitive to the availability of health status information.", "title": "" }, { "docid": "cb29a1fc5a8b70b755e934c9b3512a36", "text": "The problem of pedestrian detection in image and video frames has been extensively investigated in the past decade. However, the low performance in complex scenes shows that it remains an open problem. In this paper, we propose to cascade simple Aggregated Channel Features (ACF) and rich Deep Convolutional Neural Network (DCNN) features for efficient and effective pedestrian detection in complex scenes. The ACF based detector is used to generate candidate pedestrian windows and the rich DCNN features are used for fine classification. Experiments show that the proposed approach achieved leading performance in the INRIA dataset and comparable performance to the state-of-the-art in the Caltech and ETH datasets.", "title": "" }, { "docid": "88e193c935a216ea21cb352921deaa71", "text": "This overview paper outlines our views of actual security of biometric authentication and encryption systems. The attractiveness of some novel approaches like cryptographic key generation from biometric data is in some respect understandable, yet so far has lead to various shortcuts and compromises on security. Our paper starts with an introductory section that is followed by a section about variability of biometric characteristics, with a particular attention paid to biometrics used in large systems. The following sections then discuss the potential for biometric authentication systems, and for the use of biometrics in support of cryptographic applications as they are typically used in computer systems.", "title": "" } ]
scidocsrr
d6f4c8c6e82f1d9869c7a0557aced73a
Effects of bilingualism on the age of onset and progression of MCI and AD: evidence from executive function tests.
[ { "docid": "ca6e91eb89850bae6ff938dc2a7602d5", "text": "OBJECTIVES\nThere is strong epidemiologic evidence to suggest that older adults who maintain an active lifestyle in terms of social, mental, and physical engagement are protected to some degree against the onset of dementia. Such factors are said to contribute to cognitive reserve, which acts to compensate for the accumulation of amyloid and other brain pathologies. We present evidence that lifelong bilingualism is a further factor contributing to cognitive reserve.\n\n\nMETHODS\nData were collected from 211 consecutive patients diagnosed with probable Alzheimer disease (AD). Patients' age at onset of cognitive impairment was recorded, as was information on occupational history, education, and language history, including fluency in English and any other languages. Following this procedure, 102 patients were classified as bilingual and 109 as monolingual.\n\n\nRESULTS\nWe found that the bilingual patients had been diagnosed 4.3 years later and had reported the onset of symptoms 5.1 years later than the monolingual patients. The groups were equivalent on measures of cognitive and occupational level, there was no apparent effect of immigration status, and the monolingual patients had received more formal education. There were no gender differences.\n\n\nCONCLUSIONS\nThe present data confirm results from an earlier study, and thus we conclude that lifelong bilingualism confers protection against the onset of AD. The effect does not appear to be attributable to such possible confounding factors as education, occupational status, or immigration. Bilingualism thus appears to contribute to cognitive reserve, which acts to compensate for the effects of accumulated neuropathology.", "title": "" }, { "docid": "4dc5daa63bf280623914e2415bacd2a2", "text": "The regular use of two languages by bilingual individuals has been shown to have a broad impact on language and cognitive functioning. In this monograph, we consider four aspects of this influence. In the first section, we examine differences between monolinguals and bilinguals in children’s acquisition of language and adults’ linguistic processing, particularly in terms of lexical retrieval. Children learning two languages from birth follow the same milestones for language acquisition as monolinguals do (first words, first use of grammar) but may use different strategies for language acquisition, and they generally have a smaller vocabulary in each language than do monolingual children learning only a single language. Adult bilinguals typically take longer to retrieve individual words than monolinguals do, and they generate fewer words when asked to satisfy a constraint such as category membership or initial letter. In the second section, we consider the impact of bilingualism on nonverbal cognitive processing in both children and adults. The primary effect in this case is the enhancement of executive control functions in bilinguals. On tasks that require inhibition of distracting information, switching between tasks, or holding information in mind while performing a task, bilinguals of all ages outperform comparable monolinguals. A plausible reason is that bilinguals recruit control processes to manage their ongoing linguistic performance and that these control processes become enhanced for other unrelated aspects of cognitive processing. 
Preliminary evidence also suggests that the executive control advantage may even mitigate cognitive decline in older age and contribute to cognitive reserve, which in turn may postpone Alzheimer’s disease. In the third section, we describe the brain networks that are responsible for language processing in bilinguals and demonstrate their involvement in nonverbal executive control for bilinguals. We begin by reviewing neuroimaging research that identifies the networks used for various nonverbal executive control tasks in the literature. These networks are used as a reference point to interpret the way in which bilinguals perform both verbal and nonverbal control tasks. The results show that bilinguals manage attention to their two language systems using the same networks that are used by monolinguals performing nonverbal tasks. In the fourth section, we discuss the special circumstances that surround the referral of bilingual children (e.g., language delays) and adults (e.g., stroke) for clinical intervention. These referrals are typically based on standardized assessments that use normative data from monolingual populations, such as vocabulary size and lexical retrieval. As we have seen, however, these measures are often different for bilinguals, both for children and adults. We discuss the implications of these linguistic differences for standardized test performance and clinical approaches. We conclude by considering some questions that have important public policy implications. What are the pros and cons of French or Spanish immersion educational programs, for example? Also, if bilingualism confers advantages in certain respects, how about three languages—do the benefits increase? In the healthcare field, how can current knowledge help in the treatment of bilingual aphasia patients following stroke? Given the recent increase in bilingualism as a research topic, answers to these and other related questions should be available in the near future.", "title": "" }, { "docid": "89b30e45feda20ad34ec7bef3a877e5d", "text": "Advanced inhibitory control skills have been found in bilingual speakers as compared to monolingual controls (Bialystok, 1999). We examined whether this effect is generalized to an unstudied language group (Spanish-English bilingual) and multiple measures of executive function by administering a battery of tasks to 50 kindergarten children drawn from three language groups: native bilinguals, monolinguals (English), and English speakers enrolled in second-language immersion kindergarten. Despite having significantly lower verbal scores and parent education/income level, Spanish-English bilingual children's raw scores did not differ from their peers. After statistically controlling for these factors and age, native bilingual children performed significantly better on the executive function battery than both other groups. Importantly, the relative advantage was significant for tasks that appear to call for managing conflicting attentional demands (Conflict tasks); there was no advantage on impulse-control (Delay tasks). These results advance our understanding of both the generalizability and specificity of the compensatory effects of bilingual experience for children's cognitive development.", "title": "" } ]
[ { "docid": "5a2f217791a2614ec3699da4a8446a9f", "text": "The retinal vascular condition is a trustworthy biomarker of several ophthalmologic and cardiovascular diseases, so automatic vessel segmentation is a crucial step to diagnose and monitor these problems. Deep Learning models have recently revolutionized the state-of-the-art in several fields, since they can learn features with multiple levels of abstraction from the data itself. However, these methods can easily fall into overfitting, since a huge number of parameters must be learned. Having bigger datasets may act as regularization and lead to better models. Yet, acquiring and manually annotating images, especially in the medical field, can be a long and costly procedure. Hence, when using regular datasets, people heavily need to apply artificial data augmentation. In this work, we use a fully convolutional neural network capable of reaching the state-of-the-art. Also, we investigate the benefits of augmenting data with new samples created by warping retinal fundus images with nonlinear transformations. Our results hint that may be possible to halve the amount of data, while maintaining the same performance.", "title": "" }, { "docid": "d59b64b96cc79a2e21e705c021473f2a", "text": "Bovine colostrum (first milk) contains very high concentrations of IgG, and on average 1 kg (500 g/liter) of IgG can be harvested from each immunized cow immediately after calving. We used a modified vaccination strategy together with established production systems from the dairy food industry for the large-scale manufacture of broadly neutralizing HIV-1 IgG. This approach provides a low-cost mucosal HIV preventive agent potentially suitable for a topical microbicide. Four cows were vaccinated pre- and/or postconception with recombinant HIV-1 gp140 envelope (Env) oligomers of clade B or A, B, and C. Colostrum and purified colostrum IgG were assessed for cross-clade binding and neutralization against a panel of 27 Env-pseudotyped reporter viruses. Vaccination elicited high anti-gp140 IgG titers in serum and colostrum with reciprocal endpoint titers of up to 1 × 10(5). While nonimmune colostrum showed some intrinsic neutralizing activity, colostrum from 2 cows receiving a longer-duration vaccination regimen demonstrated broad HIV-1-neutralizing activity. Colostrum-purified polyclonal IgG retained gp140 reactivity and neutralization activity and blocked the binding of the b12 monoclonal antibody to gp140, showing specificity for the CD4 binding site. Colostrum-derived anti-HIV antibodies offer a cost-effective option for preparing the substantial quantities of broadly neutralizing antibodies that would be needed in a low-cost topical combination HIV-1 microbicide.", "title": "" }, { "docid": "6dd81725ffdb5a90c9f02c4faca784a3", "text": "In 1989 the IT function of the exploration and production division of British Petroleum Company set out to transform itself in response to a severe economic environment and poor internal perceptions of IT performance. This case study traces and analyzes the changes made over six years. The authors derive a model of the transformed IT organization comprising seven components which they suggest can guide IT departments in general as they seek to reform themselves in the late 1990's. This model is seen to fit well with recent thinking on general management in that the seven components of change can be reclassified into the Bartlett and Ghoshal (1994) framework of Purpose, Process and People. 
Some suggestions are made on how to apply the model in other organizations.", "title": "" }, { "docid": "14fdf8fa41d46ad265b48bbc64a2d3cc", "text": "Preserving edge structures is a challenge to image interpolation algorithms that reconstruct a high-resolution image from a low-resolution counterpart. We propose a new edge-guided nonlinear interpolation technique through directional filtering and data fusion. For a pixel to be interpolated, two observation sets are defined in two orthogonal directions, and each set produces an estimate of the pixel value. These directional estimates, modeled as different noisy measurements of the missing pixel are fused by the linear minimum mean square-error estimation (LMMSE) technique into a more robust estimate, using the statistics of the two observation sets. We also present a simplified version of the LMMSE-based interpolation algorithm to reduce computational cost without sacrificing much the interpolation performance. Experiments show that the new interpolation techniques can preserve edge sharpness and reduce ringing artifacts", "title": "" }, { "docid": "13f43cf82f6322c2659f08b009c75076", "text": "The revolution of Internet-of-Things (IoT) is reshaping the modern food supply chains with promising business prospects. To be successful in practice, the IoT solutions should create “income-centric” values beyond the conventional “traceability-centric” values. To accomplish what we promised to users, sensor portfolios and information fusion must correspond to the new requirements introduced by this income-centric value creation. In this paper, we propose a value-centric business-technology joint design framework. Based on it the income-centric added-values including shelf life prediction, sales premium, precision agriculture, and reduction of assurance cost are identified and assessed. Then corresponding sensor portfolios are developed and implemented. Three-tier information fusion architecture is proposed as well as examples about acceleration data processing, self-learning shelf life prediction and real-time supply chain re-planning. The feasibilities of the proposed design framework and solution have been confirmed by the field trials and an implemented prototype system.", "title": "" }, { "docid": "6ee8efea33f518d68f5582097c4c2929", "text": "The COMPOSE project aims to provide an open Marketplace for the Internet of Things as well as the necessary platform to support it. A necessary component of COMPOSE is an API that allows things, COMPOSE users and the platform to communicate. The COMPOSE API allows for things to push data to the platform, the platform to initiate asynchronous actions on the things, and COMPOSE users to retrieve and process data from the things. In this paper we present the design and implementation of the COMPOSE API, as well as a detailed description of the main key requirements that the API must satisfy. The API documentation and the source code for the platform are available online.", "title": "" }, { "docid": "06a10608b51cc1ae6c7ef653faf637a9", "text": "WE aLL KnoW how to protect our private or most valuable data from unauthorized access: encrypt it. When a piece of data M is encrypted under a key K to yield a ciphertext C=EncK(M), only the intended recipient (who knows the corresponding secret decryption key S) will be able to invert the encryption function and recover the original plaintext using the decryption algorithm DecS(C)=DecS(EncK(M))=M. 
Encryption today—in both symmetric (where S=K) and public key versions (where S remains secret even when K is made publicly available)—is widely used to achieve confidentiality in many important and well-known applications: online banking, electronic shopping, and virtual private networks are just a few of the most common applications using encryption, typically as part of a larger protocol, like the TLS protocol used to secure communication over the Internet. Still, the use of encryption to protect valuable or sensitive data can be very limiting and inflexible. Once the data M is encrypted, the corresponding ciphertext C behaves to a large extent as a black box: all we can do with the box is keep it closed or opened in order to access and operate on the data. In many situations this may be exactly what we want. For example, take a remote storage system, where we want to store a large collection of documents or data files. We store the data in encrypted form, and when we want to access a specific piece of data, we retrieve the corresponding ciphertext, decrypting it locally on our own trusted computer. But as soon as we go beyond the simple data storage/ retrieval model, we are in trouble. Say we want the remote system to provide a more complex functionality, like a database system capable of indexing and searching our data, or answering complex relational or semistructured queries. Using standard encryption technology we are immediately faced with a dilemma: either we store our data unencrypted and reveal our precious or sensitive data to the storage/ database service provider, or we encrypt it and make it impossible for the provider to operate on it. If data is encrypted, then answering even a simple counting query (for example, the number of records or files that contain a certain keyword) would typically require downloading and decrypting the entire database content. Homomorphic encryption is a special kind of encryption that allows operating on ciphertexts without decrypting them; in fact, without even knowing the decryption key. For example, given ciphertexts C=EncK(M) and C'=EncK(M'), an additively homomorphic encryption scheme would allow to combine C and C' to obtain EncK(M+M'). Such encryption schemes are immensely useful in the design of complex cryptographic protocols. For example, an electronic voting scheme may collect encrypted votes Ci=EncK(Mi) where each vote Mi is either 0 or 1, and then tally them to obtain the encryption of the outcome C=EncK(M1+..+Mn). This would be decrypted by an appropriate authority that has the decryption key and ability to announce the result, but the entire collection and tallying process would operate on encrypted data without the use of the secret key. (Of course, this is an oversimplified protocol, as many other issues must be addressed in a real election scheme, but it well illustrates the potential usefulness of homomorphic encryption.) To date, all known homomorphic encryption schemes supported essentially only one basic operation, for example, addition. But the potential of fully homomorphic encryption (that is, homomorphic encryption supporting arbitrarily complex computations on ciphertexts) is clear. Think of encrypting your queries before you send them to your favorite search engine, and receive the encryption of the result without the search engine even knowing what the query was. 
Imagine running your most computationally intensive programs on your large datasets on a cluster of remote computers, as in a cloud computing environment, while keeping both your programs, data, and results encrypted and confidential. The idea of fully homomorphic encryption schemes was first proposed by Rivest, Adleman, and Dertouzos the late 1970s, but remained a mirage for three decades, the never-to-be-found Holy Grail of cryptography. At least until 2008, when Craig Gentry announced a new approach to the construction of fully homomorphic cryptosystems. In the following paper, Gentry describes his innovative method for constructing fully homomorphic encryption schemes, the first credible solution to this long-standing major problem in cryptography and theoretical computer science at large. While much work is still to be done before fully homomorphic encryption can be used in practice, Gentry’s work is clearly a landmark achievement. Before Gentry’s discovery many members of the cryptography research community thought fully homomorphic encryption was impossible to achieve. Now, most cryptographers (me among them) are convinced the Holy Grail exists. In fact, there must be several of them, more or less efficient ones, all out there waiting to be discovered. Gentry gives a very accessible and enjoyable description of his general method to achieve fully homomorphic encryption as well as a possible instantiation of his framework recently proposed by van Dijik, Gentry, Halevi, and Vaikuntanathan. He has taken great care to explain his technically complex results, some of which have their roots in lattice-based cryptography, using a metaphorical tale of a jeweler and her quest to keep her precious materials safe, while at the same time allowing her employees to work on them. Gentry’s homomorphic encryption work is truly worth a read.", "title": "" }, { "docid": "39cc52cd5ba588e9d4799c3b68620f18", "text": "Using data from a popular online social network site, this paper explores the relationship between profile structure (namely, which fields are completed) and number of friends, giving designers insight into the importance of the profile and how it works to encourage connections and articulated relationships between users. We describe a theoretical framework that draws on aspects of signaling theory, common ground theory, and transaction costs theory to generate an understanding of why certain profile fields may be more predictive of friendship articulation on the site. Using a dataset consisting of 30,773 Facebook profiles, we determine which profile elements are most likely to predict friendship links and discuss the theoretical and design implications of our findings.", "title": "" }, { "docid": "9dafb1a1286c4bd65ad22be0a5b18eee", "text": "OBJECTIVES\nThis study mainly integrates the mature Technology-Organization-Environment (TOE) framework and recently developed Human-Organization-Technology (HOT) fit model to identify factors that affect the hospital decision in adopting Hospital Information System (HIS).\n\n\nMETHODS\nAccordingly, a hybrid Multi-Criteria-Decision-Making (MCDM) model is used to address the dependence relationships of factors with the aid of Analytic Network Processes (ANP) and Decision Making Trial and Evaluation Laboratory (DEMATEL) approaches. The initial model of the study is designed by considering four main dimensions with 13 variables as organizational innovation adoption factors with respect to HIS. 
By using DEMATEL, the interdependencies strength among the dimensions and variables are tested. The ANP method is then adopted in order to determine the relative importance of the adoption factors, and is used to identify how these factors are weighted and prioritized by the public hospital professionals, who are wholly familiar with the HIS and have years of experience in decision making in hospitals' Information System (IS) department.\n\n\nRESULTS\nThe results of this study indicate that from the experts' viewpoint \"Perceived Technical Competence\" is the most important factor in the Human dimension. In the Technology dimension, the experts agree that the \"Relative Advantage\" is more important in relation to the other factors. In the Organization dimension, \"Hospital Size\" is considered more important rather than others. And, in the Environment dimension, according to the experts judgment, \"Government Policy\" is the most important factor. The results of ANP survey from experts also reveal that the experts in the HIS field believed that these factors should not be overlooked by managers of hospitals and the adoption of HIS is more related to more consideration of these factors. In addition, from the results, it is found that the experts are more concerned about Environment and Technology for the adoption HIS.\n\n\nCONCLUSIONS\nThe findings of this study make a novel contribution in the context of healthcare industry that is to improve the decision process of innovation in adoption stage and to help enhance more the diffusion of IS in the hospital setting, which by doing so, can provide plenty of profits to the patient community and the hospitals.", "title": "" }, { "docid": "85016bc639027363932f9adf7012d7a7", "text": "The output voltage ripple is one of the most significant system parameters in switch-mode power supplies. This ripple degrades the performance of application specific integrated circuits (ASICs). The most common way to reduce it is to use additional integrated low drop-out regulators (LDO) on the ASIC. This technique usually suffers from high system efficiency as it is required for portable electronic systems. It also increases the design challenges of on-chip power management circuits and area required for the LDOs. This work presents a low-power fully integrated 0.97mm2 DC-DC Buck converter with a tuned series LDO with 1mV voltage ripple in a 0.25μm BiCMOS process. The converter prodives a power supply rejection ratio of more than 60 dB from 1 to 6MHz and a load current range of 0...400 mA. A peak efficiency of 93.7% has been measured. For high light load efficiency, automatic mode operation is implemented. To decrease the form factor and costs, the external components count has been reduced to a single inductor of 1 μH and two external capacitors of 2 μF each.", "title": "" }, { "docid": "5300e9938a545895c8b97fe6c9d06aa5", "text": "Background subtraction is a common computer vision task. We analyze the usual pixel-level approach. We develop an efficient adaptive algorithm using Gaussian mixture probability density. Recursive equations are used to constantly update the parameters and but also to simultaneously select the appropriate number of components for each pixel.", "title": "" }, { "docid": "043ee08c9249a05f1f46799f8c52d848", "text": "The objective of the project described in this paper is the development of a cybernetic prosthesis, replicating as much as possible the sensory-motor capabilities of the natural hand. 
The human hand is not only an effective tool but also an ideal instrument to acquire information from the external environment. The development of a truly human-like artificial hand is probably the most widely known paradigm of ”bionics”. The Cyberhand Project aims to obtain a cybernetic prosthetic hand interfaced to the peripheral nervous system. In particular this paper is focused on the hand mechanisms design and it presents preliminary results in developing the three fingered anthropomorphic hand prototype and its sensory system.", "title": "" }, { "docid": "bce3143cc1ba21c34ebe5d1b596731f9", "text": "Memory errors in C and C++ programs continue to be one of the dominant sources of security problems, accounting for over a third of the high severity vulnerabilities reported in 2011. Wide-spread deployment of defenses such as address-space layout randomization (ASLR) have made memory exploit development more difficult, but recent trends indicate that attacks are evolving to overcome this defense. Techniques for systematic detection and blocking of memory errors can provide more comprehensive protection that can stand up to skilled adversaries, but unfortunately, these techniques introduce much higher overheads and provide significantly less compatibility than ASLR. We propose a new memory error detection technique that explores a part of the design space that trades off some ability to detect bounds errors in order to obtain good performance and excellent backwards compatibility. On the SPECINT 2000 benchmark, the runtime overheads of our technique is about half of that reported by the fastest previous bounds-checking technique. On the compatibility front, our technique has been tested on over 7 million lines of code, which is much larger than that reported for previous bounds-checking techniques.", "title": "" }, { "docid": "a86840c1c1c6bef15889fd0e62815402", "text": "The Web offers a corpus of over 100 million tables [6], but the meaning of each table is rarely explicit from the table itself. Header rows exist in few cases and even when they do, the attribute names are typically useless. We describe a system that attempts to recover the semantics of tables by enriching the table with additional annotations. Our annotations facilitate operations such as searching for tables and finding related tables. To recover semantics of tables, we leverage a database of class labels and relationships automatically extracted from the Web. The database of classes and relationships has very wide coverage, but is also noisy. We attach a class label to a column if a sufficient number of the values in the column are identified with that label in the database of class labels, and analogously for binary relationships. We describe a formal model for reasoning about when we have seen sufficient evidence for a label, and show that it performs substantially better than a simple majority scheme. We describe a set of experiments that illustrate the utility of the recovered semantics for table search and show that it performs substantially better than previous approaches. In addition, we characterize what fraction of tables on the Web can be annotated using our approach.", "title": "" }, { "docid": "edf52710738647f7ebd4c017ddf56c2c", "text": "Tasks like search-and-rescue and urban reconnaissance benefit from large numbers of robots working together, but high levels of autonomy are needed in order to reduce operator requirements to practical levels. 
Reducing the reliance of such systems on human operators presents a number of technical challenges including automatic task allocation, global state and map estimation, robot perception, path planning, communications, and human-robot interfaces. This paper describes our 14-robot team, designed to perform urban reconnaissance missions, that won the MAGIC 2010 competition. This paper describes a variety of autonomous systems which require minimal human effort to control a large number of autonomously exploring robots. Maintaining a consistent global map, essential for autonomous planning and for giving humans situational awareness, required the development of fast loop-closing, map optimization, and communications algorithms. Key to our approach was a decoupled centralized planning architecture that allowed individual robots to execute tasks myopically, but whose behavior was coordinated centrally. In this paper, we will describe technical contributions throughout our system that played a significant role in the performance of our system. We will also present results from our system both from the competition and from subsequent quantitative evaluations, pointing out areas in which the system performed well and where interesting research problems remain.", "title": "" }, { "docid": "df9d74df931a596b7025150d11a18364", "text": "In recent years, ''gamification'' has been proposed as a solution for engaging people in individually and socially sustainable behaviors, such as exercise, sustainable consumption, and education. This paper studies demographic differences in perceived benefits from gamification in the context of exercise. On the basis of data gathered via an online survey (N = 195) from an exercise gamification service Fitocracy, we examine the effects of gender, age, and time using the service on social, hedonic, and utilitarian benefits and facilitating features of gamifying exercise. The results indicate that perceived enjoyment and usefulness of the gamification decline with use, suggesting that users might experience novelty effects from the service. The findings show that women report greater social benefits from the use of gamification. Further, ease of use of gamification is shown to decline with age. The implications of the findings are discussed. The question of how we understand gamer demographics and gaming behaviors, along with use cultures of different demographic groups, has loomed over the last decade as games became one of the main veins of entertainment and consumer culture (Yi, 2004). The deeply established perception of games being a field of entertainment dominated by young males has been challenged. Nowadays, digital gaming is a mainstream activity with broad demographics. The gender divide has been diminishing, the age span has been widening, and the average age is higher than An illustrative study commissioned by PopCap (Information Solutions Group, 2011) reveals that it is actually women in their 30s and 40s who play the popular social games on social networking services (see e.g. most – outplaying men and younger people. It is clear that age and gender perspectives on gaming activities and motivations require further scrutiny. The expansion of the game industry and the increased competition within the field has also led to two parallel developments: (1) using game design as marketing (Hamari & Lehdonvirta, 2010) and (2) gamification – going beyond what traditionally are regarded as games and implementing game design there often for the benefit of users. 
For example, services such as Mindbloom, Fitocracy, Zombies, Run!, and Nike+ are aimed at assisting the user toward beneficial behavior related to lifestyle and health choices. However, it is unclear whether we can see age and gender discrepancies in use of gamified services similar to those in other digital gaming contexts. The main difference between games and gamifica-tion is that gamification is commonly …", "title": "" }, { "docid": "a31f26b4c937805a800e33e7986ee929", "text": "In this paper, we propose a novel shape interpolation approach based on Poisson equation. We formulate the trajectory problem of shape interpolation as solving Poisson equations defined on a domain mesh. A non-linear gradient field interpolation method is proposed to take both vertex coordinates and surface orientation into account. With proper boundary conditions, the in-between shapes are reconstructed implicitly from the interpolated gradient fields, while traditional methods usually manipulate vertex coordinates directly. Besides of global shape interpolation, our method is also applicable to local shape interpolation, and can be further enhanced by incorporating with deformation. Our approach can generate visual pleasing and physical plausible morphing sequences with stable area and volume changes. Experimental results demonstrate that our technique can avoid the shrinkage problem appeared in linear shape interpolation.", "title": "" }, { "docid": "d045e59441a16874f3ccb1d8068e4e6d", "text": "In two experiments, we tested the hypotheses that (a) the difference between liars and truth tellers will be greater when interviewees report their stories in reverse order than in chronological order, and (b) instructing interviewees to recall their stories in reverse order will facilitate detecting deception. In Experiment 1, 80 mock suspects told the truth or lied about a staged event and did or did not report their stories in reverse order. The reverse order interviews contained many more cues to deceit than the control interviews. In Experiment 2, 55 police officers watched a selection of the videotaped interviews of Experiment 1 and made veracity judgements. Requesting suspects to convey their stories in reverse order improved police observers' ability to detect deception and did not result in a response bias.", "title": "" }, { "docid": "549ab60970a95d3642106dffc5d09a75", "text": "This paper proposes a human activity recognition method which is based on features learned from 3D video data without incorporating domain knowledge. The experiments on data collected by RGBD cameras produce results outperforming other techniques. Our feature encoding method follows the bag-of-visual-word model, then we use a SVM classifier to recognise the activities. We do not use skeleton or tracking information and the same technique is applied on color and depth data.", "title": "" }, { "docid": "bc5758f419fba25622e2cd5513f80e12", "text": "Poultry is a largest source of human food, rapid increasing trend in human population and badly effect of newly seen poultry diseases (N1H1 bird-flue, highly pathogenic avian influenza HPAI) making it difficult to meet the daily increasing human poultry requirements. To improve the poultry growth by using most modern technology we are proposing a complete wireless sensor network solution for poultry farming. Poultry farming is mostly divided into two categories (1) Egg production poultry farms and (2) Meat production poultry farm. 
In this study we will propose Complete Wireless network solution for poultry farming (CWNS-PF) to establish an ideal poultry farm with maximum productivity and economy. This proposed CWNS-PF is equally useful for both types of poultry farms. Our proposed system mainly consists of 7 components, if these are followed and managed well, quality and quantity of chickens can be improved which will ultimately lead to improve the human health. The proposed solution indicates a possibility that the wearable wireless sensor node would be a useful tool for early detection and outbreaks of infected chickens. Furthermore, system (including wearable sensors nodes, fix sensor nodes in the shed and in the soil) will improve the overall farm production, quality and economy.", "title": "" } ]
scidocsrr
ea6b86d40d67c7bc9deb0911b960f7b7
Stochastic Gradient VB and the Variational Auto-Encoder
[ { "docid": "92e415f4cf575b6bd701b342b7a37c92", "text": "Mean-field variational inference is a method for approximate Bayesian posterior inference. It approximates a full posterior distribution with a factorized set of distributions by maximizing a lower bound on the marginal likelihood. This requires the ability to integrate a sum of terms in the log joint likelihood using this factorized distribution. Often not all integrals are in closed form, which is typically handled by using a lower bound. We present an alternative algorithm based on stochastic optimization that allows for direct optimization of the variational lower bound. This method uses control variates to reduce the variance of the stochastic search gradient, in which existing lower bounds can play an important role. We demonstrate the approach on two non-conjugate models: logistic regression and an approximation to the HDP.", "title": "" }, { "docid": "66e157714a715a6008c61a958bb6b60a", "text": "I present an expectation-maximization (EM) algorithm for principal component analysis (PCA). The algorithm allows a few eigenvectors and eigenvalues to be extracted from large collections of high dimensional data. It is computationally very efficient in space and time. It also naturally accommodates missing information. I also introduce a new variant of PCA calledsensibleprincipal component analysis (SPCA) which defines a proper density model in the data space. Learning for SPCA is also done with an EM algorithm. I report results on synthetic and real data showing that these EM algorithms correctly and efficiently find the leading eigenvectors of the covariance of datasets in a few iterations using up to hundreds of thousands of datapoints in thousands of dimensions.", "title": "" } ]
[ { "docid": "20cb30a452bf20c9283314decfb7eb6e", "text": "In this paper, we apply bidirectional training to a long short term memory (LSTM) network for the first time. We also present a modified, full gradient version of the LSTM learning algorithm. We discuss the significance of framewise phoneme classification to continuous speech recognition, and the validity of using bidirectional networks for online causal tasks. On the TIMIT speech database, we measure the framewise phoneme classification scores of bidirectional and unidirectional variants of both LSTM and conventional recurrent neural networks (RNNs). We find that bidirectional LSTM outperforms both RNNs and unidirectional LSTM.", "title": "" }, { "docid": "6850b52405e8056710f4b3010858cfbe", "text": "spread of misinformation, rumors and hoaxes. The goal of this work is to introduce a simple modeling framework to study the diffusion of hoaxes and in particular how the availability of debunking information may contain their diffusion. As traditionally done in the mathematical modeling of information diffusion processes, we regard hoaxes as viruses: users can become infected if they are exposed to them, and turn into spreaders as a consequence. Upon verification, users can also turn into non-believers and spread the same attitude with a mechanism analogous to that of the hoax-spreaders. Both believers and non-believers, as time passes, can return to a susceptible state. Our model is characterized by four parameters: spreading rate, gullibility, probability to verify a hoax, and that to forget one's current belief. Simulations on homogeneous, heterogeneous, and real networks for a wide range of parameters values reveal a threshold for the fact-checking probability that guarantees the complete removal of the hoax from the network. Via a mean field approximation, we establish that the threshold value does not depend on the spreading rate but only on the gullibility and forgetting probability. Our approach allows to quantitatively gauge the minimal reaction necessary to eradicate a hoax.", "title": "" }, { "docid": "e50b074abe37cc8caec8e3922347e0d9", "text": "Subjectivity and sentiment analysis (SSA) has recently gained considerable attention, but most of the resources and systems built so far are tailored to English and other Indo-European languages. The need for designing systems for other languages is increasing, especially as blogging and micro-blogging websites become popular throughout the world. This paper surveys different techniques for SSA for Arabic. After a brief synopsis about Arabic, we describe the main existing techniques and test corpora for Arabic SSA that have been introduced in the literature.", "title": "" }, { "docid": "9648c6cbdd7a04c595b7ba3310f32980", "text": "Increase in identity frauds, crimes, security there is growing need of fingerprint technology in civilian and law enforcement applications. Partial fingerprints are of great interest which are either found at crime scenes or resulted from improper scanning. These fingerprints are poor in quality and the number of features present depends on size of fingerprint. Due to the lack of features such as core and delta, general fingerprint matching algorithms do not perform well for partial fingerprint matching. By using combination of level1 and level 2 features accuracy of partial matching cannot be increased. Therefore, we utilize extended features in combination with other feature set. 
Efficacious fusion methods for coalesce of different modality systems perform better for these types of prints. In this paper, we propose a method for partial fingerprint matching using score level fusion of minutiae based radon transform and pores based LBP extraction. To deal with broken ridges and fragmentary information, radon transform is used to get local information around minutiae. Finally, we evaluate the performance by comparing Equal Error Rate (ERR) of proposed method and existing method and proposed method reduces the error rate to 1.84%.", "title": "" }, { "docid": "7d2f5505b2a60fb113524903aa5acc7d", "text": "Robust object recognition is a crucial skill for robots operating autonomously in real world environments. Range sensors such as LiDAR and RGBD cameras are increasingly found in modern robotic systems, providing a rich source of 3D information that can aid in this task. However, many current systems do not fully utilize this information and have trouble efficiently dealing with large amounts of point cloud data. In this paper, we propose VoxNet, an architecture to tackle this problem by integrating a volumetric Occupancy Grid representation with a supervised 3D Convolutional Neural Network (3D CNN). We evaluate our approach on publicly available benchmarks using LiDAR, RGBD, and CAD data. VoxNet achieves accuracy beyond the state of the art while labeling hundreds of instances per second.", "title": "" }, { "docid": "41b1a0c362c7bdb77b7dbcc20adcd532", "text": "Augmented reality involves the use of models and their associated renderings to supplement information in a real scene. In order for this information to be relevant or meaningful, the models must be positioned and displayed in such a way that they align with their corresponding real objects. For practical reasons this alignment cannot be known a priori, and cannot be hard-wired into a system. Instead a simple, reliable alignment or calibration process is performed so that computer models can be accurately registered with their real-life counterparts. We describe the design and implementation of such a process and we show how it can be used to create convincing interactions between real and virtual objects.", "title": "" }, { "docid": "02d254abf79e779cf6ec827c0826c2be", "text": "Hosts used for the production of recombinant proteins are typically high-protein secreting mutant strains that have been selected for a specific purpose, such as efficient production of cellulose-degrading enzymes. Somewhat surprisingly, sequencing of the genomes of a series of mutant strains of the cellulolytic Trichoderma reesei, widely used as an expression host for recombinant gene products, has shed very little light on the nature of changes that boost high-level protein secretion. While it is generally agreed and shown that protein secretion in filamentous fungi occurs mainly through the hyphal tip, there is growing evidence that secretion of proteins also takes place in sub-apical regions. Attempts to increase correct folding and thereby the yields of heterologous proteins in fungal hosts by co-expression of cellular chaperones and foldases have resulted in variable success; underlying reasons have been explored mainly at the transcriptional level. The observed physiological changes in fungal strains experiencing increasing stress through protein overexpression under strong gene promoters also reflect the challenge the host organisms are experiencing. 
It is evident, that as with other eukaryotes, fungal endoplasmic reticulum is a highly dynamic structure. Considering the above, there is an emerging body of work exploring the use of weaker expression promoters to avoid undue stress. Filamentous fungi have been hailed as candidates for the production of pharmaceutically relevant proteins for therapeutic use. One of the biggest challenges in terms of fungally produced heterologous gene products is their mode of glycosylation; fungi lack the functionally important terminal sialylation of the glycans that occurs in mammalian cells. Finally, exploration of the metabolic pathways and fluxes together with the development of sophisticated fermentation protocols may result in new strategies to produce recombinant proteins in filamentous fungi.", "title": "" }, { "docid": "eb0ec729796a93f36d348e70e3fa9793", "text": "This paper proposes a novel approach to measure the object size using a regular digital camera. Nowadays, the remote object-size measurement is very crucial to many multimedia applications. Our proposed computer-aided automatic object-size measurement technique is based on a new depth-information extraction (range finding) scheme using a regular digital camera. The conventional range finders are often carried out using the passive method such as stereo cameras or the active method such as ultrasonic and infrared equipment. They either require the cumbersome set-up or deal with point targets only. The proposed approach requires only a digital camera with certain image processing techniques and relies on the basic principles of visible light. Experiments are conducted to evaluate the performance of our proposed new object-size measurement mechanism. The average error-percentage of this method is below 2%. It demonstrates the striking effectiveness of our proposed new method.", "title": "" }, { "docid": "f408d2dbbc48d3681aa2925a37e90e43", "text": "Smart home technologies have become, in the last few years, a very active topic of research. However, many scientists working in this field do not possess smart home infrastructures allowing them to conduct satisfactory experiments in a concrete environment with real data. To address this issue, this paper presents a new flexible 3D smart home infrastructure simulator, which is developed in Java specifically to help researchers working in the field of activity recognition. A set of pre-recorded scenarios, made with data extracted from clinical trials, will be included with the simulator in order to give a common foundation to test activity recognition algorithms. The goal is to release the SIMACT simulator with a visual scenario editor as an open source component that will benefit the whole smart home research community.", "title": "" }, { "docid": "e46943cc1c73a56093d4194330d52d52", "text": "This paper deals with the compact modeling of an emerging technology: the carbon nanotube field-effect transistor (CNTFET). The paper proposed two design-oriented compact models, the first one for CNTFET with a classical behavior (MOSFET-like CNTFET), and the second one for CNTFET with an ambipolar behavior (Schottky-barrier CNTFET). Both models have been compared with exact numerical simulations and then implemented in VHDL-AMS", "title": "" }, { "docid": "1c6cfa3ca676a8ee8b6ceef6c992312b", "text": "The paper presents some of the results obtained by studying Petri nets’ capability for modeling and analysis of Supply Chain performances. 
It is well known that the absence of coordination in Supply Chain management causes the so-called Bullwhip Effect, in which fluctuations in orders increase as they move up the chain. A simple three-stage supply chain with one player at each stage – a retailer, a wholesaler and a manufacturer – is considered. The model of the chain is developed using a timed, hierarchical coloured Petri Net. Simulation and performance analysis have been performed applying software package CPN Tools.", "title": "" }, { "docid": "11c3b4c63bb9cdc19f542bb477cca191", "text": "Although there are many motion planning techniques, there is no single one that performs optimally in every environment for every movable object. Rather, each technique has different strengths and weaknesses which makes it best-suited for particular types of situations. Also, since a given environment can consist of vastly different regions, there may not even be a single planner that is well suited for the problem. Ideally, one would use a suite of planners in concert to solve the problem by applying the best-suited planner in each region. In this paper, we propose an automated framework for feature-sensitive motion planning. We use a machine learning approach to characterize and partition C-space into (possibly overlapping) regions that are well suited to one of the planners in our library of roadmap-based motion planning methods. After the best-suited method is applied in each region, their resulting roadmaps are combined to form a roadmap of the entire planning space. We demonstrate on a range of problems that our proposed feature-sensitive approach achieves results superior to those obtainable by any of the individual planners on their own. “A Machine Learning Approach for ...”, Morales et al. TR04-001, Parasol Lab, Texas A&M, February 2004 1", "title": "" }, { "docid": "3a42d8134c586f866cfec645850566f5", "text": "Underwater images suffer from color distortion and low contrast, because light is attenuated as it propagates through water. The attenuation varies with wavelength and depends both on the properties of the water body in which the image was taken and the 3D structure of the scene, making it difficult to restore the colors. Existing single underwater image enhancement techniques either ignore the wavelength dependency of the attenuation, or assume a specific spectral profile. We propose a new method that takes into account multiple spectral profiles of different water types, and restores underwater scenes from a single image. We show that by estimating just two additional global parameters the attenuation ratios of the blue-red and blue-green color channels the problem of underwater image restoration can be reduced to single image dehazing, where all color channels have the same attenuation coefficients. Since we do not know the water type ahead of time, we try different parameter sets out of an existing library of water types. Each set leads to a different restored image and the one that best satisfies the Gray-World assumption is chosen. The proposed single underwater image restoration method is fully automatic and is based on a more comprehensive physical image formation model than previously used. We collected a dataset of real images taken in different locations with varying water properties and placed color charts in the scenes. Moreover, to obtain ground truth, the 3D structure of the scene was calculated based on stereo imaging. 
This dataset enables a quantitative evaluation of restoration algorithms on natural images and shows the advantage of the proposed method.", "title": "" }, { "docid": "445a49977b5d36f9da462e07faf79548", "text": "In this paper, we consider the use of deep neural networks in the context of Multiple-Input-Multiple-Output (MIMO) detection. We give a brief introduction to deep learning and propose a modern neural network architecture suitable for this detection task. First, we consider the case in which the MIMO channel is constant, and we learn a detector for a specific system. Next, we consider the harder case in which the parameters are known yet changing and a single detector must be learned for all multiple varying channels. We demonstrate the performance of our deep MIMO detector using numerical simulations in comparison to competing methods including approximate message passing and semidefinite relaxation. The results show that deep networks can achieve state of the art accuracy with significantly lower complexity while providing robustness against ill conditioned channels and mis-specified noise variance.", "title": "" }, { "docid": "7a84328148fac2738d8954976b09aa45", "text": "The region was covered by 1:250 000 mapping by the Geological Survey of Canada during the mid 1940s (Lord, 1948). A number of showings were found. One of these, the Marmot, was the focus of the first modern exploration (1960s) in the general area. At the same time there was significant exploration activity for porphyry copper and molybdenum mineralization in the intrusive belt running north and south through the McConnell Range. A large gossan was discovered in 1966 at the present site of the Kemess North prospect and led to similar exploration on nearby ground. Falconbridge Nickel Ltd., during a reconnaissance helicopter flight in 1971, discovered a malachite-stained bed in the Sustut drainage that was traceable for over 2500 feet. Their assessment suggested a replacement copper deposi t hosted by volcaniclastic rocks in the upper part of the Takla Group. Numerous junior and major resource companies acquired ground in the area. In 1972 copper was found on the Willow cliffs on the opposite side of the Sustut River and a porphyry style target was identified at the Day. In 1973 the B.C. Geological Survey conducted a mineral deposit study of the Sustut copper area (Church, 1974a). The Geological Survey of Canada returned to pursue general and detailed studies within the McConnell sheet (Richards 1976, and Monger 1977). Monger and Church (1976) revised the stratigraphic nomenclature based on breaks and lithological changes in the volcanic succession supported by fossil data and field observations. In 1983, follow up of a gold-copper-molybdenum soil anomaly led to the discovery of the Kemess South porphyry deposit.", "title": "" }, { "docid": "55ba29fa0dde98b30ab5c88151c67e65", "text": "The primary objective of this paper is to critically evaluate empirical research on some variables relating to the configuration of text on screen to consolidate our current knowledge in these areas. The text layout variables are line length, columns, window size and interlinear spacing, with an emphasis on line length due to the larger number of studies related to this variable. Methodological issues arising from individual studies and from comparisons among studies are identified. 
A synthesis of results is offered which provides alternative interpretations of some findings and identifies the number of characters per line as the critical variable in looking at line length. Further studies are needed to explore the interactions between characters per line and eye movements, scrolling movements, reading patterns and familiarity with formats.", "title": "" }, { "docid": "057a521ce1b852591a44417e788e4541", "text": "We introduce InfraStructs, material-based tags that embed information inside digitally fabricated objects for imaging in the Terahertz region. Terahertz imaging can safely penetrate many common materials, opening up new possibilities for encoding hidden information as part of the fabrication process. We outline the design, fabrication, imaging, and data processing steps to fabricate information inside physical objects. Prototype tag designs are presented for location encoding, pose estimation, object identification, data storage, and authentication. We provide detailed analysis of the constraints and performance considerations for designing InfraStruct tags. Future application scenarios range from production line inventory, to customized game accessories, to mobile robotics.", "title": "" }, { "docid": "41023aae4c9e5038aeae834d547d525f", "text": "Coal consumption in Malaysia and Indonesia is growing at the rate of 9.7 and 4.7% per year since 2002, respectively. The increase in coal utilization usually tallies fairly well with the increase in CO2 emission. The present study attempts at predicting the emissions of CO2 from coal fired power plants from 2005 until 2020. The paper also analyzes the potential of carbon capture (CC) program as a source of foreign direct investment in Malaysia and Indonesia. The perceived emission rate is based on the percentage of coal for energy mix and coal consumption for electricity generation. Results from the study shows that CO2 emission from coal fired power plants will grow at 4.1% per year to reach 98 million tons in Malaysia and 171 million tons in Indonesia by 2020. It is learnt that adsorption technology can be applied in coal fired power plants to reduce CO2 emissions in Malaysia and Indonesia. Integrated Gasification Combined Cycle (IGCC) power plants incooperating a pre-combustion capture with the adsorption technology is one of the available options for new plants in Malaysia and Indonesia.", "title": "" }, { "docid": "ecff3bff28db8ec04b0e9c4ecde9e984", "text": "The performance of computer aided ECG analysis depends on the precise and accurate delineation of QRS-complexes. This paper presents an application of K-Nearest Neighbor (KNN) algorithm as a classifier for detection of QRS-complex in ECG. The proposed algorithm is evaluated on two manually annotated standard databases such as CSE and MIT-BIH Arrhythmia database. In this work, a digital band-pass filter is used to reduce false detection caused by interference present in ECG signal and further gradient of the signal is used as a feature for QRS-detection. In addition the accuracy of KNN based classifier is largely dependent on the value of K and type of distance metric. The value of K = 3 and Euclidean distance metric has been proposed for the KNN classifier, using fivefold cross-validation. The detection rates of 99.89% and 99.81% are achieved for CSE and MIT-BIH databases respectively. The QRS detector obtained a sensitivity Se = 99.86% and specificity Sp = 99.86% for CSE database, and Se = 99.81% and Sp = 99.86% for MIT-BIH Arrhythmia database. 
A comparison is also made between proposed algorithm and other published work using CSE and MIT-BIH Arrhythmia databases. These results clearly establishes KNN algorithm for reliable and accurate QRS-detection.", "title": "" }, { "docid": "697ed30a5d663c1dda8be0183fa4a314", "text": "Due to the Web expansion, the prediction of online news popularity is becoming a trendy research topic. In this paper, we propose a novel and proactive Intelligent Decision Support System (IDSS) that analyzes articles prior to their publication. Using a broad set of extracted features (e.g., keywords, digital media content, earlier popularity of news referenced in the article) the IDSS first predicts if an article will become popular. Then, it optimizes a subset of the articles features that can more easily be changed by authors, searching for an enhancement of the predicted popularity probability. Using a large and recently collected dataset, with 39,000 articles from the Mashable website, we performed a robust rolling windows evaluation of five state of the art models. The best result was provided by a Random Forest with a discrimination power of 73%. Moreover, several stochastic hill climbing local searches were explored. When optimizing 1000 articles, the best optimization method obtained a mean gain improvement of 15 percentage points in terms of the estimated popularity probability. These results attest the proposed IDSS as a valuable tool for online news authors.", "title": "" } ]
scidocsrr
a82c1ad6b24fc85ace930fa89d96b107
Predicting Future Hourly Residential Electrical Consumption: A Machine Learning Case Study
[ { "docid": "fabc65effd31f3bb394406abfa215b3e", "text": "Statistical learning theory was introduced in the late 1960's. Until the 1990's it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the middle of the 1990's new types of learning algorithms (called support vector machines) based on the developed theory were proposed. This made statistical learning theory not only a tool for the theoretical analysis but also a tool for creating practical algorithms for estimating multidimensional functions. This article presents a very general overview of statistical learning theory including both theoretical and algorithmic aspects of the theory. The goal of this overview is to demonstrate how the abstract learning theory established conditions for generalization which are more general than those discussed in classical statistical paradigms and how the understanding of these conditions inspired new algorithmic approaches to function estimation problems. A more detailed overview of the theory (without proofs) can be found in Vapnik (1995). In Vapnik (1998) one can find detailed description of the theory (including proofs).", "title": "" } ]
[ { "docid": "796eeaa652ad1efe467d828fe30e1afb", "text": "Deep neural networks (DNN) have been shown to be very effective at solving challenging problems in several areas of computing, including vision, speech, and natural language processing. However, traditional platforms for implementing these DNNs are often very power hungry, which has lead to significant efforts in the development of configurable platforms capable of implementing these DNNs efficiently. One of these platforms, the IBM TrueNorth processor, has demonstrated very low operating power in performing visual computing and neural network classification tasks in real-time. The neuron computation, synaptic memory, and communication fabrics are all configurable, so that a wide range of network types and topologies can be mapped to TrueNorth. This reconfigurability translates into the capability to support a wide range of low-power functions in addition to feed-forward DNN classifiers, including for example, the audio processing functions presented here.In this work, we propose an end-to-end audio processing pipeline that is implemented entirely on a TrueNorth processor and designed to specifically leverage the highly-parallel, low-precision computing primitives TrueNorth offers. As part of this pipeline, we develop an audio feature extractor (LATTE) designed for implementation on TrueNorth, and explore the tradeoffs among several design variants in terms of accuracy, power, and performance. We customize the energy-efficient deep neuromorphic networks structures that our design utilizes as the classifier and show how classifier parameters can trade between power and accuracy. In addition to enabling a wide range of diverse functions, the reconfigurability of TrueNorth enables re-training and re-programming the system to satisfy varying energy, speed, area, and accuracy requirements. The resulting system's end-to-end power consumption can be as low as <inline-formula><tex-math notation=\"LaTeX\"> $14.43\\text{mW}$</tex-math><alternatives><inline-graphic xlink:href=\"tsai-ieq1-2630683.gif\"/></alternatives> </inline-formula>, which would give up to 100 hours of continuous usage with button cell batteries (CR3023 <inline-formula><tex-math notation=\"LaTeX\">$1.5\\; \\text{Whr}$</tex-math><alternatives> <inline-graphic xlink:href=\"tsai-ieq2-2630683.gif\"/></alternatives></inline-formula>) or 450 hours with cellphone batteries (iPhone 6s <inline-formula><tex-math notation=\"LaTeX\">$6.55\\; \\text{Whr}$</tex-math><alternatives> <inline-graphic xlink:href=\"tsai-ieq3-2630683.gif\"/></alternatives></inline-formula>).", "title": "" }, { "docid": "55f11df001ffad95e07cd20b3b27406d", "text": "CNNs have proven to be a very successful yet computationally expensive technique which made them slow to be adopted in mobile and embedded systems. There is a number of possible optimizations: minimizing the memory footprint, using lower precision and approximate computation, reducing computation cost of convolutions with FFTs. These have been explored recently and were shown to work. This project take ideas of using FFTs further and develops an alternative way to computing CNN – purely in frequency domain. As a side result it develops intuition about nonlinear elements: why do they work and how new types can be created.", "title": "" }, { "docid": "37bec9d0ebe7910f376fa6c3212be51d", "text": "A non-uniform high impedance surface (NU-HIS or tapered HIS) is proposed against a uniform HIS (U-HIS). 
The surfaces are one dimensional (1D) and made of parallel wires with a length a little less than half wavelength. To show the effects of the surfaces, a half wavelength dipole antenna is placed near three different surfaces, PEC, U-HIS, NU-HIS while the dipole height is fixed. These three EM problems are analyzed numerically by the method of moments (MoM) and are compared. It is observed that NU-HIS yields more bandwidth than U-HIS, while in both cases the structures have identical volumes and nearly identical gains. This effect is attributed to the decrease in sensitivity of the HIS to the incidence angle caused by applying non-uniformity to the elements radii", "title": "" }, { "docid": "0353dbfd30bbfe3f47d471d6ead52010", "text": "In traditional 3D model reconstruction, the texture information is captured in a certain dynamic range, which is usually insufficient for rendering under new environmental light. This paper proposes a novel approach for multi-view stereo (MVS) reconstruction of models with high dynamic range (HDR) texture. In the proposed approach, multi-view images are firstly taken with different exposure times simultaneously. Corresponding pixels in adjacent viewpoints are then extracted using a multi-projection method, to robustly recover the response function of the camera. With the response function, pixel values in the differently exposed images can be converted to the desired relative radiance values. Subsequently, geometry reconstruction and HDR texture recovering can be achieved using these values. Experimental results demonstrate that our method can recover the HDR texture for the 3D model efficiently while keep high geometry precision. With our reconstructed HDR texture model, high-quality scene re-lighting is exemplarily exhibited.", "title": "" }, { "docid": "fb71d22cad59ba7cf5b9806e37df3340", "text": "Templates are effective tools for increasing the precision of natural language requirements and for avoiding ambiguities that may arise from the use of unrestricted natural language. When templates are applied, it is important to verify that the requirements are indeed written according to the templates. If done manually, checking conformance to templates is laborious, presenting a particular challenge when the task has to be repeated multiple times in response to changes in the requirements. In this article, using techniques from natural language processing (NLP), we develop an automated approach for checking conformance to templates. Specifically, we present a generalizable method for casting templates into NLP pattern matchers and reflect on our practical experience implementing automated checkers for two well-known templates in the requirements engineering community. We report on the application of our approach to four case studies. Our results indicate that: (1) our approach provides a robust and accurate basis for checking conformance to templates; and (2) the effectiveness of our approach is not compromised even when the requirements glossary terms are unknown. This makes our work particularly relevant to practice, as many industrial requirements documents have incomplete glossaries.", "title": "" }, { "docid": "d54e33049b3f5170ec8bd09d8f17c05c", "text": "Deep learning algorithms seek to exploit the unknown structure in the input distribution in order to discover good representations, often at multiple levels, with higher-level learned features defined in terms of lower-level features. 
The objective is to make these higherlevel representations more abstract, with their individual features more invariant to most of the variations that are typically present in the training distribution, while collectively preserving as much as possible of the information in the input. Ideally, we would like these representations to disentangle the unknown factors of variation that underlie the training distribution. Such unsupervised learning of representations can be exploited usefully under the hypothesis that the input distribution P (x) is structurally related to some task of interest, say predicting P (y|x). This paper focusses on why unsupervised pre-training of representations can be useful, and how it can be exploited in the transfer learning scenario, where we care about predictions on examples that are not from the same distribution as the training distribution.", "title": "" }, { "docid": "072a203514eb53db7aa9aaa55c6745d8", "text": "The possibility to estimate accurately the subsurface electric properties from ground-penetrating radar (GPR) signals using inverse modeling is obstructed by the appropriateness of the forward model describing the GPR subsurface system. In this paper, we improved the recently developed approach of Lambot et al. whose success relies on a stepped-frequency continuous-wave (SFCW) radar combined with an off-ground monostatic transverse electromagnetic horn antenna. This radar configuration enables realistic and efficient forward modeling. We included in the initial model: 1) the multiple reflections occurring between the antenna and the soil surface using a positive feedback loop in the antenna block diagram and 2) the frequency dependence of the electric properties using a local linear approximation of the Debye model. The model was validated in laboratory conditions on a tank filled with a two-layered sand subject to different water contents. Results showed remarkable agreement between the measured and modeled Green's functions. Model inversion for the dielectric permittivity further demonstrated the accuracy of the method. Inversion for the electric conductivity led to less satisfactory results. However, a sensitivity analysis demonstrated the good stability properties of the inverse solution and put forward the necessity to reduce the remaining clutter by a factor 10. This may partly be achieved through a better characterization of the antenna transfer functions and by performing measurements in an environment without close extraneous scatterers.", "title": "" }, { "docid": "3f6f138edff3f50184f78c902dd15a04", "text": "A program that makes an existing website look like a database is called a wrapper. Wrapper learning is the problem of learning website wrappers from examples. We present a wrapper-learning system called WL2 that can exploit several different representations of a document. Examples of such different representations include DOM-level and token-level representations, as well as two-dimensional geometric views of the rendered page (for tabular data) and representations of the visual appearance of text asm it will be rendered. Additionally, the learning system is modular, and can be easily adapted to new domains and tasks. The learning system described is part of an \"industrial-strength\" wrapper management system that is in active use at WhizBang Labs. 
Controlled experiments show that the learner has broader coverage and a faster learning rate than earlier wrapper-learning systems.", "title": "" }, { "docid": "cf074f806c9b78947c54fb7f41167d9e", "text": "Applications of Machine Learning to Support Dementia Care through Commercially Available O↵-the-Shelf Sensing", "title": "" }, { "docid": "ac07f85a8d6114061569e043e19747f5", "text": "In this paper, some novel and modified driving techniques for a single switch zero voltage switching (ZVS) topology are introduced. These medium/high frequency and digitally synthesized driving techniques can be applied to decrease the dangers of peak currents that may damage the switching circuit when switching in out of nominal conditions. The technique is fully described and evaluated experimentally in a 2500W prototype intended for a domestic induction cooking application.", "title": "" }, { "docid": "418c7d8269fa79b25cfef15b6f4b39a5", "text": "Financial and economic data are typically available in the form of tables and comprise mostly of monetary amounts, numeric and other domain-specific fields. They can be very hard to search and they are often made available out of context, or in forms which cannot be integrated with systems where text is required, such as voice-enabled devices. This work presents a novel system that enables both experts in the finance domain and non-expert users to search financial data with both keyword and natural language queries. Our system answers the queries with an automatically generated textual description using Natural Language Generation (NLG). The answers are further enriched with derived information, not explicitly asked in the user query, to provide the context of the answer. The system is designed to be flexible in order to accommodate new use cases without significant development effort, thus allowing fast integration of new datasets.", "title": "" }, { "docid": "0737e99613b83104bc9390a46fbc4aeb", "text": "Natural language text exhibits hierarchical structure in a variety of respects. Ideally, we could incorporate our prior knowledge of this hierarchical structure into unsupervised learning algorithms that work on text data. Recent work by Nickel and Kiela (2017) proposed using hyperbolic instead of Euclidean embedding spaces to represent hierarchical data and demonstrated encouraging results when embedding graphs. In this work, we extend their method with a re-parameterization technique that allows us to learn hyperbolic embeddings of arbitrarily parameterized objects. We apply this framework to learn word and sentence embeddings in hyperbolic space in an unsupervised manner from text corpora. The resulting embeddings seem to encode certain intuitive notions of hierarchy, such as wordcontext frequency and phrase constituency. However, the implicit continuous hierarchy in the learned hyperbolic space makes interrogating the model’s learned hierarchies more difficult than for models that learn explicit edges between items. The learned hyperbolic embeddings show improvements over Euclidean embeddings in some – but not all – downstream tasks, suggesting that hierarchical organization is more useful for some tasks than others.", "title": "" }, { "docid": "89526592b297342697c131daba388450", "text": "Fundamental and advanced developments in neum-fuzzy synergisms for modeling and control are reviewed. The essential part of neuro-fuuy synergisms comes from a common framework called adaptive networks, which unifies both neural networks and fuzzy models. 
The fuzzy models under the framework of adaptive networks is called Adaptive-Network-based Fuzzy Inference System (ANFIS), which possess certain advantages over neural networks. We introduce the design methods for ANFIS in both modeling and control applications. Current problems and future directions for neuro-fuzzy approaches are also addressed.", "title": "" }, { "docid": "891c148f7a6a03d5af33d44ace8d1bc2", "text": "Data center network (DCN) architecture is regarded as one of the most important determinants of network performance. As the most typical representatives of DCN architecture designs, the server-centric scheme stands out due to its good performance in various aspects. In this paper, we firstly present the design, implementation and evaluation of SprintNet, a novel server-centric network architecture for data centers. SprintNet achieves high performance in network capacity, fault tolerance, and network latency. SprintNet is also a scalable, yet low-diameter network architecture where the maximum shortest distance between any pair of servers can be limited by no more than four and is independent of the number of layers. The specially designed routing schemes for SprintNet strengthen its merits. However, all of these kind of server-centric architectures still suffer from some critical shortcomings owing to the server’s responsibility of forwarding packets. With regard to these issues, in this paper, we then propose a hardware based approach, named ''Forwarding Unit'' to provide an effective solution to these drawbacks and improve the efficiency of server-centric architectures. Both theoretical analysis and simulations are conducted to evaluate the overall performance of SprintNet and the Forwarding Unit approach with respect to cost-effectiveness, fault-tolerance, system latency, packet loss ratio, aggregate bottleneck throughput, and average path length. The evaluation results convince the feasibility and good performance of both SprintNet and Forwarding Unit. 2015 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "75d3edb41070203a1bea83d91719354a", "text": "Atrial fibrillation (AF), the most common sustained cardiac arrhythmia, is seen in approximately 1.5% of the U.S. population [1] and results in substantial morbidity and mortality [2]. One of the largest U.S. epidemiological studies, the Framingham Heart Study, predicted that AF prevalence doubles with each advancing decade of age, from 0.5% at age 50-59 years to almost 9% at age 80-89 years, independent of the increasing prevalence of known predisposing conditions [2]. Although medical treatment involving radiofrequency catheter ablation has become the well-accepted management strategy for AF [3] failure of this therapy is common, with only two-thirds or less of the patients treated remaining free of AF on long-term followup [3]. Early recurrence of atrial tachyarrhythmia, usually defined as arrhythmia recurrence within the first 3 months following ablation, is frequently associated with late recurrence of atrial tachyarrhythmia [4,5]. Acute myocardial injury and the subsequent inflammatory response, as well as modifications of the cardiac autonomic nervous system, provide an early and potentially reversible pro-arrhythmic substrate because of altered atrial myocardial conduction and refractoriness [3]. Research has shown that psychological stressors and imbalance in the autonomic nervous system (ANS) are the most common triggers for paroxysmal AF [6,7]. 
The mind-body therapy yoga has been shown to reduce stress and maintain autonomic nervous system balance [8]: hence, use of complementary health", "title": "" }, { "docid": "e0fcb8834599385516f25dfbe9058b9d", "text": "As urban crimes (e.g., burglary and robbery) negatively impact our everyday life and must be addressed in a timely manner, predicting crime occurrences is of great importance for public safety and urban sustainability. However, existing methods do not fully explore dynamic crime patterns as factors underlying crimes may change over time. In this paper, we develop a new crime prediction framework--DeepCrime, a deep neural network architecture that uncovers dynamic crime patterns and carefully explores the evolving inter-dependencies between crimes and other ubiquitous data in urban space. Furthermore, our DeepCrime framework is capable of automatically capturing the relevance of crime occurrences across different time periods. In particular, our DeepCrime framework enables predicting crime occurrences of different categories in each region of a city by i) jointly embedding all spatial, temporal, and categorical signals into hidden representation vectors, and ii) capturing crime dynamics with an attentive hierarchical recurrent network. Extensive experiments on real-world datasets demonstrate the superiority of our framework over many competitive baselines across various settings.", "title": "" }, { "docid": "41e74e0226ef48076aa3e33f2f652b80", "text": "Gastroschisis and omphalocele are the two most common congenital abdominal wall defects. Both are frequently detected prenatally due to routine maternal serum screening and fetal ultrasound. Prenatal diagnosis may influence timing, mode and location of delivery. Prognosis for gastroschisis is primarily determined by the degree of bowel injury, whereas prognosis for omphalocele is related to the number and severity of associated anomalies. The surgical management of both conditions consists of closure of the abdominal wall defect, while minimizing the risk of injury to the abdominal viscera either through direct trauma or due to increased intra-abdominal pressure. Options include primary closure or a variety of staged approaches. Long-term outcome is favorable in most cases; however, significant associated anomalies (in the case of omphalocele) or intestinal dysfunction (in the case of gastroschisis) may result in morbidity and mortality.", "title": "" }, { "docid": "56923afdaa642f201aa5a72ec78d7cfb", "text": "Mobile phones are rapidly becoming the most widespread and popular form of communication; thus, they are also the most important attack target of malware. The amount of malware in mobile phones is increasing exponentially and poses a serious security threat. Google’s Android is the most popular smart phone platforms in the world and the mechanisms of permission declaration access control cannot identify the malware. In this paper, we proposed an ensemble machine learning system for the detection of malware on Android devices. More specifically, four groups of features including permissions, monitoring system events, sensitive API and permission rate are extracted to characterize each Android application (app). Then an ensemble random forest classifier is learned to detect whether an app is potentially malicious or not. The performance of our proposed method is evaluated on the actual data set using tenfold cross-validation. The experimental results demonstrate that the proposed method can achieve a highly accuracy of 89.91%. 
For further assessing the performance of our method, we compared it with the state-of-the-art support vector machine classifier. Comparison results demonstrate that the proposed method is extremely promising and could provide a cost-effective alternative for Android malware detection.", "title": "" }, { "docid": "8b548e2c1922e6e105ab40b60fd7433c", "text": "Although there have been many decades of research and commercial presence on high performance general purpose processors, there are still many applications that require fully customized hardware architectures for further computational acceleration. Recently, deep learning has been successfully used to learn in a wide variety of applications, but their heavy computation demand has considerably limited their practical applications. This paper proposes a fully pipelined acceleration architecture to alleviate high computational demand of an artificial neural network (ANN) which is restricted Boltzmann machine (RBM) ANNs. The implemented RBM ANN accelerator (integrating $1024\\times 1024$ network size, using 128 input cases per batch, and running at a 303-MHz clock frequency) integrated in a state-of-the art field-programmable gate array (FPGA) (Xilinx Virtex 7 XC7V-2000T) provides a computational performance of 301-billion connection-updates-per-second and about 193 times higher performance than a software solution running on general purpose processors. Most importantly, the architecture enables over 4 times (12 times in batch learning) higher performance compared with a previous work when both are implemented in an FPGA device (XC2VP70).", "title": "" } ]
scidocsrr
2b858b83c97ce14a8bf33708d3bb3d09
Personalized Grade Prediction: A Data Mining Approach
[ { "docid": "ab23f66295574368ccd8fc4e1b166ecc", "text": "Although the educational level of the Portuguese population has improved in the last decades, the statistics keep Portugal at Europe’s tail end due to its high student failure rates. In particular, lack of success in the core classes of Mathematics and the Portuguese language is extremely serious. On the other hand, the fields of Business Intelligence (BI)/Data Mining (DM), which aim at extracting high-level knowledge from raw data, offer interesting automated tools that can aid the education domain. The present work intends to approach student achievement in secondary education using BI/DM techniques. Recent real-world data (e.g. student grades, demographic, social and school related features) was collected by using school reports and questionnaires. The two core classes (i.e. Mathematics and Portuguese) were modeled under binary/five-level classification and regression tasks. Also, four DM models (i.e. Decision Trees, Random Forest, Neural Networks and Support Vector Machines) and three input selections (e.g. with and without previous grades) were tested. The results show that a good predictive accuracy can be achieved, provided that the first and/or second school period grades are available. Although student achievement is highly influenced by past evaluations, an explanatory analysis has shown that there are also other relevant features (e.g. number of absences, parent’s job and education, alcohol consumption). As a direct outcome of this research, more efficient student prediction tools can be be developed, improving the quality of education and enhancing school resource management.", "title": "" }, { "docid": "ae67aadc3cddd3642bf0a7f6336b9817", "text": "To increase efficacy in traditional classroom courses as well as in Massive Open Online Courses (MOOCs), automated systems supporting the instructor are needed. One important problem is to automatically detect students that are going to do poorly in a course early enough to be able to take remedial actions. Existing grade prediction systems focus on maximizing the accuracy of the prediction while overseeing the importance of issuing timely and personalized predictions. This paper proposes an algorithm that predicts the final grade of each student in a class. It issues a prediction for each student individually, when the expected accuracy of the prediction is sufficient. The algorithm learns online what is the optimal prediction and time to issue a prediction based on past history of students' performance in a course. We derive a confidence estimate for the prediction accuracy and demonstrate the performance of our algorithm on a dataset obtained based on the performance of approximately 700 UCLA undergraduate students who have taken an introductory digital signal processing over the past seven years. We demonstrate that for 85% of the students we can predict with 76% accuracy whether they are going do well or poorly in the class after the fourth course week. Using data obtained from a pilot course, our methodology suggests that it is effective to perform early in-class assessments such as quizzes, which result in timely performance prediction for each student, thereby enabling timely interventions by the instructor (at the student or class level) when necessary.", "title": "" } ]
[ { "docid": "883191185d4671164eb4f12f19eb47f3", "text": "Lustre is a declarative, data-flow language, which is devoted to the specification of synchronous and real-time applications. It ensures efficient code generation and provides formal specification and verification facilities. A graphical tool dedicated to the development of critical embedded systems and often used by industries and professionals is SCADE (Safety Critical Application Development Environment). SCADE is a graphical environment based on the LUSTRE language and it allows the hierarchical definition of the system components and the automatic code generation. This research work is partially concerned with Lutess, a testing environment which automatically transforms formal specifications into test data generators.", "title": "" }, { "docid": "7d1348ad0dbd8f33373e556009d4f83a", "text": "Laryngeal neoplasms represent 2% of all human cancers. They befall mainly the male sex, especially between 50 and 70 years of age, but exceptionally may occur in infancy or extreme old age. Their occurrence has increased considerably inclusively due to progressive population again. The present work aims at establishing a relation between this infirmity and its prognosis in patients submitted to the treatment recommended by Departament of Otolaryngology and Head Neck Surgery of the School of Medicine of São José do Rio Preto. To this effect, by means of karyometric optical microscopy, cell nuclei in the glottic region of 20 individuals, divided into groups according to their tumor stage and time of survival, were evaluated. Following comparation with a control group and statistical analsis, it became possible to verify that the lesser diameter of nuclei is of prognostic value for initial tumors in this region.", "title": "" }, { "docid": "b8d840944817351bb2969a745b55f5c6", "text": ".............................................................................................................................................................. 7 Tiivistelmä .......................................................................................................................................................... 9 List of original papers .................................................................................................................................. 11 Acknowledgements ..................................................................................................................................... 13", "title": "" }, { "docid": "3d56f88bf8053258a12e609129237b19", "text": "Thepresentstudyfocusesontherelationships between entrepreneurial characteristics (achievement orientation, risk taking propensity, locus of control, and networking), e-service business factors (reliability, responsiveness, ease of use, and self-service), governmental support, and the success of e-commerce entrepreneurs. Results confirm that the achievement orientation and locus of control of founders and business emphasis on reliability and ease of use functions of e-service quality are positively related to the success of e-commerce entrepreneurial ventures in Thailand. 
Founder risk taking and networking, e-service responsiveness and self-service, and governmental support are found to be non-significant.", "title": "" }, { "docid": "33cab03ab9773efe22ba07dd461811ef", "text": "This paper describes a real-time feature-based stereo SLAM system that is robust and accurate in a wide variety of conditions –indoors, outdoors, with dynamic objects, changing light conditions, fast robot motions and large-scale loops. Our system follows a parallel-tracking-and-mapping strategy: a tracking thread estimates the camera pose at frame rate; and a mapping thread updates a keyframe-based map at a lower frequency. The stereo constraints of our system allow a robust initialization –avoiding the well-known bootstrapping problem in monocular systems– and the recovery of the real scale. Both aspects are essential for its practical use in real robotic systems that interact with the physical world. In this paper we provide the implementation details, an exhaustive evaluation of the system in public datasets and a comparison of most state-of-the-art feature detectors and descriptors on the presented system. For the benefit of the community, its code for ROS (Robot Operating System) has been released.", "title": "" }, { "docid": "95ead545f73f70398291bdf9e2b5b104", "text": "Diffusion-based classifiers such as those relying on the Personalized PageRank and the heat kernel enjoy remarkable classification accuracy at modest computational requirements. Their performance however is affected by the extent to which the chosen diffusion captures a typically unknown label propagation mechanism, which can be specific to the underlying graph, and potentially different for each class. This paper introduces a disciplined, data-efficient approach to learning class-specific diffusion functions adapted to the underlying network topology. The novel learning approach leverages the notion of “landing probabilities” of class-specific random walks, which can be computed efficiently, thereby ensuring scalability to large graphs. This is supported by rigorous analysis of the properties of the model as well as the proposed algorithms. Furthermore, a robust version of the classifier facilitates learning even in noisy environments. Classification tests on real networks demonstrate that adapting the diffusion function to the given graph and observed labels significantly improves the performance over fixed diffusions, reaching—and many times surpassing—the classification accuracy of computationally heavier state-of-the-art competing methods, which rely on node embeddings and deep neural networks.", "title": "" }, { "docid": "02de9e47c4cba04cc2795af68ec449b9", "text": "We explore the performance of latent variable models for conditional text generation in the context of neural machine translation (NMT). Similar to (Zhang et al., 2016), we augment the encoder-decoder NMT paradigm by introducing a continuous latent variable to model features of the translation process. We extend this model with a co-attention mechanism motivated by (Parikh et al., 2016) in the inference network. Compared to the vision domain, latent variable models for text face additional challenges due to the discrete nature of language, namely posterior collapse (Bowman et al., 2015). We experiment with different approaches to mitigate this issue. We show that our conditional variational model improves upon both discriminative attention-based translation and the variational baseline presented in (Zhang et al., 2016). 
Finally, we present some exploration of the learned latent space to illustrate what the latent variable is capable of capturing. This is the first reported conditional variational model for text that meaningfully utilizes the latent variable without weakening the translation model.", "title": "" }, { "docid": "ca5b9cd1634431254e1a454262eecb40", "text": "This paper proposes a convolutional neural network (CNN)-based method that learns traffic as images and predicts large-scale, network-wide traffic speed with a high accuracy. Spatiotemporal traffic dynamics are converted to images describing the time and space relations of traffic flow via a two-dimensional time-space matrix. A CNN is applied to the image following two consecutive steps: abstract traffic feature extraction and network-wide traffic speed prediction. The effectiveness of the proposed method is evaluated by taking two real-world transportation networks, the second ring road and north-east transportation network in Beijing, as examples, and comparing the method with four prevailing algorithms, namely, ordinary least squares, k-nearest neighbors, artificial neural network, and random forest, and three deep learning architectures, namely, stacked autoencoder, recurrent neural network, and long-short-term memory network. The results show that the proposed method outperforms other algorithms by an average accuracy improvement of 42.91% within an acceptable execution time. The CNN can train the model in a reasonable time and, thus, is suitable for large-scale transportation networks.", "title": "" }, { "docid": "4e55d02fdd8ff4c5739cc433f4f15e9b", "text": "machine,\" a program for automatically generating syntactically correct programs (test cases) for checking compiler front ends. The notion of \"dynamic grammar\" is introduced and is used in a syntax-defining notation that provides for context-sensitivity. Examples demonstrate use of the syntax machine. The \"syntax machine\" discussed here automatically generates random test cases for any suitably defined programming language. The test cases it produces are syntactically valid programs. But they are not \"meaningful,\" and if an attempt is made to execute them, the results are unpredictable and uncheckable. For this reason, they are less valuable than handwritten test cases. However, as an inexhaustible source of new test material, the syntax machine has shown itself to be a valuable tool. In the following sections, we characterize the use of this tool in testing different types of language processors, introduce the concept of \"dynamic grammar\" of a programming language, outline the structure of the system, and show what the syntax machine does by means of some examples. Test cases: Test cases for a language processor are programs written following the rules of the language, as documented. The test cases, when processed, should give known results. If this does not happen, then either the processor or its documentation is in error. We can distinguish three categories of language processors and assess the usefulness of the syntax machine for testing them. For an interpreter, the syntax machine test cases are virtually useless,", "title": "" }, { "docid": "eed5c66d0302c492f2480a888678d1dc", "text": "In 1988 Kennedy and Chua introduced the dynamical canonical nonlinear programming circuit (NPC) to solve in real time nonlinear programming problems where the objective function and the constraints are smooth (twice continuously differentiable) functions. 
In this paper, a generalized circuit is introduced (G-NPC), which is aimed at solving in real time a much wider class of nonsmooth nonlinear programming problems where the objective function and the constraints are assumed to satisfy only the weak condition of being regular functions. G-NPC, which derives from a natural extension of NPC, has a neural-like architecture and also features the presence of constraint neurons modeled by ideal diodes with infinite slope in the conducting region. By using the Clarke's generalized gradient of the involved functions, G-NPC is shown to obey a gradient system of differential inclusions, and its dynamical behavior and optimization capabilities, both for convex and nonconvex problems, are rigorously analyzed in the framework of nonsmooth analysis and the theory of differential inclusions. In the special important case of linear and quadratic programming problems, salient dynamical features of G-NPC, namely the presence of sliding modes , trajectory convergence in finite time, and the ability to compute the exact optimal solution of the problem being modeled, are uncovered and explained in the developed analytical framework.", "title": "" }, { "docid": "10e88f0d1a339c424f7e0b8fa5b43c1e", "text": "Hash functions play an important role in modern cryptography. This paper investigates optimisation techniques that have recently been proposed in the literature. A new VLSI architecture for the SHA-256 and SHA-512 hash functions is presented, which combines two popular hardware optimisation techniques, namely pipelining and unrolling. The SHA processors are developed for implementation on FPGAs, thereby allowing rapid prototyping of several designs. Speed/area results from these processors are analysed and are shown to compare favourably with other FPGA-based implementations, achieving the fastest data throughputs in the literature to date", "title": "" }, { "docid": "1cf3ee00f638ca44a3b9772a2df60585", "text": "Navigation has been a popular area of research in both academia and industry. Combined with maps, and different localization technologies, navigation systems have become robust and more usable. By combining navigation with augmented reality, it can be improved further to become realistic and user friendly. This paper surveys existing researches carried out in this area, describes existing techniques for building augmented reality navigation systems, and the problems faced.", "title": "" }, { "docid": "07575ce75d921d6af72674e1fe563ff7", "text": "With a growing body of literature linking systems of high-performance work practices to organizational performance outcomes, recent research has pushed for examinations of the underlying mechanisms that enable this connection. In this study, based on a large sample of Welsh public-sector employees, we explored the role of several individual-level attitudinal factors--job satisfaction, organizational commitment, and psychological empowerment--as well as organizational citizenship behaviors that have the potential to provide insights into how human resource systems influence the performance of organizational units. The results support a unit-level path model, such that department-level, high-performance work system utilization is associated with enhanced levels of job satisfaction, organizational commitment, and psychological empowerment. 
In turn, these attitudinal variables were found to be positively linked to enhanced organizational citizenship behaviors, which are further related to a second-order construct measuring departmental performance.", "title": "" }, { "docid": "fed5b83e2e35a3a5e2c8df38d96be981", "text": "The identification of patient subgroups with differential treatment effects is the first step towards individualised treatments. A current draft guideline by the EMA discusses potentials and problems in subgroup analyses and formulated challenges to the development of appropriate statistical procedures for the data-driven identification of patient subgroups. We introduce model-based recursive partitioning as a procedure for the automated detection of patient subgroups that are identifiable by predictive factors. The method starts with a model for the overall treatment effect as defined for the primary analysis in the study protocol and uses measures for detecting parameter instabilities in this treatment effect. The procedure produces a segmented model with differential treatment parameters corresponding to each patient subgroup. The subgroups are linked to predictive factors by means of a decision tree. The method is applied to the search for subgroups of patients suffering from amyotrophic lateral sclerosis that differ with respect to their Riluzole treatment effect, the only currently approved drug for this disease.", "title": "" }, { "docid": "72a283eda92eb25404536308d8909999", "text": "This paper presents a 128.7nW analog front-end amplifier and Gm-C filter for biomedical sensing applications, specifically for Electroencephalogram (EEG) use. The proposed neural amplifier has a supply voltage of 1.8V, consumes a total current of 71.59nA, for a total dissipated power of 128nW and has a gain of 40dB. Also, a 3th order Butterworth Low Pass Gm-C Filter with a 14.7nS transconductor is designed and presented. The filter has a pass band suitable for use in EEG (1-100Hz). The amplifier and filter utilize current sources without resistance which provide 56nA and (1.154nA ×5) respectively. The proposed amplifier occupies and area of 0.26mm2 in 0.3μm TSMC process.", "title": "" }, { "docid": "8b6d3b5fb8af809619119ee0f75cb3c6", "text": "This paper mainly discusses how to use histogram projection and LBDM (Learning Based Digital Matting) to extract a tongue from a medical image, which is one of the most important steps in diagnosis of traditional Chinese Medicine. We firstly present an effective method to locate the tongue body, getting the convinced foreground and background area in form of trimap. Then, use this trimap as the input for LBDM algorithm to implement the final segmentation. Experiment was carried out to evaluate the proposed scheme, using 480 samples of pictures with tongue, the results of which were compared with the corresponding ground truth. Experimental results and analysis demonstrated the feasibility and effectiveness of the proposed algorithm.", "title": "" }, { "docid": "bb0ac3d88646bf94710a4452ddf50e51", "text": "Everyday knowledge about living things, physical objects and the beliefs and desires of other people appears to be organized into sophisticated systems that are often called intuitive theories. Two long term goals for psychological research are to understand how these theories are mentally represented and how they are acquired. We argue that the language of thought hypothesis can help to address both questions. 
First, compositional languages can capture the content of intuitive theories. Second, any compositional language will generate an account of theory learning which predicts that theories with short descriptions tend to be preferred. We describe a computational framework that captures both ideas, and compare its predictions to behavioral data from a simple theory learning task. Any comprehensive account of human knowledge must acknowledge two principles. First, everyday knowledge is more than a list of isolated facts, and much of it appears to be organized into richly structured systems that are sometimes called intuitive theories. Even young children, for instance, have systematic beliefs about domains including folk physics, folk biology, and folk psychology [10]. Second, some aspects of these theories appear to be learned. Developmental psychologists have explored how intuitive theories emerge over the first decade of life, and at least some of these changes appear to result from learning. Although theory learning raises some challenging problems, two computational principles that may support this ability have been known for many years. First, a theory-learning system must be able to represent the content of any theory that it acquires. A learner that cannot represent a given system of concepts is clearly unable to learn this system from data. Second, there will always be many systems of concepts that are compatible with any given data set, and a learner must rely on some a priori ordering of the set of possible theories to decide which candidate is best [5, 9]. Loosely speaking, this ordering can be identified with a simplicity measure, or a prior distribution over the space of possible theories. There is at least one natural way to connect these two computational principles. Suppose that intuitive theories are represented in a “language of thought:” a language that allows complex concepts to be represented as combinations of simpler concepts [5]. A compositional language provides a straightforward way to construct sophisticated theories, but also provides a natural ordering over the resulting space of theories: the a priori probability of a theory can be identified with its length in this representation language [3, 7]. Combining this prior distribution with an engine for Bayesian inference leads immediately to a computational account of theory learning. There may be other ways to explain how people represent and acquire complex systems of knowledge, but it is striking that the “language of thought” hypothesis can address both questions. This paper describes a computational framework that helps to explain how theories are acquired, and that can be used to evaluate different proposals about the language of thought. Our approach builds on previous discussions of concept learning that have explored the link between compositional representations and inductive inference. Two recent approaches propose that concepts are represented in a form of propositional logic, and that the a priori plausibility of an inductive hypothesis is related to the length of its representation in this language [4, 6]. Our approach is similar in spirit, but is motivated in part by the need for languages richer than propositional logic. The framework we present is extremely general, and is compatible with virtually any representation language, including various forms of predicate logic. 
Methods for learning theories expressed in predicate logic have previously been explored in the field of Inductive Logic Programming, and we recently proposed a theory-learning model that is inspired by this tradition [7]. Our current approach is motivated by similar goals, but is better able to account for the discovery of abstract theoretical laws. The next section describes our computational framework and introduces the specific logical language that we will consider throughout. Our framework allows relatively sophisticated theories to be represented and learned, but we evaluate it here by applying it to a simple learning problem and comparing its predictions with human inductive inferences. A Bayesian approach to theory discovery Suppose that a learner observes some of the relationships that hold among a fixed, finite set of entities, and wishes to discover a theory that accounts for these data. Suppose, for instance, that the entities are thirteen adults from a remote tribe (a through m), and that the data specify that the spouse relation (S(·, ·)) is true of some pairs (Figure 1). One candidate theory states that S(·, ·) is a symmetric relation, that some of the individuals are male (M(·)), that marriages are permitted only between males and non-males, and that males may take multiple spouses but non-males may have only one spouse (Figure 1b). Other theories are possible, including the theory which states only that S(·, ·) is symmetric. Accounts of theory learning should distinguish between at least three kinds of entities: theories, models, and data. A theory is a set of statements that captures constraints on possible configurations of the world. For instance, the theory in Figure 1b rules out configurations where the spouse relation is asymmetric. A model of a theory specifies the extension", "title": "" }, { "docid": "1c05027fba55d64070cb3ff698b9c253", "text": "The advancement of the World Wide Web has resulted in the creation of a new form of retail transactionselectronic retailing (e-tailing) or web-shopping. Thus, customers’ involvements in online purchasing have become an important trend. As such, it is vital to identify the determinants of the customer online purchase intention. The aim of this research is to evaluate the impacts of shopping orientations, online trust and prior online purchase experience to the customer online purchase intention. A total of 242 undergraduate information technology students from a private university in Malaysia participated in this research. The findings revealed that impulse purchase intention, quality orientation, brand orientation, online trust and prior online purchase experience were positively related to the customer online purchase intention.", "title": "" }, { "docid": "a862bcbf9addb965b9f05ed4ba6ace07", "text": "Delivery of electroporation pulses in electroporation-based treatments could potentially induce heartrelated effects. The objective of our work was to develop a software tool for electrocardiogram (ECG) analysis to facilitate detection of such effects in pre-selected ECGor heart rate variability (HRV) parameters. Our software tool consists of five distinct modules for: (i) preprocessing; (ii) learning; (iii) detection and classification; (iv) selection and verification; and (v) ECG and HRV analysis. 
Its key features are: automated selection of ECG segments from ECG signal according to specific user-defined requirements (e.g., selection of relatively noise-free ECG segments); automated detection of prominent heartbeat features, such as Q, R and T wave peak; automated classification of individual heartbeat as normal or abnormal; displaying of heartbeat annotations; quick manual screening of analyzed ECG signal; and manual correction of annotation and classification errors. The performance of the detection and classification module was evaluated on 19 two-hour-long ECG records from Long-Term ST database. On average, the QRS detection algorithm had high sensitivity (99.78%), high positive predictivity (99.98%) and low detection error rate (0.35%). The classification algorithm correctly classified 99.45% of all normal QRS complexes. For normal heartbeats, the positive predictivity of 99.99% and classification error rate of 0.01% were achieved. The software tool provides for reliable and effective detection and classification of heartbeats and for calculation of ECG and HRV parameters. It will be used to clarify the issues concerning patient safety during the electroporation-based treatments used in clinical practice. Preventing the electroporation pulses from interfering with the heart is becoming increasingly important because new applications of electroporation-based treatments are being developed which are using endoscopic, percutaneous or surgical means to access internal tumors or tissues and in which the target tissue can be located in immediate vicinity to the heart.", "title": "" }, { "docid": "8cac4d9b14b0e2918a52f3e71cc440bd", "text": "Cyber-Physical Systems refer to systems that have an interaction between computers, communication channels and physical devices to solve a real-world problem. Towards industry 4.0 revolution, Cyber-Physical Systems currently become one of the main targets of hackers and any damage to them lead to high losses to a nation. According to valid resources, several cases reported involved security breaches on Cyber-Physical Systems. Understanding fundamental and theoretical concept of security in the digital world was discussed worldwide. Yet, security cases in regard to the cyber-physical system are still remaining less explored. In addition, limited tools were introduced to overcome security problems in Cyber-Physical System. To improve understanding and introduce a lot more security solutions for the cyber-physical system, the study on this matter is highly on demand. In this paper, we investigate the current threats on Cyber-Physical Systems and propose a classification and matrix for these threats, and conduct a simple statistical analysis of the collected data using a quantitative approach. We confirmed four components i.e., (the type of attack, impact, intention and incident categories) main contributor to threat taxonomy of Cyber-Physical Systems. Keywords—Cyber-Physical Systems; threats; incidents; security; cybersecurity; taxonomies; matrix; threats analysis", "title": "" } ]
scidocsrr
bb090e623e20242028023fecb3d439eb
Deep Learning with Nonparametric Clustering
[ { "docid": "11ce5da16cf0c0c6cfb85e0d0bbdc13e", "text": "Recently, fully-connected and convolutional neural networks have been trained to reach state-of-the-art performance on a wide variety of tasks such as speech recognition, image classification, natural language processing, and bioinformatics data. For classification tasks, much of these “deep learning” models employ the softmax activation functions to learn output labels in 1-of-K format. In this paper, we demonstrate a small but consistent advantage of replacing softmax layer with a linear support vector machine. Learning minimizes a margin-based loss instead of the cross-entropy loss. In almost all of the previous works, hidden representation of deep networks are first learned using supervised or unsupervised techniques, and then are fed into SVMs as inputs. In contrast to those models, we are proposing to train all layers of the deep networks by backpropagating gradients through the top level SVM, learning features of all layers. Our experiments show that simply replacing softmax with linear SVMs gives significant gains on datasets MNIST, CIFAR-10, and the ICML 2013 Representation Learning Workshop’s face expression recognition challenge.", "title": "" }, { "docid": "e8a78557974794594acb1f0cafb93be4", "text": "In a Bayesian mixture model it is not necessary a priori to limit the number of components to be finite. In this paper an infinite Gaussian mixture model is presented which neatly sidesteps the difficult problem of finding the “right” number of mixture components. Inference in the model is done using an efficient parameter-free Markov Chain that relies entirely on Gibbs sampling.", "title": "" }, { "docid": "693e935d405b255ac86b8a9f5e7852a3", "text": "Recent developments have demonstrated the capacity of rest rict d Boltzmann machines (RBM) to be powerful generative models, able to extract useful featu r s from input data or construct deep artificial neural networks. In such settings, the RBM only yields a preprocessing or an initialization for some other model, instead of acting as a complete supervised model in its own right. In this paper, we argue that RBMs can provide a self-contained framework fo r developing competitive classifiers. We study the Classification RBM (ClassRBM), a variant on the R BM adapted to the classification setting. We study different strategies for training the Cla ssRBM and show that competitive classification performances can be reached when appropriately com bining discriminative and generative training objectives. Since training according to the gener ative objective requires the computation of a generally intractable gradient, we also compare differen t approaches to estimating this gradient and address the issue of obtaining such a gradient for proble ms with very high dimensional inputs. Finally, we describe how to adapt the ClassRBM to two special cases of classification problems, namely semi-supervised and multitask learning.", "title": "" }, { "docid": "c17e6363762e0e9683b51c0704d43fa7", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" } ]
[ { "docid": "232d7e7986de374499c8ca580d055729", "text": "In this paper we provide a survey of recent contributions to robust portfolio strategies from operations research and finance to the theory of portfolio selection. Our survey covers results derived not only in terms of the standard mean-variance objective, but also in terms of two of the most popular risk measures, mean-VaR and mean-CVaR developed recently. In addition, we review optimal estimation methods and Bayesian robust approaches.", "title": "" }, { "docid": "f3dcf620edb77a199b2ad9d2410cc858", "text": "As the amount of digital data grows, so does the theft of sensitive data through the loss or misplacement of laptops, thumb drives, external hard drives, and other electronic storage media. Sensitive data may also be leaked accidentally due to improper disposal or resale of storage media. To protect the secrecy of the entire data lifetime, we must have confidential ways to store and delete data. This survey summarizes and compares existing methods of providing confidential storage and deletion of data in personal computing environments.", "title": "" }, { "docid": "ec377000353bce311c0887cd4edab554", "text": "This paper explains various security issues in the existing home automation systems and proposes the use of logic-based security algorithms to improve home security. This paper classifies natural access points to a home as primary and secondary access points depending on their use. Logic-based sensing is implemented by identifying normal user behavior at these access points and requesting user verification when necessary. User position is also considered when various access points changed states. Moreover, the algorithm also verifies the legitimacy of a fire alarm by measuring the change in temperature, humidity, and carbon monoxide levels, thus defending against manipulative attackers. The experiment conducted in this paper used a combination of sensors, microcontrollers, Raspberry Pi and ZigBee communication to identify user behavior at various access points and implement the logical sensing algorithm. In the experiment, the proposed logical sensing algorithm was successfully implemented for a month in a studio apartment. During the course of the experiment, the algorithm was able to detect all the state changes of the primary and secondary access points and also successfully verified user identity 55 times generating 14 warnings and 5 alarms.", "title": "" }, { "docid": "55b967cd6d28082ba0fa27605f161060", "text": "Background. A scheme for format-preserving encryption (FPE) is supposed to do that which a conventional (possibly tweakable) blockcipher does—encipher messages within some message space X—except that message space, instead of being something like X = {0, 1}128, is more gen­ eral [1, 3]. For example, the message space might be the set X = {0, 1, . . . , 9}16, in which case each 16-digit plaintext X ∈ X gets enciphered into a 16-digit ciphertext Y ∈ X . In a stringbased FPE scheme—the only type of FPE that we consider here—the message space is of the form n X = {0, 1, . . . , radix − 1} for some message length n and alphabet size radix.", "title": "" }, { "docid": "4edb9dea1e949148598279c0111c4531", "text": "This paper presents a design of highly effective triple band microstrip antenna for wireless communication applications. The triple band design is a metamaterial-based design for WLAN and WiMAX (2.4/3.5/5.6 GHz) applications. 
The triple band response is obtained by etching two circular and one rectangular split ring resonator (SRR) unit cells on the ground plane of a conventional patch operating at 3.56 GHz. The circular cells are introduced to resonate at 5.3 GHz for the upper WiMAX band, while the rectangular cell is designed to resonate at 2.45 GHz for the lower WLAN band. Furthermore, a novel complementary H-shaped unit cell oriented above the triple band antenna is proposed. The proposed H-shaped is being used as a lens to significantly increase the antenna gain. To investigate the left-handed behavior of the proposed H-shaped, extensive parametric study for the placement of each unit cell including the metamaterial lens, which is the main parameter affecting the antenna performance, is presented and discussed comprehensively. Good consistency between the measured and simulated results is achieved. The proposed antenna meets the requirements of WiMAX and WLAN standards with high peak realized gain.", "title": "" }, { "docid": "6544cffbaf9cc0c6c12991c2acbe2dd5", "text": "The aim of this updated statement is to provide comprehensive and timely evidence-based recommendations on the prevention of ischemic stroke among survivors of ischemic stroke or transient ischemic attack. Evidence-based recommendations are included for the control of risk factors, interventional approaches for atherosclerotic disease, antithrombotic treatments for cardioembolism, and the use of antiplatelet agents for noncardioembolic stroke. Further recommendations are provided for the prevention of recurrent stroke in a variety of other specific circumstances, including arterial dissections; patent foramen ovale; hyperhomocysteinemia; hypercoagulable states; sickle cell disease; cerebral venous sinus thrombosis; stroke among women, particularly with regard to pregnancy and the use of postmenopausal hormones; the use of anticoagulation after cerebral hemorrhage; and special approaches to the implementation of guidelines and their use in high-risk populations.", "title": "" }, { "docid": "1ea2074181341aaa112a678d75ec5de7", "text": "5 Evacuation planning and scheduling is a critical aspect of disaster management and national security applications. This paper proposes a conflict-based path-generation approach for evacuation planning. Its key idea is to decompose the evacuation planning problem into a master and a subproblem. The subproblem generates new evacuation paths for each evacuated area, while the master problem optimizes the flow of evacuees and produce an evacuation plan. Each new path is generated to remedy conflicts in the evacuation flows and adds new columns and a new row in the master problem. The algorithm is applied to a set of large-scale evacuation scenarios ranging from the Hawkesbury-Nepean flood plain (West Sydney, Australia) which require evacuating in the order of 70,000 persons, to the New Orleans metropolitan area and its 1,000,000 residents. Experiments illustrate the scalability of the approach which is able to produce evacuation for scenarios with more than 1,200 nodes, while a direct Mixed Integer Programming formulation becomes intractable for instances with more than 5 nodes. 
With this approach, realistic evacuations scenarios can be solved near-optimally in reasonable time, supporting both evacuation planning in strategic, tactical, and operational environments.", "title": "" }, { "docid": "3ac230304ab65efa3c31b10dc0dffa4d", "text": "Current networking integrates common \"Things\" to the Web, creating the Internet of Things (IoT). The considerable number of heterogeneous Things that can be part of an IoT network demands an efficient management of resources. With the advent of Fog computing, some IoT management tasks can be distributed toward the edge of the constrained networks, closer to physical devices. Blockchain protocols hosted on Fog networks can handle IoT management tasks such as communication, storage, and authentication. This research goes beyond the current definition of Things and presents the Internet of \"Smart Things.\" Smart Things are provisioned with Artificial Intelligence (AI) features based on CLIPS programming language to become self-inferenceable and self-monitorable. This work uses the permission-based blockchain protocol Multichain to communicate many Smart Things by reading and writing blocks of information. This paper evaluates Smart Things deployed on Edison Arduino boards. Also, this work evaluates Multichain hosted on a Fog network.", "title": "" }, { "docid": "976507b0b89c2202ab603ccedae253f5", "text": "We present a natural language generator based on the sequence-to-sequence approach that can be trained to produce natural language strings as well as deep syntax dependency trees from input dialogue acts, and we use it to directly compare two-step generation with separate sentence planning and surface realization stages to a joint, one-step approach. We were able to train both setups successfully using very little training data. The joint setup offers better performance, surpassing state-of-the-art with regards to ngram-based scores while providing more relevant outputs.", "title": "" }, { "docid": "0105247ab487c2d06f3ffa0d00d4b4f9", "text": "Many distributed storage systems achieve high data access throughput via partitioning and replication, each system with its own advantages and tradeoffs. In order to achieve high scalability, however, today's systems generally reduce transactional support, disallowing single transactions from spanning multiple partitions. Calvin is a practical transaction scheduling and data replication layer that uses a deterministic ordering guarantee to significantly reduce the normally prohibitive contention costs associated with distributed transactions. Unlike previous deterministic database system prototypes, Calvin supports disk-based storage, scales near-linearly on a cluster of commodity machines, and has no single point of failure. By replicating transaction inputs rather than effects, Calvin is also able to support multiple consistency levels---including Paxos-based strong consistency across geographically distant replicas---at no cost to transactional throughput.", "title": "" }, { "docid": "ac34478a54d67abce7c892e058295e63", "text": "The popularity of the term \"integrated curriculum\" has grown immensely in medical education over the last two decades, but what does this term mean and how do we go about its design, implementation, and evaluation? Definitions and application of the term vary greatly in the literature, spanning from the integration of content within a single lecture to the integration of a medical school's comprehensive curriculum. 
Taking into account the integrated curriculum's historic and evolving base of knowledge and theory, its support from many national medical education organizations, and the ever-increasing body of published examples, we deem it necessary to present a guide to review and promote further development of the integrated curriculum movement in medical education with an international perspective. We introduce the history and theory behind integration and provide theoretical models alongside published examples of common variations of an integrated curriculum. In addition, we identify three areas of particular need when developing an ideal integrated curriculum, leading us to propose the use of a new, clarified definition of \"integrated curriculum\", and offer a review of strategies to evaluate the impact of an integrated curriculum on the learner. This Guide is presented to assist educators in the design, implementation, and evaluation of a thoroughly integrated medical school curriculum.", "title": "" }, { "docid": "d529d1052fce64ae05fbc64d2b0450ab", "text": "Today, many industrial companies must face problems raised by maintenance. In particular, the anomaly detection problem is probably one of the most challenging. In this paper we focus on the railway maintenance task and propose to automatically detect anomalies in order to predict in advance potential failures. We first address the problem of characterizing normal behavior. In order to extract interesting patterns, we have developed a method to take into account the contextual criteria associated to railway data (itinerary, weather conditions, etc.). We then measure the compliance of new data, according to extracted knowledge, and provide information about the seriousness and the exact localization of a detected anomaly. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "70c6da9da15ad40b4f64386b890ccf51", "text": "In this paper, we describe a positioning control for a SCARA robot using a recurrent neural network. The simultaneous perturbation optimization method is used for the learning rule of the recurrent neural network. Then the recurrent neural network learns inverse dynamics of the SCARA robot. We present details of the control scheme using the simultaneous perturbation. Moreover, we consider an example for two target positions using an actual SCARA robot. The result is shown.", "title": "" }, { "docid": "c17e6363762e0e9683b51c0704d43fa7", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "362b0fc349c827316116a620da34ac91", "text": "Identifying and correcting grammatical errors in the text written by non-native writers have received increasing attention in recent years. Although a number of annotated corpora have been established to facilitate data-driven grammatical error detection and correction approaches, they are still limited in terms of quantity and coverage because human annotation is labor-intensive, time-consuming, and expensive. In this work, we propose to utilize unlabeled data to train neural network based grammatical error detection models. 
The basic idea is to cast error detection as a binary classification problem and derive positive and negative training examples from unlabeled data. We introduce an attention-based neural network to capture long-distance dependencies that influence the word being detected. Experiments show that the proposed approach significantly outperforms SVM and convolutional networks with fixed-size context window.", "title": "" }, { "docid": "20d02454fd850d8a7e05123a1769d44b", "text": "We describe the extension and objective evaluation of a network of semantically related noun senses (or concepts) that has been automatically acquired by analyzing lexical cooccurrence in Wikipedia. The acquisition process makes no use of the metadata or links that have been manually built into the encyclopedia, and nouns in the network are automatically disambiguated to their corresponding noun senses without supervision. For this task, we use the noun sense inventory of WordNet 3.0. Thus, this work can be conceived of as augmenting the WordNet noun ontology with unweighted, undirected relatedto edges between synsets. Our network contains 208,832 such edges. We evaluate our network’s performance on a word sense disambiguation (WSD) task and show: a) the network is competitive with WordNet when used as a stand-alone knowledge source for two WSD algorithms; b) combining our network with WordNet achieves disambiguation results that exceed the performance of either resource individually; and c) our network outperforms a similar resource that has been automatically derived from semantic annotations in the Wikipedia corpus.", "title": "" }, { "docid": "4be5f35876daebc0c00528bede15b66c", "text": "Information Extraction (IE) is concerned with mining factual structures from unstructured text data, including entity and relation extraction. For example, identifying Donald Trump as “person” and Washington D.C. as “location”, and understand the relationship between them (say, Donald Trump spoke at Washington D.C.), from a specific sentence. Typically, IE systems rely on large amount of training data, primarily acquired via human annotation, to achieve the best performance. But since human annotation is costly and non-scalable, the focus has shifted to adoption of a new strategy Distant Supervision [1]. Distant supervision is a technique that can automatically extract labeled training data from existing knowledge bases without human efforts. However the training data generated by distant supervision is context-agnostic and can be very noisy. Moreover, we also observe the difference between the quality of training examples in terms of to what extent it infers the target entity/relation type. In this project, we focus on removing the noise and identifying the quality difference in the training data generated by distant supervision, by leveraging the feedback signals from one of IE’s downstream applications, QA, to improve the performance of one of the state-of-the-art IE framework, CoType [3]. Keywords—Data Mining, Relation Extraction, Question Answering.", "title": "" }, { "docid": "158b554ee5aedcbee9136dcde010dc30", "text": "In this paper, we propose a novel progressive parameter pruning method for Convolutional Neural Network acceleration, named Structured Probabilistic Pruning (SPP), which effectively prunes weights of convolutional layers in a probabilistic manner. 
Unlike existing deterministic pruning approaches, where unimportant weights are permanently eliminated, SPP introduces a pruning probability for each weight, and pruning is guided by sampling from the pruning probabilities. A mechanism is designed to increase and decrease pruning probabilities based on importance criteria in the training process. Experiments show that, with 4× speedup, SPP can accelerate AlexNet with only 0.3% loss of top-5 accuracy and VGG-16 with 0.8% loss of top-5 accuracy in ImageNet classification. Moreover, SPP can be directly applied to accelerate multi-branch CNN networks, such as ResNet, without specific adaptations. Our 2× speedup ResNet-50 only suffers 0.8% loss of top-5 accuracy on ImageNet. We further show the effectiveness of SPP on transfer learning tasks.", "title": "" }, { "docid": "d1357b2e247d521000169dce16f182ee", "text": "Camera shake or target movement often leads to undesired blur effects in videos captured by a hand-held camera. Despite significant efforts having been devoted to video-deblur research, two major challenges remain: 1) how to model the spatio-temporal characteristics across both the spatial domain (i.e., image plane) and the temporal domain (i.e., neighboring frames) and 2) how to restore sharp image details with respect to the conventionally adopted metric of pixel-wise errors. In this paper, to address the first challenge, we propose a deblurring network (DBLRNet) for spatial-temporal learning by applying a 3D convolution to both the spatial and temporal domains. Our DBLRNet is able to capture jointly spatial and temporal information encoded in neighboring frames, which directly contributes to the improved video deblur performance. To tackle the second challenge, we leverage the developed DBLRNet as a generator in the generative adversarial network (GAN) architecture and employ a content loss in addition to an adversarial loss for efficient adversarial training. The developed network, which we name as deblurring GAN, is tested on two standard benchmarks and achieves the state-of-the-art performance.", "title": "" }, { "docid": "88dd795c6d1fa37c13fbf086c0eb0e37", "text": "We propose a method to obtain a complete and accurate 3D model from multiview images captured under a variety of unknown illuminations. Based on recent results showing that for Lambertian objects, general illumination can be approximated well using low-order spherical harmonics, we develop a robust alternating approach to recover surface normals. Surface normals are initialized using a multi-illumination multiview stereo algorithm, then refined using a robust alternating optimization method based on the ℓ1 metric. Erroneous normal estimates are detected using a shape prior. Finally, the computed normals are used to improve the preliminary 3D model. The reconstruction system achieves watertight and robust 3D reconstruction while neither requiring manual interactions nor imposing any constraints on the illumination. Experimental results on both real world and synthetic data show that the technique can acquire accurate 3D models for Lambertian surfaces, and even tolerates small violations of the Lambertian assumption.", "title": "" } ]
scidocsrr
dd286bccd8bf96ab971a1e75d8a650d0
New variants of ABCA12 in harlequin ichthyosis baby
[ { "docid": "e9d79ece14c21fcf859e53a1e730a217", "text": "ABCA12: adenosine triphosphate binding cassette A12 HI: harlequin ichthyosis NICU: neonatal intensive care unit INTRODUCTION Harlequin ichthyosis (HI) is a rare autosomal recessive congenital ichthyosis associated with mutations in the keratinocyte lipid transporter adenosine triphosphate binding cassette A12 (ABCA12), leading to disruption in lipid and protease transport into lamellar granules in the granular layer of the epidermis. Subsequent defective desquamation with compensatory hyperkeratinization follows. Historically, there has been a high early mortality rate in infants with HI; however, improved neonatal management and the early introduction of systemic retinoids may contribute to improved prognosis. Death in these patients is most commonly caused by sepsis, respiratory failure, or electrolyte imbalances. We report a case of a neonate with HI treated in the first few days of life with acitretin. The patient initially improved but eventually died of pseudomonas sepsis at 6 weeks of age.", "title": "" } ]
[ { "docid": "a90a20f66d3e73947fbc28dc60bcee24", "text": "It is well known that the performance of speech recognition algorithms degrade in the presence of adverse environments where a speaker is under stress, emotion, or Lombard effect. This study evaluates the effectiveness of traditional features in recognition of speech under stress and formulates new features which are shown to improve stressed speech recognition. The focus is on formulating robust features which are less dependent on the speaking conditions rather than applying compensation or adaptation techniques. The stressed speaking styles considered are simulated angry and loud, Lombard effect speech, and noisy actual stressed speech from the SUSAS database which is available on CD-ROM through the NATO IST/TG-01 research group and LDC1 . In addition, this study investigates the immunity of linear prediction power spectrum and fast Fourier transform power spectrum to the presence of stress. Our results show that unlike fast Fourier transform’s (FFT) immunity to noise, the linear prediction power spectrum is more immune than FFT to stress as well as to a combination of a noisy and stressful environment. Finally, the effect of various parameter processing such as fixed versus variable preemphasis, liftering, and fixed versus cepstral mean normalization are studied. Two alternative frequency partitioning methods are proposed and compared with traditional mel-frequency cepstral coefficients (MFCC) features for stressed speech recognition. It is shown that the alternate filterbank frequency partitions are more effective for recognition of speech under both simulated and actual stressed conditions.", "title": "" }, { "docid": "a5b147f5b3da39fed9ed11026f5974a2", "text": "The aperture coupled patch geometry has been extended to dual polarization by several authors. In Tsao et al. (1988) a cross-shaped slot is fed by a balanced feed network which allows for a high degree of isolation. However, the balanced feed calls for an air-bridge which complicates both the design process and the manufacture. An alleviation to this problem is to separate the two channels onto two different substrate layers separated by the ground plane. In this case the disadvantage is increased cost. Another solution with a single layer feed is presented in Brachat and Baracco (1995) where one channel feeds a single slot centered under the patch whereas the other channel feeds two separate slots placed near the edges of the patch. Our experience is that with this geometry it is hard to achieve a well-matched broadband design since the slots near the edge of the patch present very low coupling. All the above geometries maintain symmetry with respect to the two principal planes if we ignore the small spurious coupling from feed lines in the vicinity of the aperture. We propose to reduce the symmetry to only one principal plane which turns out to be sufficient for high isolation and low cross-polarization. The advantage is that only one layer of feed network is needed, with no air-bridges required. In addition the aperture position is centered under the patch. An important application for dual polarized antennas is base station antennas. 
We have therefore designed and measured an element for the PCS band (1.85-1.99 GHz).", "title": "" }, { "docid": "134ecc62958fa9bb930ff934c5fad7a3", "text": "We extend our methods from [24] to reprove the Local Langlands Correspondence for GLn over p-adic fields as well as the existence of `-adic Galois representations attached to (most) regular algebraic conjugate self-dual cuspidal automorphic representations, for which we prove a local-global compatibility statement as in the book of Harris-Taylor, [10]. In contrast to the proofs of the Local Langlands Correspondence given by Henniart, [13], and Harris-Taylor, [10], our proof completely by-passes the numerical Local Langlands Correspondence of Henniart, [11]. Instead, we make use of a previous result from [24] describing the inertia-invariant nearby cycles in certain regular situations.", "title": "" }, { "docid": "6aaabe17947bc455d940047745ed7962", "text": "In this paper, we want to study how natural and engineered systems could perform complex optimizations with limited computational and communication capabilities. We adopt a continuous-time dynamical system view rooted in early work on optimization and more recently in network protocol design, and merge it with the dynamic view of distributed averaging systems. We obtain a general approach, based on the control system viewpoint, that allows to analyze and design (distributed) optimization systems converging to the solution of given convex optimization problems. The control system viewpoint provides many insights and new directions of research. We apply the framework to a distributed optimal location problem and demonstrate the natural tracking and adaptation capabilities of the system to changing constraints.", "title": "" }, { "docid": "876bbee05b7838f4de218b424d895887", "text": "Although it is commonplace to assume that the type or level of processing during the input of a verbal item determines the representation of that item in memory, which in turn influences later attempts to store, recognize, or recall that item or similar items, it is much less common to assume that the way in which an item is retrieved from memory is also a potent determiner of that item's subsequent representation in memory. Retrieval from memory is often assumed, implicitly or explicitly, as a process analogous to the way in which the contents of a memory location in a computer are read out, that is, as a process that does not, by itself, modify the state of the retrieved item in memory. In my opinion, however, there is ample evidence for a kind of Heisenberg principle with respect to retrieval processes: an item can seldom, if ever, be retrieved from memory without modifying the representation of that item in memory in significant ways. It is both appropriate and productive, I think, to analyze retrieval processes within the same kind of levels-of-processing framework formulated by Craik and Lockhart ( 1972) with respect to input processes; this chapter is an attempt to do so. In the first of the two main sections below, I explore the extent to which negative-recency phenomena in the long-term recall of a list of items is attributable to differences in levels of retrieval during initial recall. In the second section I present some recent results from ex-", "title": "" }, { "docid": "10bc2f9827aa9a53e3ca4b7188bd91c3", "text": "Learning hash functions across heterogenous high-dimensional features is very desirable for many applications involving multi-modal data objects. 
In this paper, we propose an approach to obtain the sparse codesets for the data objects across different modalities via joint multi-modal dictionary learning, which we call sparse multi-modal hashing (abbreviated as SM2H). In SM2H, both intra-modality similarity and inter-modality similarity are first modeled by a hypergraph, then multi-modal dictionaries are jointly learned by Hypergraph Laplacian sparse coding. Based on the learned dictionaries, the sparse codeset of each data object is acquired and conducted for multi-modal approximate nearest neighbor retrieval using a sensitive Jaccard metric. The experimental results show that SM2H outperforms other methods in terms of mAP and Percentage on two real-world data sets.", "title": "" }, { "docid": "bf5cedb076c779157e1c1fbd4df0adc9", "text": "Generating novel graph structures that optimize given objectives while obeying some given underlying rules is fundamental for chemistry, biology and social science research. This is especially important in the task of molecular graph generation, whose goal is to discover novel molecules with desired properties such as drug-likeness and synthetic accessibility, while obeying physical laws such as chemical valency. However, designing models to find molecules that optimize desired properties while incorporating highly complex and non-differentiable rules remains to be a challenging task. Here we propose Graph Convolutional Policy Network (GCPN), a general graph convolutional network based model for goaldirected graph generation through reinforcement learning. The model is trained to optimize domain-specific rewards and adversarial loss through policy gradient, and acts in an environment that incorporates domain-specific rules. Experimental results show that GCPN can achieve 61% improvement on chemical property optimization over state-of-the-art baselines while resembling known molecules, and achieve 184% improvement on the constrained property optimization task.", "title": "" }, { "docid": "7c5f2c92cb3d239674f105a618de99e0", "text": "We consider the isolated spelling error correction problem as a specific subproblem of the more general string-to-string translation problem. In this context, we investigate four general string-to-string transformation models that have been suggested in recent years and apply them within the spelling error correction paradigm. In particular, we investigate how a simple ‘k-best decoding plus dictionary lookup’ strategy performs in this context and find that such an approach can significantly outdo baselines such as edit distance, weighted edit distance, and the noisy channel Brill and Moore model to spelling error correction. We also consider elementary combination techniques for our models such as language model weighted majority voting and center string combination. Finally, we consider real-world OCR post-correction for a dataset sampled from medieval Latin texts.", "title": "" }, { "docid": "a9d136429d3d5b871fa84c3209bd763c", "text": "Portable embedded computing systems require energy autonomy. This is achieved by batteries serving as a dedicated energy source. The requirement of portability places severe restrictions on size and weight, which in turn limits the amount of energy that is continuously available to maintain system operability. 
For these reasons, efficient energy utilization has become one of the key challenges to the designer of battery-powered embedded computing systems.In this paper, we first present a novel analytical battery model, which can be used for the battery lifetime estimation. The high quality of the proposed model is demonstrated with measurements and simulations. Using this battery model, we introduce a new \"battery-aware\" cost function, which will be used for optimizing the lifetime of the battery. This cost function generalizes the traditional minimization metric, namely the energy consumption of the system. We formulate the problem of battery-aware task scheduling on a single processor with multiple voltages. Then, we prove several important mathematical properties of the cost function. Based on these properties, we propose several algorithms for task ordering and voltage assignment, including optimal idle period insertion to exercise charge recovery.This paper presents the first effort toward a formal treatment of battery-aware task scheduling and voltage scaling, based on an accurate analytical model of the battery behavior.", "title": "" }, { "docid": "be7f7d9c6a28b7d15ec381570752de95", "text": "Neural network are most popular in the research community due to its generalization abilities. Additionally, it has been successfully implemented in biometrics, features selection, object tracking, document image preprocessing and classification. This paper specifically, clusters, summarize, interpret and evaluate neural networks in document Image preprocessing. The importance of the learning algorithms in neural networks training and testing for preprocessing is also highlighted. Finally, a critical analysis on the reviewed approaches and the future research guidelines in the field are suggested.", "title": "" }, { "docid": "443f718fdc81e2ff64c1069ad105e601", "text": "With the fast progression of digital data exchange in electronic way, information security is becoming much more important in data storage and transmission. Cryptography has come up as a solution which plays a vital role in information security system against malicious attacks. This security mechanism uses some algorithms to scramble data into unreadable text which can be only being decoded or decrypted by party those possesses the associated key. These algorithms consume a significant amount of computing resources such as CPU time, memory and computation time. In this paper two most widely used symmetric encryption techniques i.e. data encryption standard (DES) and advanced encryption standard (AES) have been implemented using MATLAB software. After the implementation, these techniques are compared on some points. These points are avalanche effect due to one bit variation in plaintext keeping the key constant, avalanche effect due to one bit variation in key keeping the plaintext constant, memory required for implementation and simulation time required for encryption.", "title": "" }, { "docid": "575208e6df214fa4378fa18be48af51d", "text": "A parser based on logic programming language (DCG) has very useful features; perspicuity, power, generality and so on. However, it does have some drawbacks in which it cannot deal with CFG with left recursive rules, for example. To overcome these drawbacks, a Bottom-Up parser embedded in Prolog (BUP) has been developed. In BUP, CFG rules are translated into Prolog clauses which work as a bottom-up left corner parser with top-down expectation. 
BUP is augmented by introducing a “link” relation to reduce the size of a search space. Furthermore, BUP can be revised to maintain partial parsing results to avoid computational duplication. A BUP translator and a BUP tracer which support the development of grammar rules are described.", "title": "" }, { "docid": "0326178ab59983db61eb5dfe0e2b25a4", "text": "Article history: Received 9 September 2008 Received in revised form 16 April 2009 Accepted 14 May 2009", "title": "" }, { "docid": "a1fe2227bc9d6ddeda58ff8d137d660b", "text": "Vulnerability exploits remain an important mechanism for malware delivery, despite efforts to speed up the creation of patches and improvements in software updating mechanisms. Vulnerabilities in client applications (e.g., Browsers, multimedia players, document readers and editors) are often exploited in spear phishing attacks and are difficult to characterize using network vulnerability scanners. Analyzing their lifecycle requires observing the deployment of patches on hosts around the world. Using data collected over 5 years on 8.4 million hosts, available through Symantec's WINE platform, we present the first systematic study of patch deployment in client-side vulnerabilities. We analyze the patch deployment process of 1,593 vulnerabilities from 10 popular client applications, and we identify several new threats presented by multiple installations of the same program and by shared libraries distributed with several applications. For the 80 vulnerabilities in our dataset that affect code shared by two applications, the time between patch releases in the different applications is up to 118 days (with a median of 11 days). Furthermore, as the patching rates differ considerably among applications, many hosts patch the vulnerability in one application but not in the other one. We demonstrate two novel attacks that enable exploitation by invoking old versions of applications that are used infrequently, but remain installed. We also find that the median fraction of vulnerable hosts patched when exploits are released is at most 14%. Finally, we show that the patching rate is affected by user-specific and application-specific factors, for example, hosts belonging to security analysts and applications with an automated updating mechanism have significantly lower median times to patch.", "title": "" }, { "docid": "c3473e7fe7b46628d384cbbe10bfe74c", "text": "STUDY OBJECTIVE\nTo (1) examine the prevalence of abnormal genital findings in a large cohort of female children presenting with concerns of sexual abuse; and (2) explore how children use language when describing genital contact and genital anatomy.\n\n\nDESIGN\nIn this prospective study we documented medical histories and genital findings in all children who met inclusion criteria. Findings were categorized as normal, indeterminate, and diagnostic of trauma. Logistic regression analysis was used to determine the effects of key covariates on predicting diagnostic findings. Children older than 4 years of age were asked questions related to genital anatomy to assess their use of language.\n\n\nSETTING\nA regional, university-affiliated sexual abuse clinic.\n\n\nPARTICIPANTS\nFemale children (N = 1500) aged from birth to 17 years (inclusive) who received an anogenital examination with digital images.\n\n\nINTERVENTIONS AND MAIN OUTCOME MEASURES\nPhysical exam findings, medical history, and the child's use of language were recorded.\n\n\nRESULTS\nPhysical findings were determined in 99% (n = 1491) of patients. 
Diagnostic findings were present in 7% (99 of 1491). After adjusting for age, acuity, and type of sexual contact reported by the adult, the estimated odds of diagnostic findings were 12.5 times higher for children reporting genital penetration compared with those who reported only contact (95% confidence interval, 3.46-45.34). Finally, children used the word \"inside\" to describe contact other than penetration of the vaginal canal (ie, labial penetration).\n\n\nCONCLUSION\nA history of penetration by the child was the primary predictor of diagnostic findings. Interpretation of children's use of \"inside\" might explain the low prevalence of diagnostic findings and warrants further study.", "title": "" }, { "docid": "03977b7bdc0102caf7033012354aa897", "text": "One of the important issues in service organizations is to identify the customers, understanding their difference and ranking them. Recently, the customer value as a quantitative parameter has been used for segmenting customers. A practical solution for analytical development is using analytical techniques such as dynamic clustering algorithms and programs to explore the dynamics in consumer preferences. The aim of this research is to understand the current customer behavior and suggest a suitable policy for new customers in order to attain the highest benefits and customer satisfaction. To identify such market in life insurance customers, We have used the FKM.pf.niose fuzzy clustering technique for classifying the customers based on their demographic and behavioral data of 1071 people in the period April to October 2014. Results show the optimal number of clusters is 3. These three clusters can be named as: investment, security of life and a combination of both. Some suggestions are presented to improve the performance of the insurance company.", "title": "" }, { "docid": "2bfe219ce52a44299178513d88721353", "text": "This paper describes a spatio-temporal model of the human visual system (HVS) for video imaging applications, predicting the response of the neurons of the primary visual cortex. The model simulates the behavior of the HVS with a three-dimensional lter bank which decomposes the data into perceptual channels, each one being tuned to a speciic spatial frequency, orientation and temporal frequency. It further accounts for contrast sensitivity, inter-stimuli masking and spatio-temporal interaction. The free parameters of the model have been estimated by psychophysics. The model can then be used as the basis for many applications. As an example, a quality metric for coded video sequences is presented.", "title": "" }, { "docid": "dd2e81d24584fe0684266217b732d881", "text": "In order to understand the role of titanium isopropoxide (TIPT) catalyst on insulation rejuvenation for water tree aged cables, dielectric properties and micro structure changes are investigated for the rejuvenated cables. Needle-shape defects are made inside cross-linked polyethylene (XLPE) cable samples to form water tree in the XLPE layer. The water tree aged samples are injected by the liquid with phenylmethyldimethoxy silane (PMDMS) catalyzed by TIPT for rejuvenation, and the breakdown voltage of the rejuvenated samples is significantly higher than that of the new samples. By the observation of scanning electronic microscope (SEM), the nano-TiO2 particles are observed inside the breakdown channels of the rejuvenated samples. Accordingly, the insulation performance of rejuvenated samples is significantly enhanced by the nano-TiO2 particles. 
Through analyzing the products of hydrolysis from TIPT, the nano-scale TiO2 particles are observed, and its micro-morphology is consistent with that observed inside the breakdown channels. According to the observation, the insulation enhancement mechanism is described. Therefore, the dielectric property of the rejuvenated cables is improved due to the nano-TiO2 produced by the hydrolysis from TIPT.", "title": "" }, { "docid": "64635c4d7d372acdba1fc3c36ffaaf12", "text": "We investigate a technique from the literature, called the phantom-types technique, that uses parametric polymorphism, type constraints, and unification of polymorphic types to model a subtyping hierarchy. Hindley-Milner type systems, such as the one found in Standard ML, can be used to enforce the subtyping relation, at least for first-order values. We show that this technique can be used to encode any finite subtyping hierarchy (including hierarchies arising from multiple interface inheritance). We formally demonstrate the suitability of the phantom-types technique for capturing first-order subtyping by exhibiting a type-preserving translation from a simple calculus with bounded polymorphism to a calculus embodying the type system of SML.", "title": "" } ]
scidocsrr
d481b29bacd75dfaeaa95fc807645f4f
DOES HUMAN FACIAL ATTRACTIVENESS HONESTLY ADVERTISE HEALTH? Longitudinal Data on an Evolutionary Question
[ { "docid": "6210a0a93b97a12c2062ac78953f3bd1", "text": "This article proposes a contextual-evolutionary theory of human mating strategies. Both men and women are hypothesized to have evolved distinct psychological mechanisms that underlie short-term and long-term strategies. Men and women confront different adaptive problems in short-term as opposed to long-term mating contexts. Consequently, different mate preferences become activated from their strategic repertoires. Nine key hypotheses and 22 predictions from Sexual Strategies Theory are outlined and tested empirically. Adaptive problems sensitive to context include sexual accessibility, fertility assessment, commitment seeking and avoidance, immediate and enduring resource procurement, paternity certainty, assessment of mate value, and parental investment. Discussion summarizes 6 additional sources of behavioral data, outlines adaptive problems common to both sexes, and suggests additional contexts likely to cause shifts in mating strategy.", "title": "" } ]
[ { "docid": "4599529680781f9d3d19f766e51a7734", "text": "Existing support vector regression (SVR) based image superresolution (SR) methods always utilize single layer SVR model to reconstruct source image, which are incapable of restoring the details and reduce the reconstruction quality. In this paper, we present a novel image SR approach, where a multi-layer SVR model is adopted to describe the relationship between the low resolution (LR) image patches and the corresponding high resolution (HR) ones. Besides, considering the diverse content in the image, we introduce pixel-wise classification to divide pixels into different classes, such as horizontal edges, vertical edges and smooth areas, which is more conductive to highlight the local characteristics of the image. Moreover, the input elements to each SVR model are weighted respectively according to their corresponding output pixel's space positions in the HR image. Experimental results show that, compared with several other learning-based SR algorithms, our method gains high-quality performance.", "title": "" }, { "docid": "f12c53ede3ef1cbab2641970aacbe16f", "text": "Considerable advances have been achieved in estimating the depth map from a single image via convolutional neural networks (CNNs) during the past few years. Combining depth prediction from CNNs with conventional monocular simultaneous localization and mapping (SLAM) is promising for accurate and dense monocular reconstruction, in particular addressing the two long-standing challenges in conventional monocular SLAM: low map completeness and scale ambiguity. However, depth estimated by pretrained CNNs usually fails to achieve sufficient accuracy for environments of different types from the training data, which are common for certain applications such as obstacle avoidance of drones in unknown scenes. Additionally, inaccurate depth prediction of CNN could yield large tracking errors in monocular SLAM. In this paper, we present a real-time dense monocular SLAM system, which effectively fuses direct monocular SLAM with an online-adapted depth prediction network for achieving accurate depth prediction of scenes of different types from the training data and providing absolute scale information for tracking and mapping. Specifically, on one hand, tracking pose (i.e., translation and rotation) from direct SLAM is used for selecting a small set of highly effective and reliable training images, which acts as ground truth for tuning the depth prediction network on-the-fly toward better generalization ability for scenes of different types. A stage-wise Stochastic Gradient Descent algorithm with a selective update strategy is introduced for efficient convergence of the tuning process. On the other hand, the dense map produced by the adapted network is applied to address scale ambiguity of direct monocular SLAM which in turn improves the accuracy of both tracking and overall reconstruction. The system with assistance of both CPUs and GPUs, can achieve real-time performance with progressively improved reconstruction accuracy. 
Experimental results on public datasets and live application to obstacle avoidance of drones demonstrate that our method outperforms the state-of-the-art methods with greater map completeness and accuracy, and a smaller tracking error.", "title": "" }, { "docid": "14c981a63e34157bb163d4586502a059", "text": "In this paper, we investigate an angle of arrival (AoA) and angle of departure (AoD) estimation algorithm for sparse millimeter wave multiple-input multiple-output (MIMO) channels. The analytical channel model whose use we advocate here is the beam space (or virtual) MIMO channel representation. By leveraging the beam space MIMO concept, we characterize probabilistic channel priors under an analog precoding and combining constraints. This investigation motivates Bayesian inference approaches to virtual AoA and AoD estimation. We divide the estimation task into downlink sounding for AoA estimation and uplink sounding for AoD estimation. A belief propagation (BP)-type algorithm is adopted, leading to computationally efficient approximate message passing (AMP) and approximate log-likelihood ratio testing (ALLRT) algorithms. Numerical results demonstrate that the proposed algorithm outperforms the conventional AMP in terms of the AoA and AoD estimation accuracy for the sparse millimeter wave MIMO channel.", "title": "" }, { "docid": "5aeffba75c1e6d5f0e7bde54662da8e8", "text": "A large number of Open Relation Extraction approaches have been proposed recently, covering a wide range of NLP machinery, from “shallow” (e.g., part-of-speech tagging) to “deep” (e.g., semantic role labeling–SRL). A natural question then is what is the tradeoff between NLP depth (and associated computational cost) versus effectiveness. This paper presents a fair and objective experimental comparison of 8 state-of-the-art approaches over 5 different datasets, and sheds some light on the issue. The paper also describes a novel method, EXEMPLAR, which adapts ideas from SRL to less costly NLP machinery, resulting in substantial gains both in efficiency and effectiveness, over binary and n-ary relation extraction tasks.", "title": "" }, { "docid": "6256a71f6c852d4be82f029e785b9d1f", "text": "Recently proposed robust 3D face alignment methods establish either dense or sparse correspondence between a 3D face model and a 2D facial image. The use of these methods presents new challenges as well as opportunities for facial texture analysis. In particular, by sampling the image using the fitted model, a facial UV can be created. Unfortunately, due to self-occlusion, such a UV map is always incomplete. In this paper, we propose a framework for training Deep Convolutional Neural Network (DCNN) to complete the facial UV map extracted from in-the-wild images. To this end, we first gather complete UV maps by fitting a 3D Morphable Model (3DMM) to various multiview image and video datasets, as well as leveraging on a new 3D dataset with over 3,000 identities. Second, we devise a meticulously designed architecture that combines local and global adversarial DCNNs to learn an identity-preserving facial UV completion model. We demonstrate that by attaching the completed UV to the fitted mesh and generating instances of arbitrary poses, we can increase pose variations for training deep face recognition/verification models, and minimise pose discrepancy during testing, which lead to better performance. Experiments on both controlled and in-the-wild UV datasets prove the effectiveness of our adversarial UV completion model. 
We achieve state-of-the-art verification accuracy, 94.05%, under the CFP frontal-profile protocol only by combining pose augmentation during training and pose discrepancy reduction during testing. We will release the first in-the-wild UV dataset (which we refer to as WildUV) that comprises complete facial UV maps from 1,892 identities for research purposes.", "title": "" }, { "docid": "c80222e5a7dfe420d16e10b45f8fab66", "text": "Matching people across non-overlapping camera views, known as person re-identification, is challenging due to the lack of spatial and temporal constraints and large visual appearance changes caused by variations in view angle, lighting, background clutter and occlusion. To address these challenges, most previous approaches aim to extract visual features that are both distinctive and stable under appearance changes. However, most visual features and their combinations under realistic conditions are neither stable nor distinctive thus should not be used indiscriminately. In this paper, we propose to formulate person re-identification as a distance learning problem, which aims to learn the optimal distance that can maximise matching accuracy regardless of the choice of representation. To that end, we introduce a novel Probabilistic Relative Distance Comparison (PRDC) model, which differs from most existing distance learning methods in that, rather than minimising intra-class variation whilst maximising inter-class variation, it aims to maximise the probability of a pair of true match having a smaller distance than that of a wrong match pair. This makes our model more tolerant to appearance changes and less susceptible to model over-fitting. Extensive experiments are carried out to demonstrate that 1) by formulating the person re-identification problem as a distance learning problem, notable improvement on matching accuracy can be obtained against conventional person re-identification techniques, which is particularly significant when the training sample size is small; and 2) our PRDC outperforms not only existing distance learning methods but also alternative learning methods based on boosting and learning to rank.", "title": "" }, { "docid": "35e662f6c1d75e6878a78c4c443b9448", "text": "This paper introduces a refined general definition of a skeleton that is based on a penalized-distance function and cannot create any of the degenerate cases of the earlier CEASAR and TEASAR algorithms. Additionally, we provide an algorithm that finds the skeleton accurately and rapidly. Our solution is fully automatic, which frees the user from having to engage in manual data preprocessing. We present the accurate skeletons computed on a number of test datasets. The algorithm is very efficient as demonstrated by the running times, which were all below seven minutes. Index Terms—Skeleton, centerline, medial axis, automatic preprocessing, modeling.", "title": "" }, { "docid": "21321c82a296da3c8c1f0637e3bfc3e7", "text": "We present a discrete distance transform in the style of the vector propagation algorithm by Danielsson. Like other vector propagation algorithms, the proposed method is close to exact, i.e., the error can be strictly bounded from above and is significantly smaller than one pixel. Our contribution is that the algorithm runs entirely on consumer class graphics hardware, thereby achieving a throughput of up to 96 Mpixels/s.
This allows the proposed method to be used in a wide range of applications that rely both on high speed and high quality.", "title": "" }, { "docid": "5bf90680117b7db4315cce18bc9aefa2", "text": "Motivated by aiding human operators in the detection of dangerous objects in passenger luggage, such as in airports, we develop an automatic object detection approach for multi-view X-ray image data. We make three main contributions: First, we systematically analyze the appearance variations of objects in X-ray images from inspection systems. We then address these variations by adapting standard appearance-based object detection approaches to the specifics of dual-energy X-ray data and the inspection scenario itself. To that end we reduce projection distortions, extend the feature representation, and address both in-plane and out-of-plane object rotations, which are a key challenge compared to many detection tasks in photographic images. Finally, we propose a novel multi-view (multi-camera) detection approach that combines single-view detections from multiple views and takes advantage of the mutual reinforcement of geometrically consistent hypotheses. While our multi-view approach can be used atop arbitrary single-view detectors, thus also for multi-camera detection in photographic images, we evaluate our method on detecting handguns in carry-on luggage. Our results show significant performance gains from all components.", "title": "" }, { "docid": "6042dab731ca69452d22eaa319365c77", "text": "An overview is presented of the current state-of-theart in silicon nanophotonic ring resonators. Basic theory of ring resonators is discussed, and applied to the peculiarities of submicron silicon photonic wire waveguides: the small dimensions and tight bend radii, sensitivity to perturbations and the boundary conditions of the fabrication processes. Theory is compared to quantitative measurements. Finally, several of the more promising applications of silicon ring resonators are discussed: filters and optical delay lines, label-free biosensors, and active rings for efficient modulators and even light sources. Silicon microring resonators Wim Bogaerts*, Peter De Heyn, Thomas Van Vaerenbergh, Katrien De Vos, Shankar Kumar Selvaraja, Tom Claes, Pieter Dumon, Peter Bienstman, Dries Van Thourhout, and Roel Baets", "title": "" }, { "docid": "a1221c2ae735a971047018911b5567e5", "text": "Market integration allows increasing the social welfare of a given society. In most markets, integration also raises the social welfare of the participating markets (partakers). However, electricity markets have complexities such as transmission network congestion and requirements of power reserve that could lead to a decrease in the social welfare of some partakers. The social welfare reduction of partakers, if it occurs, would surely be a hindrance to the development of regional markets, since participants are usually national systems. This paper shows a new model for the regional dispatch of energy and reserve, and proposes as constraints that the social welfare of partakers does not decrease with respect to that obtained from the isolated optimal operation. These social welfare constraints are characterized by their stochastic nature and their dependence on the energy price of different operating states. 
The problem is solved by the combination of two optimization models (hybrid optimization): A linear model embedded within a meta-heuristic algorithm, which is known as the swarm version of the Means Variance Mapping Optimization (MVMOS). MVMOS allows incorporating the stochastic nature of social welfare constraints through a dynamic penalty scheme, which considers the fulfillment degree along with the dynamics of the search process.", "title": "" }, { "docid": "f93ee5c9de994fa07e7c3c1fe6e336d1", "text": "Sleep bruxism (SB) is characterized by repetitive and coordinated mandible movements and non-functional teeth contacts during sleep time. Although the etiology of SB is controversial, the literature converges on its multifactorial origin. Occlusal factors, smoking, alcoholism, drug usage, stress, and anxiety have been described as SB trigger factors. Recent studies on this topic discussed the role of neurotransmitters on the development of SB. Thus, the purpose of this study was to detect and quantify the urinary levels of catecholamines, specifically of adrenaline, noradrenaline and dopamine, in subjects with SB and in control individuals. Urine from individuals with SB (n = 20) and without SB (n = 20) was subjected to liquid chromatography. The catecholamine data were compared by Mann–Whitney’s test (p ≤ 0.05). Our analysis showed higher levels of catecholamines in subjects with SB (adrenaline = 111.4 µg/24 h; noradrenaline = 261,5 µg/24 h; dopamine = 479.5 µg/24 h) than in control subjects (adrenaline = 35,0 µg/24 h; noradrenaline = 148,7 µg/24 h; dopamine = 201,7 µg/24 h). Statistical differences were found for the three catecholamines tested. It was concluded that individuals with SB have higher levels of urinary catecholamines.", "title": "" }, { "docid": "8cecac2a619701d7a7a16d706beadc0a", "text": "Machine learning relies on the assumption that unseen test instances of a classification problem follow the same distribution as observed training data. However, this principle can break down when machine learning is used to make important decisions about the welfare (employment, education, health) of strategic individuals. Knowing information about the classifier, such individuals may manipulate their attributes in order to obtain a better classification outcome. As a result of this behavior -- often referred to as gaming -- the performance of the classifier may deteriorate sharply. Indeed, gaming is a well-known obstacle for using machine learning methods in practice; in financial policy-making, the problem is widely known as Goodhart's law. In this paper, we formalize the problem, and pursue algorithms for learning classifiers that are robust to gaming.\n We model classification as a sequential game between a player named \"Jury\" and a player named \"Contestant.\" Jury designs a classifier, and Contestant receives an input to the classifier drawn from a distribution. Before being classified, Contestant may change his input based on Jury's classifier. However, Contestant incurs a cost for these changes according to a cost function. Jury's goal is to achieve high classification accuracy with respect to Contestant's original input and some underlying target classification function, assuming Contestant plays best response.
Contestant's goal is to achieve a favorable classification outcome while taking into account the cost of achieving it.\n For a natural class of \"separable\" cost functions, and certain generalizations, we obtain computationally efficient learning algorithms which are near optimal, achieving a classification error that is arbitrarily close to the theoretical minimum. Surprisingly, our algorithms are efficient even on concept classes that are computationally hard to learn. For general cost functions, designing an approximately optimal strategy-proof classifier, for inverse-polynomial approximation, is NP-hard.", "title": "" }, { "docid": "64a3877186106c911891f4f6fe7fbede", "text": "In this paper, we present a multimodal emotion recognition framework called EmotionMeter that combines brain waves and eye movements. To increase the feasibility and wearability of EmotionMeter in real-world applications, we design a six-electrode placement above the ears to collect electroencephalography (EEG) signals. We combine EEG and eye movements for integrating the internal cognitive states and external subconscious behaviors of users to improve the recognition accuracy of EmotionMeter. The experimental results demonstrate that modality fusion with multimodal deep neural networks can significantly enhance the performance compared with a single modality, and the best mean accuracy of 85.11% is achieved for four emotions (happy, sad, fear, and neutral). We explore the complementary characteristics of EEG and eye movements for their representational capacities and identify that EEG has the advantage of classifying happy emotion, whereas eye movements outperform EEG in recognizing fear emotion. To investigate the stability of EmotionMeter over time, each subject performs the experiments three times on different days. EmotionMeter obtains a mean recognition accuracy of 72.39% across sessions with the six-electrode EEG and eye movement features. These experimental results demonstrate the effectiveness of EmotionMeter within and between sessions.", "title": "" }, { "docid": "3aaf13c82f525299b7b4e93d316bfd18", "text": "Recently, many graph based hashing methods have been emerged to tackle large-scale problems. However, there exists two major bottlenecks: (1) directly learning discrete hashing codes is an NP-hard optimization problem; (2) the complexity of both storage and computational time to build a graph with n data points is O(n2). To address these two problems, in this paper, we propose a novel yet simple supervised graph based hashing method, asymmetric discrete graph hashing, by preserving the asymmetric discrete constraint and building an asymmetric affinity matrix to learn compact binary codes. Specifically, we utilize two different instead of identical discrete matrices to better preserve the similarity of the graph with short binary codes.We generate the asymmetric affinity matrix using m (m << n) selected anchors to approximate the similarity among all training data so that computational time and storage requirement can be significantly improved. In addition, the proposed method jointly learns discrete binary codes and a low-dimensional projection matrix to further improve the retrieval accuracy. 
Extensive experiments on three benchmark large-scale databases demonstrate its superior performance over the recent state of the arts with lower training time costs.", "title": "" }, { "docid": "e8f86dad01a7e3bd25bdabdc7a3d7136", "text": "In this paper, a wideband monopole antenna with high gain characteristics has been proposed. Number of slits was introduced at the far radiating edge to transform it to multiple monopole radiators. Partial ground plane has been used to widen the bandwidth while by inserting suitable slits at the radiating edges return loss and bandwidth has been improved. The proposed antenna provides high gain up to 13.2dB and the achieved impedance bandwidth is wider than an earlier reported design. FR4 Epoxy with dielectric constant 4.4 and loss tangent 0.02 has been used as substrate material. Antenna has been simulated using HFSS (High Frequency Structure Simulator) as a 3D electromagnetic field simulator, based on finite element method. A good settlement has been found between simulated and measured results. The proposed design is suitable for GSM (890-960MHz), GPS (L1:1575.42MHz, L2:1227.60MHz, L3:1381.05MHz, L4:1379.913MHz, L5:1176.45MHz), DCS (1710-1880MHz), PCS (1850-1990MHz), UMTS(1920-2170MHz), Wi-Fi/WLAN/Hiper LAN/IEEE 802.11 2.4GHz (2412-2484MHz), 3.6GHz (3657.5-3690.0MHz) and 4.9/5.0GHz (4915-5825MHz), Bluetooth (2400-2484MHz), WiMAX 2.3GHz (2.3-2.5GHz), 2.5GHz (2500-2690 MHz), 3.3GHz, 3.5GHz (3400-3600MHz) and 5.8GHz (5.6-5.9GHz) & LTE applications.", "title": "" }, { "docid": "8904494e20d6761437e4d63c86c43e78", "text": "Deep residual networks (ResNets) and their variants are widely used in many computer vision applications and natural language processing tasks. However, the theoretical principles for designing and training ResNets are still not fully understood. Recently, several points of view have emerged to try to interpret ResNet theoretically, such as unraveled view, unrolled iterative estimation and dynamical systems view. In this paper, we adopt the dynamical systems point of view, and analyze the lesioning properties of ResNet both theoretically and experimentally. Based on these analyses, we additionally propose a novel method for accelerating ResNet training. We apply the proposed method to train ResNets and Wide ResNets for three image classification benchmarks, reducing training time by more than 40% with superior or on-par accuracy.", "title": "" }, { "docid": "ccbd40976208fcb7a61d67674d1115af", "text": "Requirements Management (RM) is about organising the requirements and additional information gathered during the Requirements Engineering (RE) process, and managing changes of these requirements. Practioners as well as researchers acknowledge that RM is both important and difficult, and that changing requirements is a challenging factor in many development projects. But why, then, is so little research done within RM? This position paper identifies and discusses five research areas where further research within RM is needed.", "title": "" }, { "docid": "8b2f4d597b1aa5a9579fa3e37f6acc65", "text": "This work presents a 910MHz/2.4GHz dual-band dipole antenna for Power Harvesting and/or Sensor Network applications whose main advantage lies on its easily tunable bands. Tunability is achieved via the low and high frequency dipole separation Wgap. This separation is used to increase or decrease the S11 magnitude of the required bands. 
Such tunability can be used to harvest energy in environments where the electric field strength of one carrier band is dominant over the other one, or in the case when both carriers have similar electric field strength. If the environment is crowded by 820MHz-1.02GHz carriers, Wgap is adjusted to 1mm in order to harvest/sense only the selected band; if the environment is full of 2.24GHz - 2.52 GHz carriers, Wgap is set to 7mm. When Wgap is set to 4mm, both bands can be harvested/sensed. The proposed antenna works for UHF-RFID, GSM-840MHz, 3G-UMTS, Wi-Fi and Bluetooth standards. Simulations are carried out in Advanced Design System (ADS) Momentum using commercial FR4 printed circuit board specification.", "title": "" } ]
scidocsrr
7b759c86d2bdee3deb215499d076f94b
"Killing Spree": Exploring the Connection Between Competitive Game Play and Aggressive Cognition
[ { "docid": "eded90c762031357c1f5366fefca007c", "text": "The authors examined whether the nature of the opponent (computer, friend, or stranger) influences spatial presence, emotional responses, and threat and challenge appraisals when playing video games. In a within-subjects design, participants played two different video games against a computer, a friend, and a stranger. In addition to self-report ratings, cardiac interbeat intervals (IBIs) and facial electromyography (EMG) were measured to index physiological arousal and emotional valence. When compared to playing against a computer, playing against another human elicited higher spatial presence, engagement, anticipated threat, post-game challenge appraisals, and physiological arousal, as well as more positively valenced emotional responses. In addition, playing against a friend elicited greater spatial presence, engagement, and self-reported and physiological arousal, as well as more positively valenced facial EMG responses, compared to playing against a stranger. The nature of the opponent influences spatial presence when playing video games, possibly through the mediating influence on arousal and attentional processes.", "title": "" }, { "docid": "20d96905880332d6ef5a33b4dd0d8827", "text": "In spite of the fact that equal opportunities for men and women have been a priority in many countries, enormous gender differences prevail in most competitive high-ranking positions. We conduct a series of controlled experiments to investigate whether women might react differently than men to competitive incentive schemes commonly used in job evaluation and promotion. We observe no significant gender difference in mean performance when participants are paid proportional to their performance. But in the competitive environment with mixed gender groups we observe a significant gender difference: the mean performance of men has a large and significant, that of women is unchanged. This gap is not due to gender differences in risk aversion. We then run the same test with homogeneous groups, to investigate whether women under-perform only when competing against men. Women do indeed increase their performance and gender differences in mean performance are now insignificant. These results may be due to lower skill of women, or more likely to the fact that women dislike competition, or alternatively that they feel less competent than their male competitors, which depresses their performance in mixed tournaments. Our last experiment provides support for this hypothesis.", "title": "" } ]
[ { "docid": "04e4c1b80bcf1a93cafefa73563ea4d3", "text": "The last decade has produced an explosion in neuroscience research examining young children's early processing of language. Noninvasive, safe functional brain measurements have now been proven feasible for use with children starting at birth. The phonetic level of language is especially accessible to experimental studies that document the innate state and the effect of learning on the brain. The neural signatures of learning at the phonetic level can be documented at a remarkably early point in development. Continuity in linguistic development from infants' earliest brain responses to phonetic stimuli is reflected in their language and prereading abilities in the second, third, and fifth year of life, a finding with theoretical and clinical impact. There is evidence that early mastery of the phonetic units of language requires learning in a social context. Neuroscience on early language learning is beginning to reveal the multiple brain systems that underlie the human language faculty.", "title": "" }, { "docid": "0dfcbae479f0af59236a5213cb37983a", "text": "The objective of this work is to detect the use of automated programs, known as game bots, based on social interactions in MMORPGs. Online games, especially MMORPGs, have become extremely popular among internet users in the recent years. Not only the popularity but also security threats such as the use of game bots and identity theft have grown manifold. As bot players can obtain unjustified assets without corresponding efforts, the gaming community does not allow players to use game bots. However, the task of identifying game bots is not an easy one because of the velocity and variety of their evolution in mimicking human behavior. Existing methods for detecting game bots have a few drawbacks like reducing immersion of players, low detection accuracy rate, and collision with other security programs. We propose a novel method for detecting game bots based on the fact that humans and game bots tend to form their social network in contrasting ways. In this work we focus particularly on the in game mentoring network from amongst several social networks. We construct a couple of new features based on eigenvector centrality to capture this intuition and establish their importance for detecting game bots. The results show a significant increase in the classification accuracy of various classifiers with the introduction of these features.", "title": "" }, { "docid": "f752f66cbd7a43c3d45940a8fbec0dbf", "text": "ChEMBL is an Open Data database containing binding, functional and ADMET information for a large number of drug-like bioactive compounds. These data are manually abstracted from the primary published literature on a regular basis, then further curated and standardized to maximize their quality and utility across a wide range of chemical biology and drug-discovery research problems. Currently, the database contains 5.4 million bioactivity measurements for more than 1 million compounds and 5200 protein targets. Access is available through a web-based interface, data downloads and web services at: https://www.ebi.ac.uk/chembldb.", "title": "" }, { "docid": "5f3dc141b69eb50e17bdab68a2195e13", "text": "The purpose of this study is to develop a fuzzy-AHP multi-criteria decision making model for procurement process. It aims to measure the procurement performance in the automotive industry. 
As such, measurement of procurement will enable competitive advantage and provide a model for continuous improvement. The rapid growth in the market and the level of competition in the global economy transformed procurement into a strategic issue, which is broader in scope and responsibilities as compared to purchasing. This study reviews the existing literature in procurement performance measurement to identify the key areas of measurement and a hierarchical model is developed with a set of generic measures. In addition, a questionnaire is developed for pair-wise comparison and to collect opinion from practitioners, researchers, managers, etc. The relative importance of the measurement criteria are assessed using Analytical Hierarchy Process (AHP) and fuzzy-AHP. The validity of the model is confirmed with the results obtained.", "title": "" }, { "docid": "37f55e03f4d1ff3b9311e537dc7122b5", "text": "Extracting governing equations from data is a central challenge in many diverse areas of science and engineering. Data are abundant whereas models often remain elusive, as in climate science, neuroscience, ecology, finance, and epidemiology, to name only a few examples. In this work, we combine sparsity-promoting techniques and machine learning with nonlinear dynamical systems to discover governing equations from noisy measurement data. The only assumption about the structure of the model is that there are only a few important terms that govern the dynamics, so that the equations are sparse in the space of possible functions; this assumption holds for many physical systems in an appropriate basis. In particular, we use sparse regression to determine the fewest terms in the dynamic governing equations required to accurately represent the data. This results in parsimonious models that balance accuracy with model complexity to avoid overfitting. We demonstrate the algorithm on a wide range of problems, from simple canonical systems, including linear and nonlinear oscillators and the chaotic Lorenz system, to the fluid vortex shedding behind an obstacle. The fluid example illustrates the ability of this method to discover the underlying dynamics of a system that took experts in the community nearly 30 years to resolve. We also show that this method generalizes to parameterized systems and systems that are time-varying or have external forcing.", "title": "" }, { "docid": "aa64bd9576044ec5e654c9f29c4f7d84", "text": "BACKGROUND\nSocial media are dynamic and interactive computer-mediated communication tools that have high penetration rates in the general population in high-income and middle-income countries. However, in medicine and health care, a large number of stakeholders (eg, clinicians, administrators, professional colleges, academic institutions, ministries of health, among others) are unaware of social media's relevance, potential applications in their day-to-day activities, as well as the inherent risks and how these may be attenuated and mitigated.\n\n\nOBJECTIVE\nWe conducted a narrative review with the aim to present case studies that illustrate how, where, and why social media are being used in the medical and health care sectors.\n\n\nMETHODS\nUsing a critical-interpretivist framework, we used qualitative methods to synthesize the impact and illustrate, explain, and provide contextual knowledge of the applications and potential implementations of social media in medicine and health care.
Both traditional (eg, peer-reviewed) and nontraditional (eg, policies, case studies, and social media content) sources were used, in addition to an environmental scan (using Google and Bing Web searches) of resources.\n\n\nRESULTS\nWe reviewed, evaluated, and synthesized 76 articles, 44 websites, and 11 policies/reports. Results and case studies are presented according to 10 different categories of social media: (1) blogs (eg, WordPress), (2) microblogs (eg, Twitter), (3) social networking sites (eg, Facebook), (4) professional networking sites (eg, LinkedIn, Sermo), (5) thematic networking sites (eg, 23andMe), (6) wikis (eg, Wikipedia), (7) mashups (eg, HealthMap), (8) collaborative filtering sites (eg, Digg), (9) media sharing sites (eg, YouTube, Slideshare), and others (eg, SecondLife). Four recommendations are provided and explained for stakeholders wishing to engage with social media while attenuating risk: (1) maintain professionalism at all times, (2) be authentic, have fun, and do not be afraid, (3) ask for help, and (4) focus, grab attention, and engage.\n\n\nCONCLUSIONS\nThe role of social media in the medical and health care sectors is far reaching, and many questions in terms of governance, ethics, professionalism, privacy, confidentiality, and information quality remain unanswered. By following the guidelines presented, professionals have a starting point to engage with social media in a safe and ethical manner. Future research will be required to understand the synergies between social media and evidence-based practice, as well as develop institutional policies that benefit patients, clinicians, public health practitioners, and industry alike.", "title": "" }, { "docid": "47baa10f94368bc056bbca3dd4caec0c", "text": "We characterize the problem of pose estimation for rigid objects in terms of determining viewpoint to explain coarse pose and keypoint prediction to capture the finer details. We address both these tasks in two different settings - the constrained setting with known bounding boxes and the more challenging detection setting where the aim is to simultaneously detect and correctly estimate pose of objects. We present Convolutional Neural Network based architectures for these and demonstrate that leveraging viewpoint estimates can substantially improve local appearance based keypoint predictions. In addition to achieving significant improvements over state-of-the-art in the above tasks, we analyze the error modes and effect of object characteristics on performance to guide future efforts towards this goal.", "title": "" }, { "docid": "92699fa23a516812c7fcb74ba38f42c6", "text": "Deep reinforcement learning (DRL) is poised to revolutionize the field of artificial intelligence (AI) and represents a step toward building autonomous systems with a higherlevel understanding of the visual world. Currently, deep learning is enabling reinforcement learning (RL) to scale to problems that were previously intractable, such as learning to play video games directly from pixels. DRL algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of RL, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep RL, including the deep Q-network (DQN), trust region policy optimization (TRPO), and asynchronous advantage actor critic. 
In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via RL. To conclude, we describe several current areas of research within the field.", "title": "" }, { "docid": "58677916e11e6d5401b7396d117a517b", "text": "This work contributes to the development of a common framework for the discussion and analysis of dexterous manipulation across the human and robotic domains. An overview of previous work is first provided along with an analysis of the tradeoffs between arm and hand dexterity. A hand-centric and motion-centric manipulation classification is then presented and applied in four different ways. It is first discussed how the taxonomy can be used to identify a manipulation strategy. Then, applications for robot hand analysis and engineering design are explained. Finally, the classification is applied to three activities of daily living (ADLs) to distinguish the patterns of dexterous manipulation involved in each task. The same analysis method could be used to predict problem ADLs for various impairments or to produce a representative benchmark set of ADL tasks. Overall, the classification scheme proposed creates a descriptive framework that can be used to effectively describe hand movements during manipulation in a variety of contexts and might be combined with existing object centric or other taxonomies to provide a complete description of a specific manipulation task.", "title": "" }, { "docid": "104fa95b500df05a052a230e80797f59", "text": "Stochastic variational inference finds good posterior approximations of probabilistic models with very large data sets. It optimizes the variational objective with stochastic optimization, following noisy estimates of the natural gradient. Operationally, stochastic inference iteratively subsamples from the data, analyzes the subsample, and updates parameters with a decreasing learning rate. However, the algorithm is sensitive to that rate, which usually requires hand-tuning to each application. We solve this problem by developing an adaptive learning rate for stochastic inference. Our method requires no tuning and is easily implemented with computations already made in the algorithm. We demonstrate our approach with latent Dirichlet allocation applied to three large text corpora. Inference with the adaptive learning rate converges faster and to a better approximation than the best settings of hand-tuned rates.", "title": "" }, { "docid": "fe2ef685733bae2737faa04e8a10087d", "text": "Federal health agencies are currently developing regulatory strategies for Artificial Intelligence based medical products. Regulatory regimes need to account for the new risks and benefits that come with modern AI, along with safety concerns and potential for continual autonomous learning that makes AI non-static and dramatically different than the drugs and products that agencies are used to regulating. Currently, the U.S. Food and Drug Administration (FDA) and other regulatory agencies treat AI-enabled products as medical devices. Alternatively, we propose that AI regulation in the medical domain can analogously adopt aspects of the models used to regulate medical providers.", "title": "" }, { "docid": "2d4357831f83de026759776e019934da", "text": "Mapping the physical location of nodes within a wireless sensor network (WSN) is critical in many applications such as tracking and environmental sampling. 
Passive RFID tags pose an interesting solution to localizing nodes because an outside reader, rather than the tag, supplies the power to the tag. Thus, utilizing passive RFID technology allows a localization scheme to not be limited to objects that have wireless communication capability because the technique only requires that the object carries a RFID tag. This paper illustrates a method in which objects can be localized without the need to communicate received signal strength information between the reader and the tagged item. The method matches tag count percentage patterns under different signal attenuation levels to a database of tag count percentages, attenuations and distances from the base station reader.", "title": "" }, { "docid": "660465cbd4bd95108a2381ee5a97cede", "text": "In this paper we discuss the design and implementation of an automated usability evaluation method for iOS applications. In contrast to common usability testing methods, it is not explicitly necessary to involve an expert or subjects. These circumstances reduce costs, time and personnel expenditures. Professionals are replaced by the automation tool while test participants are exchanged with consumers of the launched application. Interactions of users are captured via a fully automated capturing framework which creates a record of user interactions for each session and sends them to a central server. A usability problem is defined as a sequence of interactions and pattern recognition specified by interaction design patterns is applied to find these problems. Nevertheless, it falls back to the user input for accurate results. Similar to the problem, the solution of the problem is based on the HCI design pattern. An evaluation shows the functionality of our approach compared to a traditional usability evaluation method.", "title": "" }, { "docid": "c35a4278aa4a084d119238fdd68d9eb6", "text": "ARM TrustZone, which provides a Trusted Execution Environment (TEE), normally plays a role in keeping security-sensitive resources safe. However, to properly control access to the resources, it is not enough to just isolate them from the Rich Execution Environment (REE). In addition to the isolation, secure communication should be guaranteed between security-critical resources in the TEE and legitimate REE processes that are permitted to use them. Even though there is a TEE security solution — namely, a kernel-integrity monitor — it aims to protect the REE kernel’s static regions, not to secure communication between the REE and TEE. We propose SeCReT to ameliorate this problem. SeCReT is a framework that builds a secure channel between the REE and TEE by enabling REE processes to use session keys in the REE that is regarded as unsafe region. SeCReT provides the session key to a requestor process only when the requestor’s code and control flow integrity are verified. To prevent the key from being exposed to an attacker who already compromised the REE kernel, SeCReT flushes the key from the memory every time the processor switches into kernel mode. In this paper, we present the design and implementation of SeCReT to show how it protects the key in the REE. Our prototype is implemented on Arndale board, which offers a Cortex-A15 dual-core processor with TrustZone as its security extension. 
We performed a security analysis by using a kernel rootkit and also ran LMBench microbenchmark to evaluate the performance overhead imposed by SeCReT.", "title": "" }, { "docid": "257f00fc5a4b2a0addbd7e9cc2bf6fec", "text": "Security experts have demonstrated numerous risks imposed by Internet of Things (IoT) devices on organizations. Due to the widespread adoption of such devices, their diversity, standardization obstacles, and inherent mobility, organizations require an intelligent mechanism capable of automatically detecting suspicious IoT devices connected to their networks. In particular, devices not included in a white list of trustworthy IoT device types (allowed to be used within the organizational premises) should be detected. In this research, Random Forest, a supervised machine learning algorithm, was applied to features extracted from network traffic data with the aim of accurately identifying IoT device types from the white list. To train and evaluate multi-class classifiers, we collected and manually labeled network traffic data from 17 distinct IoT devices, representing nine types of IoT devices. Based on the classification of 20 consecutive sessions and the use of majority rule, IoT device types that are not on the white list were correctly detected as unknown in 96% of test cases (on average), and white listed device types were correctly classified by their actual types in 99% of cases. Some IoT device types were identified quicker than others (e.g., sockets and thermostats were successfully detected within five TCP sessions of connecting to the network). Perfect detection of unauthorized IoT device types was achieved upon analyzing 110 consecutive sessions; perfect classification of white listed types required 346 consecutive sessions, 110 of which resulted in 99.49% accuracy. Further experiments demonstrated the successful applicability of classifiers trained in one location and tested on another. In addition, a discussion is provided regarding the resilience of our machine learning-based IoT white listing method to adversarial attacks.", "title": "" }, { "docid": "adc51e9fdbbb89c9a47b55bb8823c7fe", "text": "State-of-the-art model counters are based on exhaustive DPLL algorithms, and have been successfully used in probabilistic reasoning, one of the key problems in AI. In this article, we present a new exhaustive DPLL algorithm with a formal semantics, a proof of correctness, and a modular design. The modular design is based on the separation of the core model counting algorithm from SAT solving techniques. We also show that the trace of our algorithm belongs to the language of Sentential Decision Diagrams (SDDs), which is a subset of Decision-DNNFs, the trace of existing state-of-the-art model counters. Still, our experimental analysis shows comparable results against state-of-the-art model counters. Furthermore, we obtain the first top-down SDD compiler, and show orders-of-magnitude improvements in SDD construction time against the existing bottom-up SDD compiler.", "title": "" }, { "docid": "b1a538752056e91fd5800911f36e6eb0", "text": "BACKGROUND\nThe current, so-called \"Millennial\" generation of learners is frequently characterized as having deep understanding of, and appreciation for, technology and social connectedness. 
This generation of learners has also been molded by a unique set of cultural influences that are essential for medical educators to consider in all aspects of their teaching, including curriculum design, student assessment, and interactions between faculty and learners.\n\n\nAIM\n The following tips outline an approach to facilitating learning of our current generation of medical trainees.\n\n\nMETHOD\n The method is based on the available literature and the authors' experiences with Millennial Learners in medical training.\n\n\nRESULTS\n The 12 tips provide detailed approaches and specific strategies for understanding and engaging Millennial Learners and enhancing their learning.\n\n\nCONCLUSION\n With an increased understanding of the characteristics of the current generation of medical trainees, faculty will be better able to facilitate learning and optimize interactions with Millennial Learners.", "title": "" }, { "docid": "84f7b499cd608de1ee7443fcd7194f19", "text": "In this paper, we present a new computationally efficient numerical scheme for the minimizing flow approach for optimal mass transport (OMT) with applications to non-rigid 3D image registration. The approach utilizes all of the gray-scale data in both images, and the optimal mapping from image A to image B is the inverse of the optimal mapping from B to A. Further, no landmarks need to be specified, and the minimizer of the distance functional involved is unique. Our implementation also employs multigrid, and parallel methodologies on a consumer graphics processing unit (GPU) for fast computation. Although computing the optimal map has been shown to be computationally expensive in the past, we show that our approach is orders of magnitude faster then previous work and is capable of finding transport maps with optimality measures (mean curl) previously unattainable by other works (which directly influences the accuracy of registration). We give results where the algorithm was used to compute non-rigid registrations of 3D synthetic data as well as intra-patient pre-operative and post-operative 3D brain MRI datasets.", "title": "" }, { "docid": "adb17811b539f419285779d62932736d", "text": "This paper proposes a high efficiency low cost AC/DC converter for adapter application. In order to achieve high efficiency and low cost for adapter with universal AC input, a single stage bridgeless flyback PFC converter with peak current clamping technique was proposed. Compared with conventional flyback PFC converter, the conduction loss is reduced due to bridgeless structure. And the size of transformer can also be significantly reduced due to lower peak current, which results in lower cost and higher power density. Detailed operation principles and design considerations are illustrated. Experimental results from a 90W prototype with universal input and 20V/4.5A output are presented to verify the operation and performance of the proposed converter. The minimum efficiency at full load is above 91% over the entire input range.", "title": "" }, { "docid": "c9a04b21e60e971908e02e2804962283", "text": "We used a dynamically scaled model insect to measure the rotational forces produced by a flapping insect wing. A steadily translating wing was rotated at a range of constant angular velocities, and the resulting aerodynamic forces were measured using a sensor attached to the base of the wing. These instantaneous forces were compared with quasi-steady estimates based on translational force coefficients. 
Because translational and rotational velocities were constant, the wing inertia was negligible, and any difference between measured forces and estimates based on translational force coefficients could be attributed to the aerodynamic effects of wing rotation. By factoring out the geometry and kinematics of the wings from the rotational forces, we determined rotational force coefficients for a range of angular velocities and different axes of rotation. The measured coefficients were compared with a mathematical model developed for two-dimensional motions in inviscid fluids, which we adapted to the three-dimensional case using blade element theory. As predicted by theory, the rotational coefficient varied linearly with the position of the rotational axis for all angular velocities measured. The coefficient also, however, varied with angular velocity, in contrast to theoretical predictions. Using the measured rotational coefficients, we modified a standard quasi-steady model of insect flight to include rotational forces, translational forces and the added mass inertia. The revised model predicts the time course of force generation for several different patterns of flapping kinematics more accurately than a model based solely on translational force coefficients. By subtracting the improved quasi-steady estimates from the measured forces, we isolated the aerodynamic forces due to wake capture.", "title": "" } ]
scidocsrr
aa5b2ef46839c758932a01d215f2a377
Visual analytics in healthcare - opportunities and research challenges
[ { "docid": "4457aa3443d756a4afeb76f0571d3e25", "text": "THE AMOUNT OF DATA BEING DIGITALLY COLLECTED AND stored is vast and expanding rapidly. As a result, the science of data management and analysis is also advancing to enable organizations to convert this vast resource into information and knowledge that helps them achieve their objectives. Computer scientists have invented the term big data to describe this evolving technology. Big data has been successfully used in astronomy (eg, the Sloan Digital Sky Survey of telescopic information), retail sales (eg, Walmart’s expansive number of transactions), search engines (eg, Google’s customization of individual searches based on previous web data), and politics (eg, a campaign’s focus of political advertisements on people most likely to support their candidate based on web searches). In this Viewpoint, we discuss the application of big data to health care, using an economic framework to highlight the opportunities it will offer and the roadblocks to implementation. We suggest that leveraging the collection of patient and practitioner data could be an important way to improve quality and efficiency of health care delivery. Widespread uptake of electronic health records (EHRs) has generated massive data sets. A survey by the American Hospital Association showed that adoption of EHRs has doubled from 2009 to 2011, partly a result of funding provided by the Health Information Technology for Economic and Clinical Health Act of 2009. Most EHRs now contain quantitative data (eg, laboratory values), qualitative data (eg, text-based documents and demographics), and transactional data (eg, a record of medication delivery). However, much of this rich data set is currently perceived as a byproduct of health care delivery, rather than a central asset to improve its efficiency. The transition of data from refuse to riches has been key in the big data revolution of other industries. Advances in analytic techniques in the computer sciences, especially in machine learning, have been a major catalyst for dealing with these large information sets. These analytic techniques are in contrast to traditional statistical methods (derived from the social and physical sciences), which are largely not useful for analysis of unstructured data such as text-based documents that do not fit into relational tables. One estimate suggests that 80% of business-related data exist in an unstructured format. The same could probably be said for health care data, a large proportion of which is text-based. In contrast to most consumer service industries, medicine adopted a practice of generating evidence from experimental (randomized trials) and quasi-experimental studies to inform patients and clinicians. The evidence-based movement is founded on the belief that scientific inquiry is superior to expert opinion and testimonials. In this way, medicine was ahead of many other industries in terms of recognizing the value of data and information guiding rational decision making. However, health care has lagged in uptake of newer techniques to leverage the rich information contained in EHRs. There are 4 ways big data may advance the economic mission of health care delivery by improving quality and efficiency. First, big data may greatly expand the capacity to generate new knowledge. The cost of answering many clinical questions prospectively, and even retrospectively, by collecting structured data is prohibitive. 
Analyzing the unstructured data contained within EHRs using computational techniques (eg, natural language processing to extract medical concepts from free-text documents) permits finer data acquisition in an automated fashion. For instance, automated identification within EHRs using natural language processing was superior in detecting postoperative complications compared with patient safety indicators based on discharge coding. Big data offers the potential to create an observational evidence base for clinical questions that would otherwise not be possible and may be especially helpful with issues of generalizability. The latter issue limits the application of conclusions derived from randomized trials performed on a narrow spectrum of participants to patients who exhibit very different characteristics. Second, big data may help with knowledge dissemination. Most physicians struggle to stay current with the latest evidence guiding clinical practice. The digitization of medical literature has greatly improved access; however, the sheer", "title": "" }, { "docid": "3d4f6ba4239854a91cee61bded978057", "text": "OBJECTIVE\nThe aim of this study is to analyze and visualize the polymorbidity associated with chronic kidney disease (CKD). The study shows diseases associated with CKD before and after CKD diagnosis in a time-evolutionary type visualization.\n\n\nMATERIALS AND METHODS\nOur sample data came from a population of one million individuals randomly selected from the Taiwan National Health Insurance Database, 1998 to 2011. From this group, those patients diagnosed with CKD were included in the analysis. We selected 11 of the most common diseases associated with CKD before its diagnosis and followed them until their death or up to 2011. We used a Sankey-style diagram, which quantifies and visualizes the transition between pre- and post-CKD states with various lines and widths. The line represents groups and the width of a line represents the number of patients transferred from one state to another.\n\n\nRESULTS\nThe patients were grouped according to their states: that is, diagnoses, hemodialysis/transplantation procedures, and events such as death. A Sankey diagram with basic zooming and panning functions was developed that temporally and qualitatively depicts the changes in comorbidities that occurred in the pre- and post-CKD states.\n\n\nDISCUSSION\nThis represents a novel visualization approach for temporal patterns of polymorbidities associated with any complex disease and its outcomes. The Sankey diagram is a promising method for visualizing complex diseases and exploring the effect of comorbidities on outcomes in a time-evolution style.\n\n\nCONCLUSIONS\nThis type of visualization may help clinicians foresee possible outcomes of complex diseases by considering comorbidities that the patients have developed.", "title": "" } ]
[ { "docid": "4b988535edefeb3ff7df89bcb900dd1c", "text": "Context: As a result of automated software testing, large amounts of software test code (script) are usually developed by software teams. Automated test scripts provide many benefits, such as repeatable, predictable, and efficient test executions. However, just like any software development activity, development of test scripts is tedious and error prone. We refer, in this study, to all activities that should be conducted during the entire lifecycle of test-code as Software Test-Code Engineering (STCE). Objective: As the STCE research area has matured and the number of related studies has increased, it is important to systematically categorize the current state-of-the-art and to provide an overview of the trends in this field. Such summarized and categorized results provide many benefits to the broader community. For example, they are valuable resources for new researchers (e.g., PhD students) aiming to conduct additional secondary studies. Method: In this work, we systematically classify the body of knowledge related to STCE through a systematic mapping (SM) study. As part of this study, we pose a set of research questions, define selection and exclusion criteria, and systematically develop and refine a systematic map. Results: Our study pool includes a set of 60 studies published in the area of STCE between 1999 and 2012. Our mapping data is available through an online publicly-accessible repository. We derive the trends for various aspects of STCE. Among our results are the following: (1) There is an acceptable mix of papers with respect to different contribution facets in the field of STCE and the top two leading facets are tool (68%) and method (65%). The studies that presented new processes, however, had a low rate (3%), which denotes the need for more process-related studies in this area. (2) Results of investigation about research facet of studies and comparing our result to other SM studies shows that, similar to other fields in software engineering, STCE is moving towards more rigorous validation approaches. (3) A good mixture of STCE activities has been presented in the primary studies. Among them, the two leading activities are quality assessment and co-maintenance of test-code with production code. The highest growth rate for co-maintenance activities in recent years shows the importance and challenges involved in this activity. (4) There are two main categories of quality assessment activity: detection of test smells and oracle assertion adequacy. (5) JUnit is the leading test framework which has been used in about 50% of the studies. (6) There is a good mixture of SUT types used in the studies: academic experimental systems (or simple code examples), real open-source and commercial systems. (7) Among 41 tools that are proposed for STCE, less than half of the tools (45%) were available for download. It is good to have this percentile of tools to be available, although not perfect, since the availability of tools can lead to higher impact on research community and industry. Conclusion: We discuss the emerging trends in STCE, and discuss the implications for researchers and practitioners in this area. 
The results of our systematic mapping can help researchers to obtain an overview of existing STCE approaches and spot areas in the field that require more attention from the", "title": "" }, { "docid": "ca8aa3e930fd36a16ac36546a25a1fde", "text": "Accurate State-of-Charge (SOC) estimation of Li-ion batteries is essential for effective battery control and energy management of electric and hybrid electric vehicles. To this end, first, the battery is modelled by an OCV-R-RC equivalent circuit. Then, a dual Bayesian estimation scheme is developed-The battery model parameters are identified online and fed to the SOC estimator, the output of which is then fed back to the parameter identifier. Both parameter identification and SOC estimation are treated in a Bayesian framework. The square-root recursive least-squares estimator and the extended Kalman-Bucy filter are systematically paired up for the first time in the battery management literature to tackle the SOC estimation problem. The proposed method is finally compared with the convectional Coulomb counting method. The results indicate that the proposed method significantly outperforms the Coulomb counting method in terms of accuracy and robustness.", "title": "" }, { "docid": "083f43f1cc8fe2ad186567f243ee04de", "text": "We consider the task of recognition of Australian vehicle number plates (also called license plates or registration plates in other countries). A system for Australian number plate recognition must cope with wide variations in the appearance of the plates. Each state uses its own range of designs with font variations between the designs. There are special designs issued for significant events such as the Sydney 2000 Olympic Games. Also, vehicle owners may place the plates inside glass covered frames or use plates made of non-standard materials. These issues compound the complexity of automatic number plate recognition, making existing approaches inadequate. We have developed a system that incorporates a novel combination of image processing and artificial neural network technologies to successfully locate and read Australian vehicle number plates in digital images. Commercial application of the system is envisaged.", "title": "" }, { "docid": "10ebda480df1157da5581b6219a9464a", "text": "Our goal is to create a convenient natural language interface for performing wellspecified but complex actions such as analyzing data, manipulating text, and querying databases. However, existing natural language interfaces for such tasks are quite primitive compared to the power one wields with a programming language. To bridge this gap, we start with a core programming language and allow users to “naturalize” the core language incrementally by defining alternative, more natural syntax and increasingly complex concepts in terms of compositions of simpler ones. In a voxel world, we show that a community of users can simultaneously teach a common system a diverse language and use it to build hundreds of complex voxel structures. Over the course of three days, these users went from using only the core language to using the naturalized language in 85.9% of the last 10K utterances.", "title": "" }, { "docid": "df5778fce3318029d249de1ff37b0715", "text": "The Switched Reluctance Machine (SRM) is a robust machine and is a candidate for ultra high speed applications. Until now the area of ultra high speed machines has been dominated by permanent magnet machines (PM). The PM machine has a higher torque density and some other advantages compared to SRMs. 
However, the soaring prices of the rare earth materials are driving the efforts to find an alternative to PM machines without significantly impacting the performance. At the same time significant progress has been made in the design and control of the SRM. This paper reviews the progress of the SRM as a high speed machine and proposes a novel rotor structure design to resolve the challenge of high windage losses at ultra high speed. It then elaborates on the path of modifying the design to achieve optimal performance. The simulation result of the final design is verified on FEA software. Finally, a prototype machine with similar design is built and tested to verify the simulation model. The experimental waveform indicates good agreement with the simulation result. Therefore, the performance of the prototype machine is analyzed and presented at the end of this paper.", "title": "" }, { "docid": "3ad0b3baa7d9f55d4d2f2b8d8c54b86d", "text": "In this work we solve the uncalibrated photometric stereo problem with lights placed near the scene. Although the devised model is more complex than its far-light counterpart, we show that under a global linear ambiguity the reconstruction is possible up to a rotation and scaling, which can be easily fixed. We also propose a solution for reconstructing the normal map, the albedo, the light positions and the light intensities of a scene given only a sequence of near-light images. This is done in an alternating minimization framework which first estimates both the normals and the albedo, and then the light positions and intensities. We validate our method on real world experiments and show that a near-light model leads to a significant improvement in the surface reconstruction compared to the classic distant illumination case.", "title": "" }, { "docid": "6cf2ffb0d541320b1ad04dc3b9e1c9a4", "text": "Prediction of potential fraudulent activities may prevent both the stakeholders and the appropriate regulatory authorities of national or international level from being deceived. The objective difficulties on collecting adequate data that are obsessed by completeness affects the reliability of the most supervised Machine Learning methods. This work examines the effectiveness of forecasting fraudulent financial statements using semi-supervised classification techniques (SSC) that require just a few labeled examples for achieving robust learning behaviors mining useful data patterns from a larger pool of unlabeled examples. Based on data extracted from Greek firms, a number of comparisons between supervised and semi-supervised algorithms has been conducted. According to the produced results, the later algorithms are favored being examined over several scenarios of different Labeled Ratio (R) values.", "title": "" }, { "docid": "9ae0078ef9dcc3bccca9efd87ac43f26", "text": "Delusions are the false and often incorrigible beliefs that can cause severe suffering in mental illness. We cannot yet explain them in terms of underlying neurobiological abnormalities. However, by drawing on recent advances in the biological, computational and psychological processes of reinforcement learning, memory, and perception it may be feasible to account for delusions in terms of cognition and brain function. The account focuses on a particular parameter, prediction error--the mismatch between expectation and experience--that provides a computational mechanism common to cortical hierarchies, fronto-striatal circuits and the amygdala as well as parietal cortices. 
We suggest that delusions result from aberrations in how brain circuits specify hierarchical predictions, and how they compute and respond to prediction errors. Defects in these fundamental brain mechanisms can vitiate perception, memory, bodily agency and social learning such that individuals with delusions experience an internal and external world that healthy individuals would find difficult to comprehend. The present model attempts to provide a framework through which we can build a mechanistic and translational understanding of these puzzling symptoms.", "title": "" }, { "docid": "0e521af53f9faf4fee38843a22ec2185", "text": "Steering of main beam of radiation at fixed millimeter wave frequency in a Substrate Integrated Waveguide (SIW) Leaky Wave Antenna (LWA) has not been investigated so far in literature. In this paper a Half-Mode Substrate Integrated Waveguide (HMSIW) LWA is proposed which has the capability to steer its main beam at fixed millimeter wave frequency of 24GHz. Beam steering is made feasible by changing the capacitance of the capacitors, connected at the dielectric side of HMSIW. The full wave EM simulations show that the main beam scans from 36° to 57° in the first quadrant.", "title": "" }, { "docid": "4a52f4c8f08cefac9d81296dbb853d6e", "text": "Echo chambers, i.e., situations where one is exposed only to opinions that agree with their own, are an increasing concern for the political discourse in many democratic countries. This paper studies the phenomenon of political echo chambers on social media. We identify the two components in the phenomenon: the opinion that is shared (“echo”), and the place that allows its exposure (“chamber” — the social network), and examine closely how these two components interact. We define a production and consumption measure for social-media users, which captures the political leaning of the content shared and received by them. By comparing the two, we find that Twitter users are, to a large degree, exposed to political opinions that agree with their own. We also find that users who try to bridge the echo chambers, by sharing content with diverse leaning, have to pay a “price of bipartisanship” in terms of their network centrality and content appreciation. In addition, we study the role of “gatekeepers,” users who consume content with diverse leaning but produce partisan content (with a single-sided leaning), in the formation of echo chambers. Finally, we apply these findings to the task of predicting partisans and gatekeepers from social and content features. While partisan users turn out relatively easy to identify, gatekeepers prove to be more challenging. ACM Reference format: Kiran Garimella, Gianmarco De Francisci Morales, Aristides Gionis, and Michael Mathioudakis. 2018. Political Discourse on Social Media: Echo Chambers, Gatekeepers, and the Price of Bipartisanship. In Proceedings of WWW ’18, Lyon, France, April 23–27, 2018, 10 pages. DOI: 10.1145/nnnnnnn.nnnnnnn", "title": "" }, { "docid": "80d4f6a622edea6530ffc7e29590af74", "text": "Data protection is the process of backing up data in case of a data loss event. It is one of the most critical routine activities for every organization. Detecting abnormal backup jobs is important to prevent data protection failures and ensure the service quality. 
Given the large scale backup endpoints and the variety of backup jobs, from a backup-as-a-service provider viewpoint, we need a scalable and flexible outlier detection method that can model a huge number of objects and well capture their diverse patterns. In this paper, we introduce H2O, a novel hybrid and hierarchical method to detect outliers from millions of backup jobs for large scale data protection. Our method automatically selects an ensemble of outlier detection models for each multivariate time series composed by the backup metrics collected for each backup endpoint by learning their exhibited characteristics. Interactions among multiple variables are considered to better detect true outliers and reduce false positives. In particular, a new seasonal-trend decomposition based outlier detection method is developed, considering the interactions among variables in the form of common trends, which is robust to the presence of outliers in the training data. The model selection process is hierarchical, following a global to local fashion. The final outlier is determined through an ensemble learning by multiple models. Built on top of Apache Spark, H2O has been deployed to detect outliers in a large and complex data protection environment with more than 600,000 backup endpoints and 3 million daily backup jobs. To the best of our knowledge, this is the first work that selects and constructs large scale outlier detection models for multivariate time series on big data platforms.", "title": "" }, { "docid": "508ffcdbc7d059ad8b7ee64d562d14b5", "text": "A young manager faces an impasse in his career. He goes to see his mentor at the company, who closes the office door, offers the young man a chair, recounts a few war stories, and serves up a few specific pointers about the problem at hand. Then, just as the young manager is getting up to leave, the elder executive adds one small kernel of avuncular wisdom--which the junior manager carries with him through the rest of his career. Such is the nature of business advice. Or is it? The six essays in this article suggest otherwise. Few of the leaders who tell their stories here got their best advice in stereotypical form, as an aphorism or a platitude. For Ogilvy & Mather chief Shelly Lazarus, profound insight came from a remark aimed at relieving the tension of the moment. For Novartis CEO Daniel Vasella, it was an apt comment, made on a snowy day, back when he was a medical resident. For publishing magnate Earl Graves and Starwood Hotels' Barry Sternlicht, advice they received about trust from early bosses took on ever deeper and more practical meaning as their careers progressed. For Goldman Sachs chairman Henry Paulson, Jr., it was as much his father's example as it was a specific piece of advice his father handed down to him. And fashion designer Liz Lange rejects the very notion that there's inherent wisdom in accepting other people's advice. As these stories demonstrate, people find wisdom when they least expect to, and they never really know what piece of advice will transcend the moment, profoundly affecting how they later make decisions, evaluate people, and examine--and reexamine--their own actions.", "title": "" }, { "docid": "36347412c7d30ae6fde3742bbc4f21b9", "text": "iii", "title": "" }, { "docid": "a3fe3b92fe53109888b26bb03c200180", "text": "Using Artificial Neural Networks (ANNs) 
in critical applications can be challenging due to the often experimental nature of ANN construction and the \"black box\" label that is frequently attached to ANNs. Well-accepted process models exist for algorithmic software development which facilitate software validation and acceptance. The software development process model presented herein is targeted specifically toward artificial neural networks in critical applications. The model is not unwieldy, and could easily be used on projects without critical aspects. This should be of particular interest to organizations that use ANNs and need to maintain or achieve a Capability Maturity Model (CMM) or ISO software development rating. Further, while this model is aimed directly at neural network development, with minor modifications, the model could be applied to any technique wherein knowledge is extracted from existing data, such as other numeric approaches or knowledge-based systems.", "title": "" }, { "docid": "260b39661df5cb7ddb9c4cf7ab8a36ba", "text": "Deblurring camera-based document image is an important task in digital document processing, since it can improve both the accuracy of optical character recognition systems and the visual quality of document images. Traditional deblurring algorithms have been proposed to work for natural-scene images. However the natural-scene images are not consistent with document images. In this paper, the distinct characteristics of document images are investigated. We propose a content-aware prior for document image deblurring. It is based on document image foreground segmentation. Besides, an upper-bound constraint combined with total variation based method is proposed to suppress the rings in the deblurred image. Comparing with the traditional general purpose deblurring methods, the proposed deblurring algorithm can produce more pleasing results on document images. Encouraging experimental results demonstrate the efficacy of the proposed method.", "title": "" }, { "docid": "1448b02c9c14e086a438d76afa1b2fde", "text": "This paper analyzes the classification of hyperspectral remote sensing images with linear discriminant analysis (LDA) in the presence of a small ratio between the number of training samples and the number of spectral features. In these particular ill-posed problems, a reliable LDA requires one to introduce regularization for problem solving. Nonetheless, in such a challenging scenario, the resulting regularized LDA (RLDA) is highly sensitive to the tuning of the regularization parameter. In this context, we introduce in the remote sensing community an efficient version of the RLDA recently presented by Ye to cope with critical ill-posed problems. In addition, several LDA-based classifiers (i.e., penalized LDA, orthogonal LDA, and uncorrelated LDA) are compared theoretically and experimentally with the standard LDA and the RLDA. Method differences are highlighted through toy examples and are exhaustively tested on several ill-posed problems related to the classification of hyperspectral remote sensing images. Experimental results confirm the effectiveness of the presented RLDA technique and point out the main properties of other analyzed LDA techniques in critical ill-posed hyperspectral image classification problems.", "title": "" }, { "docid": "d84179bb22103150f3eae95e6ea7b3ab", "text": "Profile hidden Markov models (profile HMMs) and probabilistic inference methods have made important contributions to the theory of sequence database homology search. 
However, practical use of profile HMM methods has been hindered by the computational expense of existing software implementations. Here I describe an acceleration heuristic for profile HMMs, the \"multiple segment Viterbi\" (MSV) algorithm. The MSV algorithm computes an optimal sum of multiple ungapped local alignment segments using a striped vector-parallel approach previously described for fast Smith/Waterman alignment. MSV scores follow the same statistical distribution as gapped optimal local alignment scores, allowing rapid evaluation of significance of an MSV score and thus facilitating its use as a heuristic filter. I also describe a 20-fold acceleration of the standard profile HMM Forward/Backward algorithms using a method I call \"sparse rescaling\". These methods are assembled in a pipeline in which high-scoring MSV hits are passed on for reanalysis with the full HMM Forward/Backward algorithm. This accelerated pipeline is implemented in the freely available HMMER3 software package. Performance benchmarks show that the use of the heuristic MSV filter sacrifices negligible sensitivity compared to unaccelerated profile HMM searches. HMMER3 is substantially more sensitive and 100- to 1000-fold faster than HMMER2. HMMER3 is now about as fast as BLAST for protein searches.", "title": "" }, { "docid": "3af28edbed06ef6db9fdb27a73e784de", "text": "The study aimed to investigate factors influencing older adults' physical activity engagement over time. The authors analyzed 3 waves of data from a sample of Israelis age 75-94 (Wave 1 n = 1,369, Wave 2 n = 687, Wave 3 n = 154). Findings indicated that physical activity engagement declined longitudinally. Logistic regressions showed that female gender, older age, and taking more medications were significant risk factors for stopping exercise at Wave 2 in those physically active at Wave 1. In addition, higher functional and cognitive status predicted initiating exercise at Wave 2 in those who did not exercise at Wave 1. By clarifying the influence of personal characteristics on physical activity engagement in the Israeli old-old, this study sets the stage for future investigation and intervention, stressing the importance of targeting at-risk populations, accommodating risk factors, and addressing both the initiation and the maintenance of exercise in the face of barriers.", "title": "" }, { "docid": "a25fa0c0889b62b70bf95c16f9966cc4", "text": "We deal with the problem of document representation for the task of measuring semantic relatedness between documents. A document is represented as a compact concept graph where nodes represent concepts extracted from the document through references to entities in a knowledge base such as DBpedia. Edges represent the semantic and structural relationships among the concepts. Several methods are presented to measure the strength of those relationships. Concepts are weighted through the concept graph using closeness centrality measure which reflects their relevance to the aspects of the document. A novel similarity measure between two concept graphs is presented. The similarity measure first represents concepts as continuous vectors by means of neural networks. Second, the continuous vectors are used to accumulate pairwise similarity between pairs of concepts while considering their assigned weights. We evaluate our method on a standard benchmark for document similarity. 
Our method outperforms state-of-the-art methods including ESA (Explicit Semantic Annotation) while our concept graphs are much smaller than the concept vectors generated by ESA. Moreover, we show that by combining our concept graph with ESA, we obtain an even further improvement.", "title": "" }, { "docid": "9d2b3aaf57e31a2c0aa517d642f39506", "text": "3.1. URINARY TRACT INFECTION Urinary tract infection is one of the important causes of morbidity and mortality in Indian population, affecting all age groups across the life span. Anatomically, urinary tract is divided into an upper portion composed of kidneys, renal pelvis, and ureters and a lower portion made up of urinary bladder and urethra. UTI is an inflammatory response of the urothelium to bacterial invasion that is usually associated with bacteriuria and pyuria. UTI may involve only the lower urinary tract or both the upper and lower tract [19].", "title": "" } ]
scidocsrr
9cd4fddb361734c215782018b8b9a529
Video games and prosocial behavior: A study of the effects of non-violent, violent and ultra-violent gameplay
[ { "docid": "b117e0e32d754f59c7d3eacdc609f63b", "text": "Mass media campaigns are widely used to expose high proportions of large populations to messages through routine uses of existing media, such as television, radio, and newspapers. Exposure to such messages is, therefore, generally passive. Such campaigns are frequently competing with factors, such as pervasive product marketing, powerful social norms, and behaviours driven by addiction or habit. In this Review we discuss the outcomes of mass media campaigns in the context of various health-risk behaviours (eg, use of tobacco, alcohol, and other drugs, heart disease risk factors, sex-related behaviours, road safety, cancer screening and prevention, child survival, and organ or blood donation). We conclude that mass media campaigns can produce positive changes or prevent negative changes in health-related behaviours across large populations. We assess what contributes to these outcomes, such as concurrent availability of required services and products, availability of community-based programmes, and policies that support behaviour change. Finally, we propose areas for improvement, such as investment in longer better-funded campaigns to achieve adequate population exposure to media messages.", "title": "" }, { "docid": "6fb168b933074250236980742e33f064", "text": "Recent research reveals that playing prosocial video games increases prosocial cognitions, positive affect, and helpful behaviors [Gentile et al., 2009; Greitemeyer and Osswald, 2009, 2010, 2011]. These results are consistent with the social-cognitive models of social behavior such as the general learning model [Buckley and Anderson, 2006]. However, no experimental studies have examined such effects on children. Previous research on violent video games suggests that short-term effects of video games are largely based on priming of existing behavioral scripts. Thus, it is unclear whether younger children will show similar effects. This research had 9-14 years olds play a prosocial, neutral, or violent video game, and assessed helpful and hurtful behaviors simultaneously through a new tangram measure. Prosocial games increased helpful and decreased hurtful behavior, whereas violent games had the opposite effects.", "title": "" } ]
[ { "docid": "162f46d8f789e39423b8cc80cae2461c", "text": "Various key-value (KV) stores are widely employed for data management to support Internet services as they offer higher efficiency, scalability, and availability than relational database systems. The log-structured merge tree (LSM-tree) based KV stores have attracted growing attention because they can eliminate random writes and maintain acceptable read performance. Recently, as the price per unit capacity of NAND flash decreases, solid state disks (SSDs) have been extensively adopted in enterprise-scale data centers to provide high I/O bandwidth and low access latency. However, it is inefficient to naively combine LSM-tree-based KV stores with SSDs, as the high parallelism enabled within the SSD cannot be fully exploited. Current LSM-tree-based KV stores are designed without assuming SSD's multi-channel architecture.\n To address this inadequacy, we propose LOCS, a system equipped with a customized SSD design, which exposes its internal flash channels to applications, to work with the LSM-tree-based KV store, specifically LevelDB in this work. We extend LevelDB to explicitly leverage the multiple channels of an SSD to exploit its abundant parallelism. In addition, we optimize scheduling and dispatching polices for concurrent I/O requests to further improve the efficiency of data access. Compared with the scenario where a stock LevelDB runs on a conventional SSD, the throughput of storage system can be improved by more than 4X after applying all proposed optimization techniques.", "title": "" }, { "docid": "f9c56d14c916bff37ab69bd949c30b04", "text": "We have examined 365 versions of Linux. For every versio n, we counted the number of instances of common (global) coupling between each of the 17 kernel modules and all the other modules in that version of Linux. We found that the num ber of instances of common coupling grows exponentially with version number. This result is significant at the 99.99% level, and no additional variables are needed to explain this increase. On the other hand, the number of lines of code in each kernel modules grows only linearly with v ersion number. We conclude that, unless Linux is restructured with a bare minimum of common c upling, the dependencies induced by common coupling will, at some future date, make Linu x exceedingly hard to maintain without inducing regression faults.", "title": "" }, { "docid": "dda120b6a1e76b0920f831325e9529da", "text": "This paper describes a practical and systematic procedure for modeling and identifying the flight dynamics of small, low-cost, fixed-wing uninhabited aerial vehicles (UAVs). The procedure is applied to the Ultra Stick 25e flight test vehicle of the University of Minnesota UAV flight control research group. The procedure hinges on a general model structure for fixed-wing UAV flight dynamics derived using first principles analysis. Wind tunnel tests and simplifying assumptions are applied to populate the model structure with an approximation of the Ultra Stick 25e flight dynamics. This baseline model is used to design informative flight experiments for the subsequent frequency domain system identification. The final identified model is validated against separately acquired time domain flight data.", "title": "" }, { "docid": "aea440261647e7e7d9880c0929c04f0d", "text": "This paper deals with parking space detection by using ultrasonic sensor. Using the multiple echo function, the accuracy of edge detection was increased. 
After inspecting effect on the multiple echo function in indoor experiment, we applied to 11 types of vehicles in real parking environment and made experiments on edge detection with various values of resolution. We can scan parking space more accurately in real parking environment. We propose the diagonal sensor to get information about the side of parking space. Our proposed method has benefit calculation and implementation is very simple.", "title": "" }, { "docid": "e74a57805aef21974b263b65f5d4b67a", "text": "Status epilepticus (SE) may cause death or severe sequelae unless seizures are terminated promptly. Various types of SE exist, and treatment should be adjusted to the specific type. Yet some basic guiding principles are broadly applicable: (1) early treatment is most effective, (2) benzodiazepines are the best first line agents, (3) electroencephalography should be used to confirm the termination of seizures in patients who are not alert and to monitor therapy in refractory cases, and (4) close attention to the appearance of systemic complications (from the SE per se or from the medications used to treat it) is essential. This article expands on these principles and summarizes current knowledge on the definition, classification, diagnosis, and treatment of SE.", "title": "" }, { "docid": "3fd8092faee792a316fb3d1d7c2b6244", "text": "The complete dynamics model of a four-Mecanum-wheeled robot considering mass eccentricity and friction uncertainty is derived using the Lagrange’s equation. Then based on the dynamics model, a nonlinear stable adaptive control law is derived using the backstepping method via Lyapunov stability theory. In order to compensate for the model uncertainty, a nonlinear damping term is included in the control law, and the parameter update law with σ-modification is considered for the uncertainty estimation. Computer simulations are conducted to illustrate the suggested control approach.", "title": "" }, { "docid": "6557347e1c0ebf014842c9ae2c77dbed", "text": "----------------------------------------------------------------------ABSTRACT-------------------------------------------------------------Steganography is derived from the Greek word steganos which literally means “Covered” and graphy means “Writing”, i.e. covered writing. Steganography refers to the science of “invisible” communication. For hiding secret information in various file formats, there exists a large variety of steganographic techniques some are more complex than others and all of them have respective strong and weak points. The Least Significant Bit (LSB) embedding technique suggests that data can be hidden in the least significant bits of the cover image and the human eye would be unable to notice the hidden image in the cover file. This technique can be used for hiding images in 24-Bit, 8-Bit, Gray scale format. This paper explains the LSB Embedding technique and Presents the evaluation for various file formats.", "title": "" }, { "docid": "f68f82e0d7f165557433580ad1e3e066", "text": "Four experiments demonstrate effects of prosodic structure on speech production latencies. Experiments 1 to 3 exploit a modified version of the Sternberg et al. (1978, 1980) prepared speech production paradigm to look for evidence of the generation of prosodic structure during the final stages of sentence production. 
Experiment 1 provides evidence that prepared sentence production latency is a function of the number of phonological words that a sentence comprises when syntactic structure, number of lexical items, and number of syllables are held constant. Experiment 2 demonstrated that production latencies in Experiment 1 were indeed determined by prosodic structure rather than the number of content words that a sentence comprised. The phonological word effect was replicated in Experiment 3 using utterances with a different intonation pattern and phrasal structure. Finally, in Experiment 4, an on-line version of the sentence production task provides evidence for the phonological word as the preferred unit of articulation during the on-line production of continuous speech. Our findings are consistent with the hypothesis that the phonological word is a unit of processing during the phonological encoding of connected speech. q 1997 Academic Press", "title": "" }, { "docid": "c26919afa32708786ae7f96b88883ed9", "text": "A Privacy Enhancement Technology (PET) is an application or a mechanism which allows users to protect the privacy of their personally identifiable information. Early PETs were about enabling anonymous mailing and anonymous browsing, but lately there have been active research and development efforts in many other problem domains. This paper describes the first pattern language for developing privacy enhancement technologies. Currently, it contains 12 patterns. These privacy patterns are not limited to a specific problem domain; they can be applied to design anonymity systems for various types of online communication, online data sharing, location monitoring, voting and electronic cash management. The pattern language guides a developer when he or she is designing a PET for an existing problem, or innovating a solution for a new problem.", "title": "" }, { "docid": "b829049a8abf47f8f13595ca54eaa009", "text": "This paper describes a face recognition-based people tracking and re-identification system for RGB-D camera networks. The system tracks people and learns their faces online to keep track of their identities even if they move out from the camera's field of view once. For robust people re-identification, the system exploits the combination of a deep neural network- based face representation and a Bayesian inference-based face classification method. The system also provides a predefined people identification capability: it associates the online learned faces with predefined people face images and names to know the people's whereabouts, thus, allowing a rich human-system interaction. Through experiments, we validate the re-identification and the predefined people identification capabilities of the system and show an example of the integration of the system with a mobile robot. The overall system is built as a Robot Operating System (ROS) module. As a result, it simplifies the integration with the many existing robotic systems and algorithms which use such middleware. The code of this work has been released as open-source in order to provide a baseline for the future publications in this field.", "title": "" }, { "docid": "31abfd6e4f6d9e56bc134ffd7c7b7ffc", "text": "Online social networks like Facebook recommend new friends to users based on an explicit social network that users build by adding each other as friends. The majority of earlier work in link prediction infers new interactions between users by mainly focusing on a single network type. 
However, users also form several implicit social networks through their daily interactions, such as commenting on people’s posts or rating the same products similarly. Prior work primarily exploited both explicit and implicit social networks to tackle the group/item recommendation problem that recommends to users groups to join or items to buy. In this paper, we show that auxiliary information from the user-item network fruitfully combines with the friendship network to enhance friend recommendations. We transform the well-known Katz algorithm to utilize a multi-modal network and provide friend recommendations. We experimentally show that the proposed method is more accurate in recommending friends when compared with two single-source path-based algorithms using both synthetic and real data sets.", "title": "" }, { "docid": "2d54a447df50a31c6731e513bfbac06b", "text": "Lumbar intervertebral disc diseases are among the main causes of lower back pain (LBP). Desiccation is a common disease resulting from various reasons and ultimately most people are affected by desiccation at some age. We propose a probabilistic model that incorporates intervertebral disc appearance and contextual information for automating the diagnosis of lumbar disc desiccation. We utilize a Gibbs distribution for processing localized lumbar intervertebral discs' appearance and contextual information. We use 55 clinical T2-weighted MRI for lumbar area and achieve over 96% accuracy on a cross validation experiment.", "title": "" }, { "docid": "4d0e3b6681c45d6cc89ddc98fb6d447a", "text": "Voxel-based modeling techniques are known for their robustness and flexibility. However, they have three major shortcomings: (1) Memory intensive, since a large number of voxels are needed to represent high-resolution models (2) Computationally expensive, since a large number of voxels need to be visited (3) Computationally expensive isosurface extraction is needed to visualize the results. We describe techniques which alleviate these by taking advantage of self-similarity in the data making voxel-techniques practical and attractive. We describe algorithms for MEMS process emulation, isosurface extraction and visualization which utilize these techniques.", "title": "" }, { "docid": "c00470d69400066d11374539052f4a86", "text": "When individuals learn facts (e.g., foreign language vocabulary) over multiple study sessions, the temporal spacing of study has a significant impact on memory retention. Behavioral experiments have shown a nonmonotonic relationship between spacing and retention: short or long intervals between study sessions yield lower cued-recall accuracy than intermediate intervals. Appropriate spacing of study can double retention on educationally relevant time scales. We introduce a Multiscale Context Model (MCM) that is able to predict the influence of a particular study schedule on retention for specific material. MCM’s prediction is based on empirical data characterizing forgetting of the material following a single study session. MCM is a synthesis of two existing memory models (Staddon, Chelaru, & Higa, 2002; Raaijmakers, 2003). On the surface, these models are unrelated and incompatible, but we show they share a core feature that allows them to be integrated. MCM can determine study schedules that maximize the durability of learning, and has implications for education and training. MCM can be cast either as a neural network with inputs that fluctuate over time, or as a cascade of leaky integrators. 
MCM is intriguingly similar to a Bayesian multiscale model of memory (Kording, Tenenbaum, & Shadmehr, 2007), yet MCM is better able to account for human declarative memory.", "title": "" }, { "docid": "6d262139067d030c3ebb1169e93c6422", "text": "In this paper, we present a study on learning visual recognition models from large scale noisy web data. We build a new database called WebVision, which contains more than 2.4 million web images crawled from the Internet by using queries generated from the 1, 000 semantic concepts of the ILSVRC 2012 benchmark. Meta information along with those web images (e.g., title, description, tags, etc.) are also crawled. A validation set and test set containing human annotated images are also provided to facilitate algorithmic development. Based on our new database, we obtain a few interesting observations: 1) the noisy web images are sufficient for training a good deep CNN model for visual recognition; 2) the model learnt from our WebVision database exhibits comparable or even better generalization ability than the one trained from the ILSVRC 2012 dataset when being transferred to new datasets and tasks; 3) a domain adaptation issue (a.k.a., dataset bias) is observed, which means the dataset can be used as the largest benchmark dataset for visual domain adaptation. Our new WebVision database and relevant studies in this work would benefit the advance of learning state-of-the-art visual models with minimum supervision based on web data.", "title": "" }, { "docid": "cdca4a6cb35cbc674c06465c742dfe50", "text": "The generation of new lymphatic vessels through lymphangiogenesis and the remodelling of existing lymphatics are thought to be important steps in cancer metastasis. The past decade has been exciting in terms of research into the molecular and cellular biology of lymphatic vessels in cancer, and it has been shown that the molecular control of tumour lymphangiogenesis has similarities to that of tumour angiogenesis. Nevertheless, there are significant mechanistic differences between these biological processes. We are now developing a greater understanding of the specific roles of distinct lymphatic vessel subtypes in cancer, and this provides opportunities to improve diagnostic and therapeutic approaches that aim to restrict the progression of cancer.", "title": "" }, { "docid": "6d9393c95ca9c6534c98c0d0a4451fbc", "text": "The recent work of Clark et al. (2018) introduces the AI2 Reasoning Challenge (ARC) and the associated ARC dataset that partitions open domain, complex science questions into an Easy Set and a Challenge Set. That paper includes an analysis of 100 questions with respect to the types of knowledge and reasoning required to answer them; however, it does not include clear definitions of these types, nor does it offer information about the quality of the labels. We propose a comprehensive set of definitions of knowledge and reasoning types necessary for answering the questions in the ARC dataset. Using ten annotators and a sophisticated annotation interface, we analyze the distribution of labels across the Challenge Set and statistics related to them. Additionally, we demonstrate that although naive information retrieval methods return sentences that are irrelevant to answering the query, sufficient supporting text is often present in the (ARC) corpus. 
Evaluating with human-selected relevant sentences improves the performance of a neural machine comprehension model by 42 points.", "title": "" }, { "docid": "db4c2238363a173ba1c1e28da809d567", "text": "In most applications of Ground Penetrating Radar (GPR), it is very important to combine the radar with an accurate positioning system. This allows solving errors in the localisation of buried objects, which may be generated by measurement conditions such as the soil slope, in the case of a ground-coupled GPR, and the aerial vehicle altitude, in the case of a GPR mounted on a drone or helicopter. This paper presents the implementation of a low-cost system for positioning, tracking and trimming of GPR data. The proposed system integrates Global Positioning System (GPS) data with those of an Inertial Measurement Unit (IMU). So far, the electronic board including GPS and IMU was designed, developed and tested in the laboratory. As a next step, GPR results will be collected in outdoor scenarios of practical interest and the accuracy of data measured by using our positioning system will be compared to the accuracy of data measured without using it.", "title": "" }, { "docid": "e56abb473e262fec3c0260202564be0a", "text": "This paper presents and analyzes an annotated corpus of definitions, created to train an algorithm for the automatic extraction of definitions and hypernyms from Web documents. As an additional resource, we also include a corpus of non-definitions with syntactic patterns similar to those of definition sentences, e.g.: “An android is a robot” vs. “Snowcap is unmistakable”. Domain and style independence is obtained thanks to the annotation of a sample of the Wikipedia corpus and to a novel pattern generalization algorithm based on wordclass lattices (WCL). A lattice is a directed acyclic graph (DAG), a subclass of nondeterministic finite state automata (NFA). The lattice structure has the purpose of preserving the salient differences among distinct sequences, while eliminating redundant information. The WCL algorithm will be integrated into an improved version of the GlossExtractor Web application (Velardi et al., 2008). This paper is mostly concerned with a description of the corpus, the annotation strategy, and a linguistic analysis of the data. A summary of the WCL algorithm is also provided for the sake of completeness.", "title": "" }, { "docid": "985df151ccbc9bf47b05cffde47a6342", "text": "This paper establishes the criteria to ensure stable operation of two-stage, bidirectional, isolated AC-DC converters. The bi-directional converter is analyzed in the context of a building block module (BBM) that enables a fully modular architecture for universal power flow conversion applications (AC-DC, DC-AC and DC-DC). The BBM consists of independently controlled AC-DC and isolated DC-DC converters that are cascaded for bidirectional power flow applications. The cascaded converters have different control objectives in different directions of power flow. This paper discusses methods to obtain the appropriate input and output impedances that determine stability in the context of bi-directional AC-DC power conversion. Design procedures to ensure stable operation with minimal interaction between the cascaded stages are presented. The analysis and design methods are validated through extensive simulation and hardware results.", "title": "" } ]
scidocsrr
a51b226da1008a52c9ad1870f0497e60
UiLog: Improving Log-Based Fault Diagnosis by Log Analysis
[ { "docid": "4dc9360837b5793a7c322f5b549fdeb1", "text": "Today, event logs contain vast amounts of data that can easily overwhelm a human. Therefore, mining patterns from event logs is an important system management task. This paper presents a novel clustering algorithm for log file data sets which helps one to detect frequent patterns from log files, to build log file profiles, and to identify anomalous log file lines. Keywords—system monitoring, data mining, data clustering", "title": "" } ]
[ { "docid": "f333bc03686cf85aee0a65d4a81e8b34", "text": "A large portion of data mining and analytic services use modern machine learning techniques, such as deep learning. The state-of-the-art results by deep learning come at the price of an intensive use of computing resources. The leading frameworks (e.g., TensorFlow) are executed on GPUs or on high-end servers in datacenters. On the other end, there is a proliferation of personal devices with possibly free CPU cycles; this can enable services to run in users' homes, embedding machine learning operations. In this paper, we ask the following question: Is distributed deep learning computation on WAN connected devices feasible, in spite of the traffic caused by learning tasks? We show that such a setup rises some important challenges, most notably the ingress traffic that the servers hosting the up-to-date model have to sustain. In order to reduce this stress, we propose AdaComp, a novel algorithm for compressing worker updates to the model on the server. Applicable to stochastic gradient descent based approaches, it combines efficient gradient selection and learning rate modulation. We then experiment and measure the impact of compression, device heterogeneity and reliability on the accuracy of learned models, with an emulator platform that embeds TensorFlow into Linux containers. We report a reduction of the total amount of data sent by workers to the server by two order of magnitude (e.g., 191-fold reduction for a convolutional network on the MNIST dataset), when compared to a standard asynchronous stochastic gradient descent, while preserving model accuracy.", "title": "" }, { "docid": "5e0898aa58d092a1f3d64b37af8cf838", "text": "In this paper, we design a Deep Dual-Domain (D3) based fast restoration model to remove artifacts of JPEG compressed images. It leverages the large learning capacity of deep networks, as well as the problem-specific expertise that was hardly incorporated in the past design of deep architectures. For the latter, we take into consideration both the prior knowledge of the JPEG compression scheme, and the successful practice of the sparsity-based dual-domain approach. We further design the One-Step Sparse Inference (1-SI) module, as an efficient and lightweighted feed-forward approximation of sparse coding. Extensive experiments verify the superiority of the proposed D3 model over several state-of-the-art methods. Specifically, our best model is capable of outperforming the latest deep model for around 1 dB in PSNR, and is 30 times faster.", "title": "" }, { "docid": "645faf32f40732d291e604d7240f0546", "text": "Fault Diagnostics and Prognostics has been an increasing interest in recent years, as a result of the increased degree of automation and the growing demand for higher performance, efficiency, reliability and safety in industrial systems. On-line fault detection and isolation methods have been developed for automated processes. These methods include data mining methodologies, artificial intelligence methodologies or combinations of the two. Data Mining is the statistical approach of extracting knowledge from data. Artificial Intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs. Activities in AI include searching, recognizing patterns and making logical inferences. 
This paper focuses on the various techniques used for Fault Diagnostics and Prognostics in Industry application domains.", "title": "" }, { "docid": "d4a4c4a1d933488ab686097e18b4373a", "text": "Psychological stress is an important factor for the development of irritable bowel syndrome (IBS). More and more clinical and experimental evidence showed that IBS is a combination of irritable bowel and irritable brain. In the present review we discuss the potential role of psychological stress in the pathogenesis of IBS and provide comprehensive approaches in clinical treatment. Evidence from clinical and experimental studies showed that psychological stresses have marked impact on intestinal sensitivity, motility, secretion and permeability, and the underlying mechanism has a close correlation with mucosal immune activation, alterations in central nervous system, peripheral neurons and gastrointestinal microbiota. Stress-induced alterations in neuro-endocrine-immune pathways acts on the gut-brain axis and microbiota-gut-brain axis, and cause symptom flare-ups or exaggeration in IBS. IBS is a stress-sensitive disorder, therefore, the treatment of IBS should focus on managing stress and stress-induced responses. Now, non-pharmacological approaches and pharmacological strategies that target on stress-related alterations, such as antidepressants, antipsychotics, miscellaneous agents, 5-HT synthesis inhibitors, selective 5-HT reuptake inhibitors, and specific 5-HT receptor antagonists or agonists have shown a critical role in IBS management. A integrative approach for IBS management is a necessary.", "title": "" }, { "docid": "5cd6debed0333d480aeafe406f526d2b", "text": "In the coming advanced age society, an innovative technology to assist the activities of daily living of elderly and disabled people and the heavy work in nursing is desired. To develop such a technology, an actuator safe and friendly for human is required. It should be small, lightweight and has to provide a proper softness. A pneumatic rubber artificial muscle is available as such actuators. We have developed some types of pneumatic rubber artificial muscles and applied them to wearable power assist devices. A wearable power assist device is equipped to the human body to assist the muscular force, which supports activities of daily living, rehabilitation, heavy working, training and so on. In this paper, some types of pneumatic rubber artificial muscles developed in our laboratory are introduced. Further, two kinds of wearable power assist devices driven with the rubber artificial muscles are described. Some evaluations can clarify the effectiveness of pneumatic rubber artificial muscle for such an innovative human assist technology.", "title": "" }, { "docid": "79cdd24d14816f45b539f31606a3d5ee", "text": "The huge increase in type 2 diabetes is a burden worldwide. Many marketed compounds do not address relevant aspects of the disease; they may already compensate for defects in insulin secretion and insulin action, but loss of secreting cells (β-cell destruction), hyperglucagonemia, gastric emptying, enzyme activation/inhibition in insulin-sensitive cells, substitution or antagonizing of physiological hormones and pathways, finally leading to secondary complications of diabetes, are not sufficiently addressed. In addition, side effects for established therapies such as hypoglycemias and weight gain have to be diminished. 
At present, nearly 1000 compounds have been described, and approximately 180 of these are going to be developed (already in clinical studies), some of them directly influencing enzyme activity, influencing pathophysiological pathways, and some using G-protein-coupled receptors. In addition, immunological approaches and antisense strategies are going to be developed. Many compounds are derived from physiological compounds (hormones) aiming at improving their kinetics and selectivity, and others are chemical compounds that were obtained by screening for a newly identified target in the physiological or pathophysiological machinery. In some areas, great progress is observed (e.g., incretin area); in others, no great progress is obvious (e.g., glucokinase activators), and other areas are not recommended for further research. For all scientific areas, conclusions with respect to their impact on diabetes are given. Potential targets for which no chemical compound has yet been identified as a ligand (agonist or antagonist) are also described.", "title": "" }, { "docid": "da694b74b3eaae46d15f589e1abef4b8", "text": "Impaired water quality caused by human activity and the spread of invasive plant and animal species has been identified as a major factor of degradation of coastal ecosystems in the tropics. The main goal of this study was to evaluate the performance of AnnAGNPS (Annualized NonPoint Source Pollution Model), in simulating runoff and soil erosion in a 48 km watershed located on the Island of Kauai, Hawaii. The model was calibrated and validated using 2 years of observed stream flow and sediment load data. Alternative scenarios of spatial rainfall distribution and canopy interception were evaluated. Monthly runoff volumes predicted by AnnAGNPS compared well with the measured data (R 1⁄4 0.90, P < 0.05); however, up to 60% difference between the actual and simulated runoff were observed during the driest months (May and July). Prediction of daily runoff was less accurate (R 1⁄4 0.55, P < 0.05). Predicted and observed sediment yield on a daily basis was poorly correlated (R 1⁄4 0.5, P < 0.05). For the events of small magnitude, the model generally overestimated sediment yield, while the opposite was true for larger events. Total monthly sediment yield varied within 50% of the observed values, except for May 2004. Among the input parameters the model was most sensitive to the values of ground residue cover and canopy cover. It was found that approximately one third of the watershed area had low sediment yield (0e1 t ha 1 y ), and presented limited erosion threat. However, 5% of the area had sediment yields in excess of 5 t ha 1 y . Overall, the model performed reasonably well, and it can be used as a management tool on tropical watersheds to estimate and compare sediment loads, and identify ‘‘hot spots’’ on the landscape. 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "7c98ac06ea8cb9b83673a9c300fb6f4c", "text": "Heart rate monitoring from wrist-type photoplethysmographic (PPG) signals during subjects' intensive exercise is a difficult problem, since the PPG signals are contaminated by extremely strong motion artifacts caused by subjects' hand movements. In this work, we formulate the heart rate estimation problem as a sparse signal recovery problem, and use a sparse signal recovery algorithm to calculate high-resolution power spectra of PPG signals, from which heart rates are estimated by selecting corresponding spectrum peaks. 
To facilitate the use of sparse signal recovery, we propose using bandpass filtering, singular spectrum analysis, and temporal difference operation to partially remove motion artifacts and sparsify PPG spectra. The proposed method was tested on PPG recordings from 10 subjects who were fast running at the peak speed of 15km/hour. The results showed that the averaged absolute estimation error was only 2.56 Beats/Minute, or 1.94% error compared to ground-truth heart rates from simultaneously recorded ECG.", "title": "" }, { "docid": "302079b366d2bc0c951e3c7d8eb30815", "text": "The rapid traffic growth and ubiquitous access requirements make it essential to explore the next generation (5G) wireless communication networks. In the current 5G research area, non-orthogonal multiple access has been proposed as a paradigm shift of physical layer technologies. Among all the existing non-orthogonal technologies, the recently proposed sparse code multiple access (SCMA) scheme is shown to achieve a better link level performance. In this paper, we extend the study by proposing an unified framework to analyze the energy efficiency of SCMA scheme and a low complexity decoding algorithm which is critical for prototyping. We show through simulation and prototype measurement results that SCMA scheme provides extra multiple access capability with reasonable complexity and energy consumption, and hence, can be regarded as an energy efficient approach for 5G wireless communication systems.", "title": "" }, { "docid": "d81fb36cad466df8629fada7e7f7cc8d", "text": "The limitations of each security technology combined with the growth of cyber attacks impact the efficiency of information security management and increase the activities to be performed by network administrators and security staff. Therefore, there is a need for the increase of automated auditing and intelligent reporting mechanisms for the cyber trust. Intelligent systems are emerging computing systems based on intelligent techniques that support continuous monitoring and controlling plant activities. Intelligence improves an individual’s ability to make better decisions. This paper presents a proposed architecture of an Intelligent System for Information Security Management (ISISM). The objective of this system is to improve security management processes such as monitoring, controlling, and decision making with an effect size that is higher than an expert in security by providing mechanisms to enhance the active construction of knowledge about threats, policies, procedures, and risks. We focus on requirements and design issues for the basic components of the intelligent system.", "title": "" }, { "docid": "2a8f464e709dcae4e34f73654aefe31f", "text": "LTE 4G cellular networks are gradually being adopted by all major operators in the world and are expected to rule the cellular landscape at least for the current decade. They will also form the starting point for further progress beyond the current generation of mobile cellular networks to chalk a path towards fifth generation mobile networks. The lack of open cellular ecosystem has limited applied research in this field within the boundaries of vendor and operator R&D groups. Furthermore, several new approaches and technologies are being considered as potential elements making up such a future mobile network, including cloudification of radio network, radio network programability and APIs following SDN principles, native support of machine-type communication, and massive MIMO. 
Research on these technologies requires realistic and flexible experimentation platforms that offer a wide range of experimentation modes from real-world experimentation to controlled and scalable evaluations while at the same time retaining backward compatibility with current generation systems.\n In this work, we present OpenAirInterface (OAI) as a suitably flexible platform towards open LTE ecosystem and playground [1]. We will demonstrate an example of the use of OAI to deploy a low-cost open LTE network using commodity hardware with standard LTE-compatible devices. We also show the reconfigurability features of the platform.", "title": "" }, { "docid": "f70c07e15c4070edf75e8846b4dff0b3", "text": "Polyphenols, including flavonoids, phenolic acids, proanthocyanidins and resveratrol, are a large and heterogeneous group of phytochemicals in plant-based foods, such as tea, coffee, wine, cocoa, cereal grains, soy, fruits and berries. Growing evidence indicates that various dietary polyphenols may influence carbohydrate metabolism at many levels. In animal models and a limited number of human studies carried out so far, polyphenols and foods or beverages rich in polyphenols have attenuated postprandial glycemic responses and fasting hyperglycemia, and improved acute insulin secretion and insulin sensitivity. The possible mechanisms include inhibition of carbohydrate digestion and glucose absorption in the intestine, stimulation of insulin secretion from the pancreatic beta-cells, modulation of glucose release from the liver, activation of insulin receptors and glucose uptake in the insulin-sensitive tissues, and modulation of intracellular signalling pathways and gene expression. The positive effects of polyphenols on glucose homeostasis observed in a large number of in vitro and animal models are supported by epidemiological evidence on polyphenol-rich diets. To confirm the implications of polyphenol consumption for prevention of insulin resistance, metabolic syndrome and eventually type 2 diabetes, human trials with well-defined diets, controlled study designs and clinically relevant end-points together with holistic approaches e.g., systems biology profiling technologies are needed.", "title": "" }, { "docid": "2b2cd290f12d98667d6a4df12697a05e", "text": "The chapter proposes three ways of integration of the two different worlds of relational and NoSQL databases: native, hybrid, and reducing to one option, either relational or NoSQL. The native solution includes using vendors’ standard APIs and integration on the business layer. In a relational environment, APIs are based on SQL standards, while the NoSQL world has its own, unstandardized solutions. The native solution means using the APIs of the individual systems that need to be connected, leaving to the businesslayer coding the task of linking and separating data in extraction and storage operations. A hybrid solution introduces an additional layer that provides SQL communication between the business layer and the data layer. The third integration solution includes vendors’ effort to foresee functionalities of “opposite” side, thus convincing developers’ community that their solution is sufficient.", "title": "" }, { "docid": "4421a42fc5589a9b91215b68e1575a3f", "text": "We present a method for extracting depth information from a rectified image pair. Our approach focuses on the first stage of many stereo algorithms: the matching cost computation. 
We approach the problem by learning a similarity measure on small image patches using a convolutional neural network. Training is carried out in a supervised manner by constructing a binary classification data set with examples of similar and dissimilar pairs of patches. We examine two network architectures for this task: one tuned for speed, the other for accuracy. The output of the convolutional neural network is used to initialize the stereo matching cost. A series of post-processing steps follow: cross-based cost aggregation, semiglobal matching, a left-right consistency check, subpixel enhancement, a median filter, and a bilateral filter. We evaluate our method on the KITTI 2012, KITTI 2015, and Middlebury stereo data sets and show that it outperforms other approaches on all three data sets.", "title": "" }, { "docid": "986df17e2fe07cf2c70c37391f99a5da", "text": "This paper is the last in a series of 16 which have explored current uses of information communications technology (ICT) in all areas of dentistry in general, and in dental education in particular. In this paper the authors explore current developments, referring back to the previous 15 papers, and speculate on how ICT should increasingly contribute to dental education in the future. After describing a vision of dental education in the next 50 years, the paper considers how ICT can help to fulfil the vision. It then takes a brief look at three aspects of the use of ICT in the world in general and speculates how dentistry can learn from other areas of human endeavour. Barriers to the use of ICT in dental education are then discussed. The final section of the paper outlines new developments in haptics, immersive environments, the semantic web, the IVIDENT project, nanotechnology and ergonometrics. The paper concludes that ICT will offer great opportunities to dental education but questions whether or not human limitations will allow it to be used to maximum effect.", "title": "" }, { "docid": "a8858713a7040ce6dd25706c9b72b45c", "text": "A new type of wearable button antenna for wireless local area network (WLAN) applications is proposed. The antenna is composed of a button with a diameter of circa 16 mm incorporating a patch on top of a dielectric disc. The button is located on top of a textile substrate and a conductive textile ground that are to be incorporated in clothing. The main characteristic feature of this antenna is that it shows two different types of radiation patterns, a monopole type pattern in the 2.4 GHz band for on-body communications and a broadside type pattern in the 5 GHz band for off-body communications. A very high efficiency of about 90% is obtained, which is much higher than similar full textile solutions in the literature. A prototype has been fabricated and measured. The effect of several real-life situations such as a tilted button and bending of the textile ground have been studied. Measurements agree very well with simulations.", "title": "" }, { "docid": "43c49bb7d9cebb8f476079ac9dd0af27", "text": "Nowadays, most recommender systems (RSs) mainly aim to suggest appropriate items for individuals. Due to the social nature of human beings, group activities have become an integral part of our daily life, thus motivating the study on group RS (GRS). However, most existing methods used by GRS make recommendations through aggregating individual ratings or individual predictive results rather than considering the collective features that govern user choices made within a group. 
As a result, such methods are heavily sensitive to data, hence they often fail to learn group preferences when the data are slightly inconsistent with predefined aggregation assumptions. To this end, we devise a novel GRS approach which accommodates both individual choices and group decisions in a joint model. More specifically, we propose a deep-architecture model built with collective deep belief networks and dual-wing restricted Boltzmann machines. With such a deep model, we can use high-level features, which are induced from lower-level features, to represent group preference so as to relieve the vulnerability of data. Finally, the experiments conducted on a real-world dataset prove the superiority of our deep model over other state-of-the-art methods.", "title": "" }, { "docid": "ca095eee8abefd4aef9fd8971efd7fb5", "text": "A radio-frequency identification (RFID) tag is a small, inexpensive microchip that emits an identifier in response to a query from a nearby reader. The price of these tags promises to drop to the range of $0.05 per unit in the next several years, offering a viable and powerful replacement for barcodes. The challenge in providing security for low-cost RFID tags is that they are computationally weak devices, unable to perform even basic symmetric-key cryptographic operations. Security researchers often therefore assume that good privacy protection in RFID tags is unattainable. In this paper, we explore a notion of minimalist cryptography suitable for RFID tags. We consider the type of security obtainable in RFID devices with a small amount of rewritable memory, but very limited computing capability. Our aim is to show that standard cryptography is not necessary as a starting point for improving security of very weak RFID devices. Our contribution is threefold: 1. We propose a new formal security model for authentication and privacy in RFID tags. This model takes into account the natural computational limitations and the likely attack scenarios for RFID tags in real-world settings. It represents a useful divergence from standard cryptographic security modeling, and thus a new view of practical formalization of minimal security requirements for low-cost RFID-tag security. 2. We describe protocol that provably achieves the properties of authentication and privacy in RFID tags in our proposed model, and in a good practical sense. Our proposed protocol involves no computationally intensive cryptographic operations, and relatively little storage. 3. Of particular practical interest, we describe some reduced-functionality variants of our protocol. We show, for instance, how static pseudonyms may considerably enhance security against eavesdropping in low-cost RFID tags. Our most basic static-pseudonym proposals require virtually no increase in existing RFID tag resources.", "title": "" }, { "docid": "fcd0c523e74717c572c288a90c588259", "text": "From analyzing 100 assessments of coping, the authors critiqued strategies and identified best practices for constructing category systems. From current systems, a list of 400 ways of coping was compiled. For constructing lower order categories, the authors concluded that confirmatory factor analysis should replace the 2 most common strategies (exploratory factor analysis and rational sorting). For higher order categories, they recommend that the 3 most common distinctions (problem- vs. emotion-focused, approach vs. avoidance, and cognitive vs. behavioral) no longer be used. 
Instead, the authors recommend hierarchical systems of action types (e.g., proximity seeking, accommodation). From analysis of 6 such systems, 13 potential core families of coping were identified. Future steps involve deciding how to organize these families, using their functional homogeneity and distinctiveness, and especially their links to adaptive processes.", "title": "" }, { "docid": "dd84b653de8b3b464c904a988a622a39", "text": "We demonstrate that for sentence-level relation extraction it is beneficial to consider other relations in the sentential context while predicting the target relation. Our architecture uses an LSTM-based encoder to jointly learn representations for all relations in a single sentence. We combine the context representations with an attention mechanism to make the final prediction. We use the Wikidata knowledge base to construct a dataset of multiple relations per sentence and to evaluate our approach. Compared to a baseline system, our method results in an average error reduction of 24% on a held-out set of relations. The code and the dataset to replicate the experiments are made available at https://github.com/ukplab.", "title": "" } ]
scidocsrr
8d5f60dd08e3d1f5fee9bf9912cdc382
A deliberate practice account of typing proficiency in everyday typists.
[ { "docid": "420a3d0059a91e78719955b4cc163086", "text": "The superior skills of experts, such as accomplished musicians and chess masters, can be amazing to most spectators. For example, club-level chess players are often puzzled by the chess moves of grandmasters and world champions. Similarly, many recreational athletes find it inconceivable that most other adults – regardless of the amount or type of training – have the potential ever to reach the performance levels of international competitors. Especially puzzling to philosophers and scientists has been the question of the extent to which expertise requires innate gifts versus specialized acquired skills and abilities. One of the most widely used and simplest methods of gathering data on exceptional performance is to interview the experts themselves. But are experts always capable of describing their thoughts, their behaviors, and their strategies in a manner that would allow less-skilled individuals to understand how the experts do what they do, and perhaps also understand how they might reach expert level through appropriate training? To date, there has been considerable controversy over the extent to which experts are capable of explaining the nature and structure of their exceptional performance. Some pioneering scientists, such as Binet (1893 / 1966), questioned the validity of the experts’ descriptions when they found that some experts gave reports inconsistent with those of other experts. To make matters worse, in those rare cases that allowed verification of the strategy by observing the performance, discrepancies were found between the reported strategies and the observations (Watson, 1913). Some of these discrepancies were explained, in part, by the hypothesis that some processes were not normally mediated by awareness/attention and that the mere act of engaging in self-observation (introspection) during performance changed the content of ongoing thought processes. These problems led most psychologists in first half of the 20th century to reject all types of introspective verbal reports as valid scientific evidence, and they focused almost exclusively on observable behavior (Boring, 1950). In response to the problems with the careful introspective analysis of images and perceptions, investigators such as John B.", "title": "" } ]
[ { "docid": "d2b45d76e93f07ededbab03deee82431", "text": "A cordless battery charger will greatly improve the user friendliness of electric vehicles (EVs), accelerating the replacement of traditional internal combustion engine (ICE) vehicles with EVs and improving energy sustainability as a result. Resonant circuits are used for both the power transmitter and receiver of a cordless charger to compensate their coils and improve power transfer efficiency. However, conventional compensation circuit topology is not suitable for application to an EV, which involves very large power, a wide gap between the transmitter and receiver coils, and large horizontal misalignment. This paper proposes a novel compensation circuit topology that has a carefully designed series capacitor added to the parallel resonant circuit of the receiver. The proposed circuit has been implemented and tested on an EV. The simulation and experimental results are presented to show that the circuit can improve the power factor and power transfer efficiency, and as a result, allow a larger gap between the transmitter and receiver coils.", "title": "" }, { "docid": "86ce47260d84ddcf8558a0e5e4f2d76f", "text": "We present the definition and computational algorithms for a new class of surfaces which are dual to the isosurface produced by the widely used marching cubes (MC) algorithm. These new isosurfaces have the same separating properties as the MC surfaces but they are comprised of quad patches that tend to eliminate the common negative aspect of poorly shaped triangles of the MC isosurfaces. Based upon the concept of this new dual operator, we describe a simple, but rather effective iterative scheme for producing smooth separating surfaces for binary, enumerated volumes which are often produced by segmentation algorithms. Both the dual surface algorithm and the iterative smoothing scheme are easily implemented.", "title": "" }, { "docid": "82be3cafe24185b1f3c58199031e41ef", "text": "UNLABELLED\nFamily-based therapy (FBT) is regarded as best practice for the treatment of eating disorders in children and adolescents. In FBT, parents play a vital role in bringing their child or adolescent to health; however, a significant minority of families do not respond to this treatment. This paper introduces a new model whereby FBT is enhanced by integrating emotion-focused therapy (EFT) principles and techniques with the aims of helping parents to support their child's refeeding and interruption of symptoms. Parents are also supported to become their child's 'emotion coach'; and to process any emotional 'blocks' that may interfere with their ability to take charge of recovery. A parent testimonial is presented to illustrate the integration of the theory and techniques of EFT in the FBT model. EFFT (Emotion-Focused Family Therapy) is a promising model of therapy for those families who require a more intense treatment to bring about recovery of an eating disorder.\n\n\nKEY PRACTITIONER MESSAGE\nMore intense therapeutic models exist for treatment-resistant eating disorders in children and adolescents. Emotion is a powerful healing tool in families struggling with an eating disorder. Working with parent's emotions and emotional reactions to their child's struggles has the potential to improve child outcomes.", "title": "" }, { "docid": "72226ba8d801a3db776cf40d5243c521", "text": "Hyperspectral image (HSI) classification is one of the most widely used methods for scene analysis from hyperspectral imagery. 
In the past, many different engineered features have been proposed for the HSI classification problem. In this paper, however, we propose a feature learning approach for hyperspectral image classification based on convolutional neural networks (CNNs). The proposed CNN model is able to learn structured features, roughly resembling different spectral band-pass filters, directly from the hyperspectral input data. Our experimental results, conducted on a commonly-used remote sensing hyperspectral dataset, show that the proposed method provides classification results that are among the state-of-the-art, without using any prior knowledge or engineered features.", "title": "" }, { "docid": "950fe0124f830a63f528aa5905116c82", "text": "One of the main barriers to immersivity during object manipulation in virtual reality is the lack of realistic haptic feedback. Our goal is to convey compelling interactions with virtual objects, such as grasping, squeezing, pressing, lifting, and stroking, without requiring a bulky, world-grounded kinesthetic feedback device (traditional haptics) or the use of predetermined passive objects (haptic retargeting). To achieve this, we use a pair of finger-mounted haptic feedback devices that deform the skin on the fingertips to convey cutaneous force information from object manipulation. We show that users can perceive differences in virtual object weight and that they apply increasing grasp forces when lifting virtual objects as rendered mass is increased. Moreover, we show how naive users perceive changes of a virtual object's physical properties when we use skin deformation to render objects with varying mass, friction, and stiffness. These studies demonstrate that fingertip skin deformation devices can provide a compelling haptic experience appropriate for virtual reality scenarios involving object manipulation.", "title": "" }, { "docid": "c0d7cd54a947d9764209e905a6779d45", "text": "The mainstream approach to protecting the location-privacy of mobile users in location-based services (LBSs) is to alter the users' actual locations in order to reduce the location information exposed to the service provider. The location obfuscation algorithm behind an effective location-privacy preserving mechanism (LPPM) must consider three fundamental elements: the privacy requirements of the users, the adversary's knowledge and capabilities, and the maximal tolerated service quality degradation stemming from the obfuscation of true locations. We propose the first methodology, to the best of our knowledge, that enables a designer to find the optimal LPPM for a LBS given each user's service quality constraints against an adversary implementing the optimal inference algorithm. Such LPPM is the one that maximizes the expected distortion (error) that the optimal adversary incurs in reconstructing the actual location of a user, while fulfilling the user's service-quality requirement. We formalize the mutual optimization of user-adversary objectives (location privacy vs. correctness of localization) by using the framework of Stackelberg Bayesian games. In such setting, we develop two linear programs that output the best LPPM strategy and its corresponding optimal inference attack. Our optimal user-centric LPPM can be easily integrated in the users' mobile devices they use to access LBSs. We validate the efficacy of our game theoretic method against real location traces. 
Our evaluation confirms that the optimal LPPM strategy is superior to a straightforward obfuscation method, and that the optimal localization attack performs better compared to a Bayesian inference attack.", "title": "" }, { "docid": "bdbbe079493bbfec7fb3cb577c926997", "text": "A large amount of information on the Web is contained in regularly structured objects, which we call data records. Such data records are important because they often present the essential information of their host pages, e.g., lists of products or services. It is useful to mine such data records in order to extract information from them to provide value-added services. Existing automatic techniques are not satisfactory because of their poor accuracies. In this paper, we propose a more effective technique to perform the task. The technique is based on two observations about data records on the Web and a string matching algorithm. The proposed technique is able to mine both contiguous and non-contiguous data records. Our experimental results show that the proposed technique outperforms existing techniques substantially.", "title": "" }, { "docid": "6717e438376a78cb177bfc3942b6eec6", "text": "Decisions are often guided by generalizing from past experiences. Fundamental questions remain regarding the cognitive and neural mechanisms by which generalization takes place. Prior data suggest that generalization may stem from inference-based processes at the time of generalization. By contrast, generalization may emerge from mnemonic processes occurring while premise events are encoded. Here, participants engaged in a two-phase learning and generalization task, wherein they learned a series of overlapping associations and subsequently generalized what they learned to novel stimulus combinations. Functional MRI revealed that successful generalization was associated with coupled changes in learning-phase activity in the hippocampus and midbrain (ventral tegmental area/substantia nigra). These findings provide evidence for generalization based on integrative encoding, whereby overlapping past events are integrated into a linked mnemonic representation. Hippocampal-midbrain interactions support the dynamic integration of experiences, providing a powerful mechanism for building a rich associative history that extends beyond individual events.", "title": "" }, { "docid": "b0727e320a1c532bd3ede4fd892d8d01", "text": "Semantic technologies could facilitate realizing features like interoperability and reasoning for Internet of Things (IoT). However, the dynamic and heterogeneous nature of IoT data, constrained resources, and real-time requirements set challenges for applying these technologies. In this paper, we study approaches for delivering semantic data from IoT nodes to distributed reasoning engines and reasoning over such data. We perform experiments to evaluate the scalability of these approaches and also study how reasoning is affected by different data aggregation strategies.", "title": "" }, { "docid": "932c66caf9665e9dea186732217d4313", "text": "Citations are very important parameters and are used to take many important decisions like ranking of researchers, institutions, countries, and to measure the relationship between research papers. All of these require accurate counting of citations and their occurrence (in-text citation counts) within the citing papers. Citation anchors refer to the citation made within the full text of the citing paper for example: ‘[1]’, ‘(Afzal et al, 2015)’, ‘[Afzal, 2015]’ etc. 
Identification of citation-anchors from the plain-text is a very challenging task due to the various styles and formats of citations. Recently, Shahid et al. highlighted some of the problems such as commonality in content, wrong allotment, mathematical ambiguities, and string variations etc in automatically identifying the in-text citation frequencies. The paper proposes an algorithm, CAD, for identification of citation-anchors and its in-text citation frequency based on different rules. For a comprehensive analysis, the dataset of research papers is prepared: on both Journal of Universal Computer Science (J.UCS) and (2) CiteSeer digital libraries. In experimental study, we conducted two experiments. In the first experiment, the proposed approach is compared with state-of-the-art technique over both datasets. The J.UCS dataset consists of 1200 research papers with 16,000 citation strings or references while the CiteSeer dataset consists of 52 research papers with 1850 references. The total dataset size becomes 1252 citing documents and 17,850 references. The experiments showed that CAD algorithm improved F-score by 44% and 37% respectively on both J.UCS and CiteSeer dataset over the contemporary technique (Shahid et al. in Int J Arab Inf Technol 12:481–488, 2014). The average score is 41% on both datasets. In the second experiment, the proposed approach is further analyzed against the existing state-of-the-art tools: CERMINE and GROBID. According to our results, the proposed approach is best performing with F1 of 0.99, followed by GROBID (F1 0.89) and CERMINE (F1 0.82).", "title": "" }, { "docid": "f2d1f05292ddb0df8fa92fe1992852ab", "text": "In this paper, we study the design of omnidirectional mobile robots with Active-Caster RObotic drive with BAll Transmission (ACROBAT). ACROBAT system has been developed by the authors group which realizes mechanical coordination of wheel and steering motions for creating caster behaviors without computer calculations. A motion in the specific direction relative to a robot body is fully depends on the motion of a specific motor. This feature gives a robot designer to build an omnidirectional mobile robot propelled by active-casters with no redundant actuation with a simple control. A controller of the robot becomes as simple as that for omni-wheeled robotic bases. Namely 3DOF of the omnidirectional robot is controlled by three motors using a simple and constant kinematics. ACROBAT includes a unique dual-ball transmission to transmit traction power to rotate and orient a drive wheel with distributing velocity components to wheel and steering axes in an appropriate ratio. Therefore a sensor for measuring a wheel orientation and calculations for velocity distributions are totally removed from a conventional control system. To build an omnidirectional vehicle by ACROBAT, the significant feature is some multiple drive shafts can be driven by a common motor which realizes non-redundant actuation of the robotic platform. A kinematic model of the proposed robot with ACROBAT is analyzed and a mechanical condition for realizing a non-redundant actuation is derived. Based on the kinematic model and the mechanical condition, computer simulations of the mechanism are performed. A prototype two-wheeled robot with two ACROBATs is designed and built to verify the availability of the proposed system. 
In the experiments, the prototype robot shows successful omnidirectional motions with a simple and constant kinematics based control.", "title": "" }, { "docid": "4d0b04f546ab5c0d79bb066b1431ff51", "text": "In this paper, we present an extraction and characterization methodology which allows for the determination, from S-parameter measurements, of the threshold voltage, the gain factor, and the mobility degradation factor, neither requiring data regressions involving multiple devices nor DC measurements. This methodology takes into account the substrate effects occurring in MOSFETs built in bulk technology so that physically meaningful parameters can be obtained. Furthermore, an analysis of the substrate impedance is presented, showing that this parasitic component not only degrades the performance of a microwave MOSFET, but may also lead to determining unrealistic values for the model parameters when not considered during a high-frequency characterization process. Measurements were made on transistors of different lengths, the shortest being 80 nm, in the 10 MHz to 40 GHz frequency range. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "6b1fbc91a501ea25c7d3d20780a2be74", "text": "STUDY DESIGN\nA systematic quantitative review of the literature.\n\n\nOBJECTIVE\nTo compare combined anterior-posterior surgery versus posterior surgery for thoracolumbar fractures in order to identify better treatments.\n\n\nSUMMARY OF BACKGROUND DATA\nAxial load of the anterior and middle column of the spine can lead to a burst fracture in the vertebral body. The management of thoracolumbar burst fractures remains controversial. The goals of operative treatment are fracture reduction, fixation and decompressing the neural canal. For this, different operative methods are developed, for instance, the posterior and the combined anterior-posterior approach. Recent systematic qualitative reviews comparing these methods are lacking.\n\n\nMETHODS\nWe conducted an electronic search of MEDLINE, EMBASE, LILACS and the Cochrane Central Register for Controlled Trials.\n\n\nRESULTS\nFive observational comparative studies and no randomized clinical trials comparing the combined anteriorposterior approach with the posterior approach were retrieved. The total enrollment of patients in these studies was 755 patients. The results were expressed as relative risk (RR) for dichotomous outcomes and weighted mean difference (WMD) for continuous outcomes with 95% confidence intervals (CI).\n\n\nCONCLUSIONS\nA small significantly higher kyphotic correction and improvement of vertebral height (sagittal index) observed for the combined anterior-posterior group is cancelled out by more blood loss, longer operation time, longer hospital stay, higher costs and a possible higher intra- and postoperative complication rate requiring re-operation and the possibility of a worsened Hannover spine score. The surgeons' choices regarding the operative approach are biased: worse cases tended to undergo the combined anterior-posterior approach.", "title": "" }, { "docid": "50795998e83dafe3431c3509b9b31235", "text": "In this study, the daily movement directions of three frequently traded stocks (GARAN, THYAO and ISCTR) in Borsa Istanbul were predicted using deep neural networks. Technical indicators obtained from individual stock prices and dollar-gold prices were used as features in the prediction. 
Class labels indicating the movement direction were found using daily close prices of the stocks and they were aligned with the feature vectors. In order to perform the prediction process, the type of deep neural network, Convolutional Neural Network, was trained and the performance of the classification was evaluated by the accuracy and F-measure metrics. In the experiments performed, using both price and dollar-gold features, the movement directions in GARAN, THYAO and ISCTR stocks were predicted with the accuracy rates of 0.61, 0.578 and 0.574 respectively. Compared to using the price based features only, the use of dollar-gold features improved the classification performance.", "title": "" }, { "docid": "2bd5ca4cbb8ef7eea1f7b2762918d18b", "text": "Deep convolutional neural networks continue to advance the state-of-the-art in many domains as they grow bigger and more complex. It has been observed that many of the parameters of a large network are redundant, allowing for the possibility of learning a smaller network that mimics the outputs of the large network through a process called Knowledge Distillation. We show, however, that standard Knowledge Distillation is not effective for learning small models for the task of pedestrian detection. To improve this process, we introduce a higher-dimensional hint layer to increase information flow. We also estimate the uncertainty in the outputs of the large network and propose a loss function to incorporate this uncertainty. Finally, we attempt to boost the complexity of the small network without increasing its size by using as input hand-designed features that have been demonstrated to be effective for pedestrian detection. For only a 2.8% increase in miss rate, we have succeeded in training a student network that is 8 times faster and 21 times smaller than the teacher network.", "title": "" }, { "docid": "ec130c42c43a2a0ba8f33cd4a5d0082b", "text": "Support vector machine (SVM) has appeared as a powerful tool for forecasting forex market and demonstrated better performance over other methods, e.g., neural network or ARIMA based model. SVM-based forecasting model necessitates the selection of appropriate kernel function and values of free parameters: regularization parameter and ε– insensitive loss function. In this paper, we investigate the effect of different kernel functions, namely, linear, polynomial, radial basis and spline on prediction error measured by several widely used performance metrics. The effect of regularization parameter is also studied. The prediction of six different foreign currency exchange rates against Australian dollar has been performed and analyzed. Some interesting results are presented.", "title": "" }, { "docid": "207bb3922ad45daa1023b70e1a18baf7", "text": "The article explains how photo-response nonuniformity (PRNU) of imaging sensors can be used for a variety of important digital forensic tasks, such as device identification, device linking, recovery of processing history, and detection of digital forgeries. The PRNU is an intrinsic property of all digital imaging sensors due to slight variations among individual pixels in their ability to convert photons to electrons. Consequently, every sensor casts a weak noise-like pattern onto every image it takes. This pattern, which plays the role of a sensor fingerprint, is essentially an unintentional stochastic spread-spectrum watermark that survives processing, such as lossy compression or filtering. 
This tutorial explains how this fingerprint can be estimated from images taken by the camera and later detected in a given image to establish image origin and integrity. Various forensic tasks are formulated as a two-channel hypothesis testing problem approached using the generalized likelihood ratio test. The performance of the introduced forensic methods is briefly illustrated on examples to give the reader a sense of the performance.", "title": "" }, { "docid": "c5d74c69c443360d395a8371055ef3e2", "text": "The supply of oxygen and nutrients and the disposal of metabolic waste in the organs depend strongly on how blood, especially red blood cells, flow through the microvascular network. Macromolecular plasma proteins such as fibrinogen cause red blood cells to form large aggregates, called rouleaux, which are usually assumed to be disaggregated in the circulation due to the shear forces present in bulk flow. This leads to the assumption that rouleaux formation is only relevant in the venule network and in arterioles at low shear rates or stasis. Thanks to an excellent agreement between combined experimental and numerical approaches, we show that despite the large shear rates present in microcapillaries, the presence of either fibrinogen or the synthetic polymer dextran leads to an enhanced formation of robust clusters of red blood cells, even at haematocrits as low as 1%. Robust aggregates are shown to exist in microcapillaries even for fibrinogen concentrations within the healthy physiological range. These persistent aggregates should strongly affect cell distribution and blood perfusion in the microvasculature, with putative implications for blood disorders even within apparently asymptomatic subjects.", "title": "" }, { "docid": "b5dc5268c2eb3b216aa499a639ddfbf9", "text": "This paper describes a self-localization for indoor mobile robots based on integrating measurement values from multiple optical mouse sensors and a global camera. This paper consists of two parts. Firstly, we propose a dead-reckoning based on increments of the robot movements read directly from the floor using optical mouse sensors. Since the measurement values from multiple optical mouse sensors are compared to each other and only the reliable values are selected, accurate dead-reckoning can be realized compared with the conventional method based on increments of wheel rotations. Secondly, in order to realize robust localization, we propose a method of estimating position and orientation by integrating measured robot position (orientation information is not included) via global camera and dead-reckoning with the Kalman filter", "title": "" }, { "docid": "e37f707ac7a86f287fbbfe9b8a4b1e31", "text": "We survey distributed deep learning models for training or inference without accessing raw data from clients. These methods aim to protect confidential patterns in data while still allowing servers to train models. The distributed deep learning methods of federated learning, split learning and large batch stochastic gradient descent are compared in addition to private and secure approaches of differential privacy, homomorphic encryption, oblivious transfer and garbled circuits in the context of neural networks. We study their benefits, limitations and trade-offs with regards to computational resources, data leakage and communication efficiency and also share our anticipated future trends.", "title": "" } ]
scidocsrr
713ee77d9d1d75ba1676446766043a5b
Sustained attention in children with specific language impairment (SLI).
[ { "docid": "bb65decbaecb11cf14044b2a2cbb6e74", "text": "The ability to remain focused on goal-relevant stimuli in the presence of potentially interfering distractors is crucial for any coherent cognitive function. However, simply instructing people to ignore goal-irrelevant stimuli is not sufficient for preventing their processing. Recent research reveals that distractor processing depends critically on the level and type of load involved in the processing of goal-relevant information. Whereas high perceptual load can eliminate distractor processing, high load on \"frontal\" cognitive control processes increases distractor processing. These findings provide a resolution to the long-standing early and late selection debate within a load theory of attention that accommodates behavioural and neuroimaging data within a framework that integrates attention research with executive function.", "title": "" } ]
[ { "docid": "3b72c70213ccd3d5f3bda5cc2e2c6945", "text": "Neural language models (NLMs) have recently gained a renewed interest by achieving state-of-the-art performance across many natural language processing (NLP) tasks. However, NLMs are very computationally demanding largely due to the computational cost of the softmax layer over a large vocabulary. We observe that, in decoding of many NLP tasks, only the probabilities of the top-K hypotheses need to be calculated preciously and K is often much smaller than the vocabulary size. This paper proposes a novel softmax layer approximation algorithm, called Fast Graph Decoder (FGD), which quickly identifies, for a given context, a set of K words that are most likely to occur according to a NLM. We demonstrate that FGD reduces the decoding time by an order of magnitude while attaining close to the full softmax baseline accuracy on neural machine translation and language modeling tasks. We also prove the theoretical guarantee on the softmax approximation quality.", "title": "" }, { "docid": "7528af716f17f125b253597e8c3e596f", "text": "BACKGROUND\nEnhancement of the osteogenic potential of mesenchymal stem cells (MSCs) is highly desirable in the field of bone regeneration. This paper proposes a new approach for the improvement of osteogenesis combining hypergravity with osteoinductive nanoparticles (NPs).\n\n\nMATERIALS AND METHODS\nIn this study, we aimed to investigate the combined effects of hypergravity and barium titanate NPs (BTNPs) on the osteogenic differentiation of rat MSCs, and the hypergravity effects on NP internalization. To obtain the hypergravity condition, we used a large-diameter centrifuge in the presence of a BTNP-doped culture medium. We analyzed cell morphology and NP internalization with immunofluorescent staining and coherent anti-Stokes Raman scattering, respectively. Moreover, cell differentiation was evaluated both at the gene level with quantitative real-time reverse-transcription polymerase chain reaction and at the protein level with Western blotting.\n\n\nRESULTS\nFollowing a 20 g treatment, we found alterations in cytoskeleton conformation, cellular shape and morphology, as well as a significant increment of expression of osteoblastic markers both at the gene and protein levels, jointly pointing to a substantial increment of NP uptake. Taken together, our findings suggest a synergistic effect of hypergravity and BTNPs in the enhancement of the osteogenic differentiation of MSCs.\n\n\nCONCLUSION\nThe obtained results could become useful in the design of new approaches in bone-tissue engineering, as well as for in vitro drug-delivery strategies where an increment of nanocarrier internalization could result in a higher drug uptake by cell and/or tissue constructs.", "title": "" }, { "docid": "1cd77d97f27b45d903ffcecda02795a5", "text": "Molecular machine learning has been maturing rapidly over the last few years. Improved methods and the presence of larger datasets have enabled machine learning algorithms to make increasingly accurate predictions about molecular properties. However, algorithmic progress has been limited due to the lack of a standard benchmark to compare the efficacy of proposed methods; most new algorithms are benchmarked on different datasets making it challenging to gauge the quality of proposed methods. This work introduces MoleculeNet, a large scale benchmark for molecular machine learning. 
MoleculeNet curates multiple public datasets, establishes metrics for evaluation, and offers high quality open-source implementations of multiple previously proposed molecular featurization and learning algorithms (released as part of the DeepChem open source library). MoleculeNet benchmarks demonstrate that learnable representations are powerful tools for molecular machine learning and broadly offer the best performance. However, this result comes with caveats. Learnable representations still struggle to deal with complex tasks under data scarcity and highly imbalanced classification. For quantum mechanical and biophysical datasets, the use of physics-aware featurizations can be more important than choice of particular learning algorithm.", "title": "" }, { "docid": "0441fb016923cd0b7676d3219951c230", "text": "Globally modeling and reasoning over relations between regions can be beneficial for many computer vision tasks on both images and videos. Convolutional Neural Networks (CNNs) excel at modeling local relations by convolution operations, but they are typically inefficient at capturing global relations between distant regions and require stacking multiple convolution layers. In this work, we propose a new approach for reasoning globally in which a set of features are globally aggregated over the coordinate space and then projected to an interaction space where relational reasoning can be efficiently computed. After reasoning, relation-aware features are distributed back to the original coordinate space for down-stream tasks. We further present a highly efficient instantiation of the proposed approach and introduce the Global Reasoning unit (GloRe unit) that implements the coordinate-interaction space mapping by weighted global pooling and weighted broadcasting, and the relation reasoning via graph convolution on a small graph in interaction space. The proposed GloRe unit is lightweight, end-to-end trainable and can be easily plugged into existing CNNs for a wide range of tasks. Extensive experiments show our GloRe unit can consistently boost the performance of state-of-the-art backbone architectures, including ResNet [15, 16], ResNeXt [33], SE-Net [18] and DPN [9], for both 2D and 3D CNNs, on image classification, semantic segmentation and video action recognition task.", "title": "" }, { "docid": "3bb6bfbb139ab9b488c4106c9d6cc3bd", "text": "BACKGROUND\nRecent evidence demonstrates growth in both the quality and quantity of evidence in physical therapy. Much of this work has focused on randomized controlled trials and systematic reviews.\n\n\nOBJECTIVE\nThe purpose of this study was to conduct a comprehensive bibliometric assessment of Physical Therapy (PTJ) over the past 30 years to examine trends for all types of studies.\n\n\nDESIGN\nThis was a bibliometric analysis.\n\n\nMETHODS\nAll manuscripts published in PTJ from 1980 to 2009 were reviewed. Research reports, topical reviews (including perspectives and nonsystematic reviews), and case reports were included. Articles were coded based on type, participant characteristics, physical therapy focus, research design, purpose of article, clinical condition, and intervention. Coding was performed by 2 independent reviewers, and author, institution, and citation information was obtained using bibliometric software.\n\n\nRESULTS\nOf the 4,385 publications identified, 2,519 were included in this analysis. Of these, 67.1% were research reports, 23.0% were topical reviews, and 9.9% were case reports. 
Percentage increases over the past 30 years were observed for research reports, inclusion of \"symptomatic\" participants (defined as humans with a current symptomatic condition), systematic reviews, qualitative studies, prospective studies, and articles focused on prognosis, diagnosis, or metric topics. Percentage decreases were observed for topical reviews, inclusion of only \"asymptomatic\" participants (defined as humans without a current symptomatic condition), education articles, nonsystematic reviews, and articles focused on anatomy/physiology.\n\n\nLIMITATIONS\nQuality assessment of articles was not performed.\n\n\nCONCLUSIONS\nThese trends provide an indirect indication of the evolution of the physical therapy profession through the publication record in PTJ. Collectively, the data indicated an increased emphasis on publishing articles consistent with evidence-based practice and clinically based research. Bibliometric analyses indicated the most frequent citations were metric studies and references in PTJ were from journals from a variety of disciplines.", "title": "" }, { "docid": "5c9ba6384b6983a26212e8161e502484", "text": "The field of medical diagnostics contains a wealth of challenges which closely resemble classical machine learning problems; practical constraints, however, complicate the translation of these endpoints naively into classical architectures. Many tasks in radiology, for example, are largely problems of multi-label classification wherein medical images are interpreted to indicate multiple present or suspected pathologies. Clinical settings drive the necessity for high accuracy simultaneously across a multitude of pathological outcomes and greatly limit the utility of tools which consider only a subset. This issue is exacerbated by a general scarcity of training data and maximizes the need to extract clinically relevant features from available samples – ideally without the use of pre-trained models which may carry forward undesirable biases from tangentially related tasks. We present and evaluate a partial solution to these constraints in using LSTMs to leverage interdependencies among target labels in predicting 14 pathologic patterns from chest x-rays and establish state of the art results on the largest publicly available chest x-ray dataset from the NIH without pre-training. Furthermore, we propose and discuss alternative evaluation metrics and their relevance in clinical practice.", "title": "" }, { "docid": "9b7654390d496cb041f3073dcfb07e67", "text": "Electronic commerce (EC) transactions are subject to multiple information security threats. Proposes that consumer trust in EC transactions is influenced by perceived information security and distinguishes it from the objective assessment of security threats. Proposes mechanisms of encryption, protection, authentication, and verification as antecedents of perceived information security. These mechanisms are derived from technological solutions to security threats that are visible to consumers and hence contribute to actual consumer perceptions. Tests propositions in a study of 179 consumers and shows a significant relationship between consumers’ perceived information security and trust in EC transactions. Explores the role of limited financial liability as a surrogate for perceived security. However, the findings show that there is a minimal effect of financial liability on consumers’ trust in EC. 
Engenders several new insights regarding the role of perceived security in EC transactions.", "title": "" }, { "docid": "52b354c9b1cfe53598f159b025ec749a", "text": "This paper describes a survey designed to determine the information seeking behavior of graduate students at the University of Macedonia (UoM). The survey is a continuation of a previous one undertaken in the Faculties of Philosophy and Engineering at the Aristotle University of Thessaloniki (AUTh). This paper primarily presents results from the UoM survey, but also makes comparisons with the findings from the earlier survey at AUTh. The 254 UoM students responding tend to use the simplest information search techniques with no critical variations between different disciplines. Their information seeking behavior seems to be influenced by their search experience, computer and web experience, perceived ability and frequency of use of e-sources, and not by specific personal characteristics or attendance at library instruction programs. Graduate students of both universities show similar information seeking preferences, with the UoM students using more sophisticated techniques, such as Boolean search and truncation, more often than the AUTh students.", "title": "" }, { "docid": "247eb1c32cf3fd2e7a925d54cb5735da", "text": "Several applications in machine learning and machine-to-human interactions tolerate small deviations in their computations. Digital systems can exploit this fault-tolerance to increase their energy-efficiency, which is crucial in embedded applications. Hence, this paper introduces a new means of Approximate Computing: Dynamic-Voltage-Accuracy-Frequency-Scaling (DVAFS), a circuit-level technique enabling a dynamic trade-off of energy versus computational accuracy that outperforms other Approximate Computing techniques. The usage and applicability of DVAFS is illustrated in the context of Deep Neural Networks, the current state-of-the-art in advanced recognition. These networks are typically executed on CPU's or GPU's due to their high computational complexity, making their deployment on battery-constrained platforms only possible through wireless connections with the cloud. This work shows how deep learning can be brought to IoT devices by running every layer of the network at its optimal computational accuracy. Finally, we demonstrate a DVAFS processor for Convolutional Neural Networks, achieving efficiencies of multiple TOPS/W.", "title": "" }, { "docid": "d1a4abaa57f978858edf0d7b7dc506ba", "text": "An ultra-low power wake-up receiver for body channel communication (BCC) is implemented in 0.13 μm CMOS process. The proposed wake-up receiver uses the injection-locking ring-oscillator (ILRO) to replace the RF amplifier with low power consumption. 
Through the ILRO, the frequency modulated input signal is converted to the full swing rectangular signal which is directly demodulated by the following low power PLL based FSK demodulator. In addition, the relaxed sensitivity and selectivity requirement by the good channel quality of the BCC reduces the power consumption of the receiver. As a result, the proposed wake-up receiver achieves a sensitivity of -55.2 dbm at a data rate of 200 kbps while consuming only 39 μW from the 0.7 V supply.", "title": "" }, { "docid": "ba94bc5f5762017aed0c307ce89c0558", "text": "Carsharing has emerged as an alternative to vehicle ownership and is a rapidly expanding global market. Particularly through the flexibility of free-floating models, car sharing complements public transport since customers do not need to return cars to specific stations. We present a novel data analytics approach that provides decision support to car sharing operators -- from local start-ups to global players -- in maneuvering this constantly growing and changing market environment. Using a large set of rental data, as well as zero-inflated and geographically weighted regression models, we derive indicators for the attractiveness of certain areas based on points of interest in their vicinity. These indicators are valuable for a variety of operational and strategic decisions. As a demonstration project, we present a case study of Berlin, where the indicators are used to identify promising regions for business area expansion.", "title": "" }, { "docid": "e6640dc272e4142a2ddad8291cfaead7", "text": "We give a summary of R. Borcherds’ solution (with some modifications) to the following part of the Conway-Norton conjectures: Given the Monster M and Frenkel-Lepowsky-Meurman’s moonshine module V ♮, prove the equality between the graded characters of the elements of M acting on V ♮ (i.e., the McKay-Thompson series for V ♮) and the modular functions provided by Conway and Norton. The equality is established using the homology of a certain subalgebra of the monster Lie algebra, and the Euler-Poincaré identity.", "title": "" }, { "docid": "3af1e6d82d1c70a2602d52f47ddce665", "text": "Birds have a smaller repertoire of immune genes than mammals. In our efforts to study antiviral responses to influenza in avian hosts, we have noted key genes that appear to be missing. As a result, we speculate that birds have impaired detection of viruses and intracellular pathogens. Birds are missing TLR8, a detector for single-stranded RNA. Chickens also lack RIG-I, the intracellular detector for single-stranded viral RNA. Riplet, an activator for RIG-I, is also missing in chickens. IRF3, the nuclear activator of interferon-beta in the RIG-I pathway is missing in birds. Downstream of interferon (IFN) signaling, some of the antiviral effectors are missing, including ISG15, and ISG54 and ISG56 (IFITs). Birds have only three antibody isotypes and IgD is missing. Ducks, but not chickens, make an unusual truncated IgY antibody that is missing the Fc fragment. Chickens have an expanded family of LILR leukocyte receptor genes, called CHIR genes, with hundreds of members, including several that encode IgY Fc receptors. Intriguingly, LILR homologues appear to be missing in ducks, including these IgY Fc receptors. The truncated IgY in ducks, and the duplicated IgY receptor genes in chickens may both have resulted from selective pressure by a pathogen on IgY FcR interactions. 
Birds have a minimal MHC, and the TAP transport and presentation of peptides on MHC class I is constrained, limiting function. Perhaps removing some constraint, ducks appear to lack tapasin, a chaperone involved in loading peptides on MHC class I. Finally, the absence of lymphotoxin-alpha and beta may account for the observed lack of lymph nodes in birds. As illustrated by these examples, the picture that emerges is some impairment of immune response to viruses in birds, either a cause or consequence of the host-pathogen arms race and long evolutionary relationship of birds and RNA viruses.", "title": "" }, { "docid": "de408de1915d43c4db35702b403d0602", "text": "real-time population health assessment and monitoring D. L. Buckeridge M. Izadi A. Shaban-Nejad L. Mondor C. Jauvin L. Dubé Y. Jang R. Tamblyn The fragmented nature of population health information is a barrier to public health practice. Despite repeated demands by policymakers, administrators, and practitioners to develop information systems that provide a coherent view of population health status, there has been limited progress toward developing such an infrastructure. We are creating an informatics platform for describing and monitoring the health status of a defined population by integrating multiple clinical and administrative data sources. This infrastructure, which involves a population health record, is designed to enable development of detailed portraits of population health, facilitate monitoring of population health indicators, enable evaluation of interventions, and provide clinicians and patients with population context to assist diagnostic and therapeutic decision-making. In addition to supporting public health professionals, clinicians, and the public, we are designing the infrastructure to provide a platform for public health informatics research. This early report presents the requirements and architecture for the infrastructure and describes the initial implementation of the population health record, focusing on indicators of chronic diseases related to obesity.", "title": "" }, { "docid": "cd1af39ff72f2ff36708ed0bf820fb95", "text": "Classifying semantic relations between entity pairs in sentences is an important task in Natural Language Processing (NLP). Most previous models for relation classification rely on the high-level lexical and syntatic features obtained by NLP tools such as WordNet, dependency parser, part-ofspeech (POS) tagger, and named entity recognizers (NER). In addition, state-of-the-art neural models based on attention mechanisms do not fully utilize information of entity that may be the most crucial features for relation classification. To address these issues, we propose a novel end-to-end recurrent neural model which incorporates an entity-aware attention mechanism with a latent entity typing (LET) method. Our model not only utilizes entities and their latent types as features effectively but also is more interpretable by visualizing attention mechanisms applied to our model and results of LET. 
Experimental results on the SemEval-2010 Task 8, one of the most popular relation classification tasks, demonstrate that our model outperforms existing state-of-the-art models without any high-level features.", "title": "" }, { "docid": "77f83ada0854e34ac60c725c21671434", "text": "OBJECTIVES\nThis subanalysis of the TNT (Treating to New Targets) study investigates the effects of intensive lipid lowering with atorvastatin in patients with coronary heart disease (CHD) with and without pre-existing chronic kidney disease (CKD).\n\n\nBACKGROUND\nCardiovascular disease is a major cause of morbidity and mortality in patients with CKD.\n\n\nMETHODS\nA total of 10,001 patients with CHD were randomized to double-blind therapy with atorvastatin 80 mg/day or 10 mg/day. Patients with CKD were identified at baseline on the basis of an estimated glomerular filtration rate (eGFR) <60 ml/min/1.73 m(2) using the Modification of Diet in Renal Disease equation. The primary efficacy outcome was time to first major cardiovascular event.\n\n\nRESULTS\nOf 9,656 patients with complete renal data, 3,107 had CKD at baseline and demonstrated greater cardiovascular comorbidity than those with normal eGFR (n = 6,549). After a median follow-up of 5.0 years, 351 patients with CKD (11.3%) experienced a major cardiovascular event, compared with 561 patients with normal eGFR (8.6%) (hazard ratio [HR] = 1.35; 95% confidence interval [CI] 1.18 to 1.54; p < 0.0001). Compared with atorvastatin 10 mg, atorvastatin 80 mg reduced the relative risk of major cardiovascular events by 32% in patients with CKD (HR = 0.68; 95% CI 0.55 to 0.84; p = 0.0003) and 15% in patients with normal eGFR (HR = 0.85; 95% CI 0.72 to 1.00; p = 0.049). Both doses of atorvastatin were well tolerated in patients with CKD.\n\n\nCONCLUSIONS\nAggressive lipid lowering with atorvastatin 80 mg was both safe and effective in reducing the excess of cardiovascular events in a high-risk population with CKD and CHD.", "title": "" }, { "docid": "d3c8903fed280246ea7cb473ee87c0e7", "text": "Reaction time has been a favorite subject of experimental psychologists since the middle of the nineteenth century. However, most studies ask questions about the organization of the brain, so the authors spend a lot of time trying to determine if the results conform to some mathematical model of brain activity. This makes these papers hard to understand for the beginning student. In this review, I have ignored these brain organization questions and summarized the major literature conclusions that are applicable to undergraduate laboratories using my Reaction Time software. I hope this review helps you write a good report on your reaction time experiment. I also apologize to reaction time researchers for omissions and oversimplifications.", "title": "" }, { "docid": "40a181cc018d3050e41fe9e2659acd0a", "text": "Efforts to adapt and extend graphic arts printing techniques for demanding device applications in electronics, biotechnology and microelectromechanical systems have grown rapidly in recent years. Here, we describe the use of electrohydrodynamically induced fluid flows through fine microcapillary nozzles for jet printing of patterns and functional devices with submicrometre resolution. Key aspects of the physics of this approach, which has some features in common with related but comparatively low-resolution techniques for graphic arts, are revealed through direct high-speed imaging of the droplet formation processes. 
Printing of complex patterns of inks, ranging from insulating and conducting polymers, to solution suspensions of silicon nanoparticles and rods, to single-walled carbon nanotubes, using integrated computer-controlled printer systems illustrates some of the capabilities. High-resolution printed metal interconnects, electrodes and probing pads for representative circuit patterns and functional transistors with critical dimensions as small as 1 μm demonstrate potential applications in printed electronics.", "title": "" }, { "docid": "b0532d77781257c80024926c836f14e1", "text": "Various levels of automation can be introduced by intelligent decision support systems, from fully automated, where the operator is completely left out of the decision process, to minimal levels of automation, where the automation only makes recommendations and the operator has the final say. For rigid tasks that require no flexibility in decision-making and with a low probability of system failure, higher levels of automation often provide the best solution. However, in time critical environments with many external and changing constraints such as air traffic control and military command and control operations, higher levels of automation are not advisable because of the risks and the complexity of both the system and the inability of the automated decision aid to be perfectly reliable. Human-in-the-loop designs, which employ automation for redundant, manual, and monotonous tasks and allow operators active participation, provide not only safety benefits, but also allow a human operator and a system to respond more flexibly to uncertain and unexpected events. However, there can be measurable costs to human performance when automation is used, such as loss of situational awareness, complacency, skill degradation, and automation bias. This paper will discuss the influence of automation bias in intelligent decision support systems, particularly those in aviation domains. Automation bias occurs in decision-making because humans have a tendency to disregard or not search for contradictory information in light of a computer-generated solution that is accepted as correct and can be exacerbated in time critical domains. Automated decision aids are designed to reduce human error but actually can cause new errors in the operation of a system if not designed with human cognitive limitations in mind.", "title": "" } ]
scidocsrr
224c61d01a854bb6cb12e3fd8908411f
PD-L1 biomarker testing for non-small cell lung cancer: truth or fiction?
[ { "docid": "f17a6c34a7b3c6a7bf266f04e819af94", "text": "BACKGROUND\nPatients with advanced squamous-cell non-small-cell lung cancer (NSCLC) who have disease progression during or after first-line chemotherapy have limited treatment options. This randomized, open-label, international, phase 3 study evaluated the efficacy and safety of nivolumab, a fully human IgG4 programmed death 1 (PD-1) immune-checkpoint-inhibitor antibody, as compared with docetaxel in this patient population.\n\n\nMETHODS\nWe randomly assigned 272 patients to receive nivolumab, at a dose of 3 mg per kilogram of body weight every 2 weeks, or docetaxel, at a dose of 75 mg per square meter of body-surface area every 3 weeks. The primary end point was overall survival.\n\n\nRESULTS\nThe median overall survival was 9.2 months (95% confidence interval [CI], 7.3 to 13.3) with nivolumab versus 6.0 months (95% CI, 5.1 to 7.3) with docetaxel. The risk of death was 41% lower with nivolumab than with docetaxel (hazard ratio, 0.59; 95% CI, 0.44 to 0.79; P<0.001). At 1 year, the overall survival rate was 42% (95% CI, 34 to 50) with nivolumab versus 24% (95% CI, 17 to 31) with docetaxel. The response rate was 20% with nivolumab versus 9% with docetaxel (P=0.008). The median progression-free survival was 3.5 months with nivolumab versus 2.8 months with docetaxel (hazard ratio for death or disease progression, 0.62; 95% CI, 0.47 to 0.81; P<0.001). The expression of the PD-1 ligand (PD-L1) was neither prognostic nor predictive of benefit. Treatment-related adverse events of grade 3 or 4 were reported in 7% of the patients in the nivolumab group as compared with 55% of those in the docetaxel group.\n\n\nCONCLUSIONS\nAmong patients with advanced, previously treated squamous-cell NSCLC, overall survival, response rate, and progression-free survival were significantly better with nivolumab than with docetaxel, regardless of PD-L1 expression level. (Funded by Bristol-Myers Squibb; CheckMate 017 ClinicalTrials.gov number, NCT01642004.).", "title": "" }, { "docid": "8fc31ce6dbe961c2f08c4efc108967be", "text": "PURPOSE\nImmunomodulatory drugs differ in mechanism-of-action from directly cytotoxic cancer therapies. Identifying factors predicting clinical response could guide patient selection and therapeutic optimization.\n\n\nEXPERIMENTAL DESIGN\nPatients (N = 41) with melanoma, non-small cell lung carcinoma (NSCLC), renal cell carcinoma (RCC), colorectal carcinoma, or castration-resistant prostate cancer were treated on an early-phase trial of anti-PD-1 (nivolumab) at one institution and had evaluable pretreatment tumor specimens. Immunoarchitectural features, including PD-1, PD-L1, and PD-L2 expression, patterns of immune cell infiltration, and lymphocyte subpopulations, were assessed for interrelationships and potential correlations with clinical outcomes.\n\n\nRESULTS\nMembranous (cell surface) PD-L1 expression by tumor cells and immune infiltrates varied significantly by tumor type and was most abundant in melanoma, NSCLC, and RCC. In the overall cohort, PD-L1 expression was geographically associated with infiltrating immune cells (P < 0.001), although lymphocyte-rich regions were not always associated with PD-L1 expression. Expression of PD-L1 by tumor cells and immune infiltrates was significantly associated with expression of PD-1 on lymphocytes. PD-L2, the second ligand for PD-1, was associated with PD-L1 expression. 
Tumor cell PD-L1 expression correlated with objective response to anti-PD-1 therapy, when analyzing either the specimen obtained closest to therapy or the highest scoring sample among multiple biopsies from individual patients. These correlations were stronger than borderline associations of PD-1 expression or the presence of intratumoral immune cell infiltrates with response.\n\n\nCONCLUSIONS\nTumor PD-L1 expression reflects an immune-active microenvironment and, while associated with other immunosuppressive molecules, including PD-1 and PD-L2, is the single factor most closely correlated with response to anti-PD-1 blockade. Clin Cancer Res; 20(19); 5064-74. ©2014 AACR.", "title": "" } ]
[ { "docid": "de1d6d032637f4943fcb6a04b32ab92b", "text": "Numerous pattern recognition applications can be formed as learning from graph-structured data, including social network, protein-interaction network, the world wide web data, knowledge graph, etc. While convolutional neural network (CNN) facilitates great advances in gridded image/video understanding tasks, very limited attention has been devoted to transform these successful network structures (including Inception net, Residual net, Dense net, etc.) to establish convolutional networks on graph, due to its irregularity and complexity geometric topologies (unordered vertices, unfixed number of adjacent edges/vertices). In this paper, we aim to give a comprehensive analysis of when work matters by transforming different classical network structures to graph CNN, particularly in the basic graph recognition problem. Specifically, we firstly review the general graph CNN methods, especially in its spectral filtering operation on the irregular graph data. We then introduce the basic structures of ResNet, Inception and DenseNet into graph CNN and construct these network structures on graph, named as G ResNet, G Inception, G DenseNet. In particular, it seeks to help graph CNNs by shedding light on how these classical network structures work and providing guidelines for choosing appropriate graph network frameworks. Finally, we comprehensively evaluate the performance of these different network structures on several public graph datasets (including social networks and bioinformatic datasets), and demonstrate how different network structures work on graph CNN in the graph recognition task.", "title": "" }, { "docid": "879688c3b77e639a43445aa49556ebd0", "text": "Unethical behavior by “ordinary” people poses significant societal and personal challenges. We present a novel framework centered on the role of self-serving justification to build upon and advance the rapidly expanding research on intentional unethical behavior of people who value their morality highly. We propose that self-serving justifications emerging before and after people engage in intentional ethical violations mitigate the threat to the moral self, enabling them to do wrong while feeling moral. Pre-violation justifications lessen the anticipated threat to the moral self by redefining questionable behaviors as excusable. Post-violation justifications alleviate the experienced threat to the moral self through compensations that balance or lessen violations. We highlight the psychological mechanisms that prompt people to do wrong and feel moral, and suggest future research directions regarding the temporal dimension of self-serving justifications of ethical misconduct.", "title": "" }, { "docid": "04846001f9136102088326a40b0fa7ff", "text": "In this paper, we propose a novel approach of learning mid-level filters from automatically discovered patch clusters for person re-identification. It is well motivated by our study on what are good filters for person re-identification. Our mid-level filters are discriminatively learned for identifying specific visual patterns and distinguishing persons, and have good cross-view invariance. First, local patches are qualitatively measured and classified with their discriminative power. Discriminative and representative patches are collected for filter learning. 
Second, patch clusters with coherent appearance are obtained by pruning hierarchical clustering trees, and a simple but effective cross-view training strategy is proposed to learn filters that are view-invariant and discriminative. Third, filter responses are integrated with patch matching scores in RankSVM training. The effectiveness of our approach is validated on the VIPeR dataset and the CUHK01 dataset. The learned mid-level features are complementary to existing handcrafted low-level features, and improve the best Rank-1 matching rate on the VIPeR dataset by 14%.", "title": "" }, { "docid": "4f31b16c53632e2d1ae874a692e5b64e", "text": "Previously published algorithms for finding the longest common subsequence of two sequences of length n have had a best-case running time of O(n2). An algorithm for this problem is presented which has a running time of O((r + n) log n), where r is the total number of ordered pairs of positions at which the two sequences match. Thus in the worst case the algorithm has a running time of O(n2 log n). However, for those applications where most positions of one sequence match relatively few positions in the other sequence, a running time of O(n log n) can be expected.", "title": "" }, { "docid": "e0c52b0fdf2d67bca4687b8060565288", "text": "Large graph databases are commonly collected and analyzed in numerous domains. For reasons related to either space efficiency or for privacy protection (e.g., in the case of social network graphs), it sometimes makes sense to replace the original graph with a summary, which removes certain details about the original graph topology. However, this summarization process leaves the database owner with the challenge of processing queries that are expressed in terms of the original graph, but are answered using the summary. In this paper, we propose a formal semantics for answering queries on summaries of graph structures. At its core, our formulation is based on a random worlds model. We show that important graph-structure queries (e.g., adjacency, degree, and eigenvector centrality) can be answered efficiently and in closed form using these semantics. Further, based on this approach to query answering, we formulate three novel graph partitioning/compression problems. We develop algorithms for finding a graph summary that least affects the accuracy of query results, and we evaluate our proposed algorithms using both real and synthetic data.", "title": "" }, { "docid": "a8d3a75cdc3bb43217a0120edf5025ff", "text": "An important approach to text mining involves the use of natural-language information extraction. Information extraction (IE) distills structured data or knowledge from unstructured text by identifying references to named entities as well as stated relationships between such entities. IE systems can be used to directly extricate abstract knowledge from a text corpus, or to extract concrete data from a set of documents which can then be further analyzed with traditional data-mining techniques to discover more general patterns. We discuss methods and implemented systems for both of these approaches and summarize results on mining real text corpora of biomedical abstracts, job announcements, and product descriptions. 
We also discuss challenges that arise when employing current information extraction technology to discover knowledge in text.", "title": "" }, { "docid": "632f42f71b09f4dea40bc1cccd2d9604", "text": "The phenomenon of radicalization is investigated within a mixed population composed of core and sensitive subpopulations. The latest includes first to third generation immigrants. Respective ways of life may be partially incompatible. In case of a conflict core agents behave as inflexible about the issue. In contrast, sensitive agents can decide either to live peacefully adjusting their way of life to the core one, or to oppose it with eventually joining violent activities. The interplay dynamics between peaceful and opponent sensitive agents is driven by pairwise interactions. These interactions occur both within the sensitive population and by mixing with core agents. The update process is monitored using a Lotka-Volterra-like Ordinary Differential Equation. Given an initial tiny minority of opponents that coexist with both inflexible and peaceful agents, we investigate implications on the emergence of radicalization. Opponents try to turn peaceful agents to opponents driving radicalization. However, inflexible core agents may step in to bring back opponents to a peaceful choice thus weakening the phenomenon. The required minimum individual core involvement to actually curb radicalization is calculated. It is found to be a function of both the majority or minority status of the sensitive subpopulation with respect to the core subpopulation and the degree of activeness of opponents. The results highlight the instrumental role core agents can have to hinder radicalization within the sensitive subpopulation. Some hints are outlined to favor novel public policies towards social integration.", "title": "" }, { "docid": "21870abb7943b1b26c844bff1685da1c", "text": "Many robots capable of performing social behaviors have recently been developed for Human-Robot Interaction (HRI) studies. These social robots are applied in various domains such as education, entertainment, medicine, and collaboration. Besides the undisputed advantages, a major difficulty in HRI studies with social robots is that the robot platforms are typically expensive and/or not open-source. It burdens researchers to broaden experiments to a larger scale or apply study results in practice. This paper describes a method to modify My Keepon, a toy version of Keepon robot, to be a programmable platform for HRI studies, especially for robot-assisted therapies. With an Arduino microcontroller board and an open-source Microsoft Visual C# software, users are able to fully control the sounds and motions of My Keepon, and configure the robot to the needs of their research. Peripherals can be added for advanced studies (e.g., mouse, keyboard, buttons, PlayStation2 console, Emotiv neuroheadset, Kinect). Our psychological experiment results show that My Keepon modification is a useful and low-cost platform for several HRI studies.", "title": "" }, { "docid": "65849cfb115918dd264445e91698e868", "text": "Handwritten character recognition is always a frontier area of research in the field of pattern recognition. There is a large demand for OCR on hand written documents in Image processing. Even though, sufficient studies have performed in foreign scripts like Arabic, Chinese and Japanese, only a very few work can be traced for handwritten character recognition mainly for the south Indian scripts. 
OCR system development for Indian script has many application areas like preserving manuscripts and ancient literatures written in different Indian scripts and making digital libraries for the documents. Feature extraction and classification are essential steps of character recognition process affecting the overall accuracy of the recognition system. This paper presents a brief overview of digital image processing techniques such as Feature Extraction, Image Restoration and Image Enhancement. A brief history of OCR and various approaches to character recognition is also discussed in this paper.", "title": "" }, { "docid": "0e64386d566fafabb793fe33a0ac1280", "text": "Autonomous mobile robot navigation is a very relevant problem in robotics research. This paper proposes a vision-based autonomous navigation system using artificial neural networks (ANN) and finite state machines (FSM). In the first step, ANNs are used to process the image frames taken from the robot´s camera, classifying the space, resulting in navigable or non-navigable areas (image road segmentation). Then, the ANN output is processed and used by a FSM, which identifies the robot´s current state, and define which action the robot should take according to the processed image frame. Different experiments were performed in order to validate and evaluate this approach, using a small mobile robot with integrated camera, in a structured indoor environment. The integration of ANN vision-based algorithms and robot´s action control based on a FSM, as proposed in this paper, demonstrated to be a promising approach to autonomous mobile robot navigation.", "title": "" }, { "docid": "debb1b975738fd0b3db01bbc1b2ff9f3", "text": "An attempt to solve the collapse problem in the framework of a time-symmetric quantum formalism is reviewed. Although the proposal does not look very attractive, its concept a world defined by two quantum states, one evolving forwards and one evolving backwards in time is found to be useful in modifying the many-worlds picture of Everett’s theory.", "title": "" }, { "docid": "883d79eac056314ae45feca23d79c3e3", "text": "Our life is characterized by the presence of a multitude of interactive devices and smart objects exploited for disparate goals in different contexts of use. Thus, it is impossible for application developers to predict at design time the devices and objects users will exploit, how they will be arranged, and in which situations and for which objectives they will be used. For such reasons, it is important to make end users able to easily and autonomously personalize the behaviour of their Internet of Things applications, so that they can better comply with their specific expectations. In this paper, we present a method and a set of tools that allow end users without programming experience to customize the context-dependent behaviour of their Web applications through the specification of trigger-action rules. The environment is able to support end-user specification of more flexible behaviour than what can be done with existing commercial tools, and it also includes an underlying infrastructure able to detect the possible contextual changes in order to achieve the desired behaviour. The resulting set of tools is able to support the dynamic creation and execution of personalized application versions more suitable for users’ needs in specific contexts of use. Thus, it represents a contribution to obtaining low threshold/high ceiling environments. 
We also report on an example application in the home automation domain, and a user study that has provided useful positive feedback.", "title": "" }, { "docid": "d5a18a82f8e041b717291c69676c7094", "text": "Total sleep deprivation (TSD) for one whole night improves depressive symptoms in 40-60% of treatments. The degree of clinical change spans a continuum from complete remission to worsening (in 2-7%). Other side effects are sleepiness and (hypo-) mania. Sleep deprivation (SD) response shows up in the SD night or on the following day. Ten to 15% of patients respond after recovery sleep only. After recovery sleep 50-80% of day 1 responders suffer a complete or partial relapse; but improvement can last for weeks. Sleep seems to lead to relapse although this is not necessarily the case. Treatment effects may be stabilised by antidepressant drugs, lithium, shifting of sleep time or light therapy. The best predictor of a therapeutic effect is a large variability of mood. Current opinion is that partial sleep deprivation (PSD) in the second half of the night is equally effective as TSD. There are, however, indications that TSD is superior. Early PSD (i.e. sleeping between 3:00 and 6:00) has the same effect as late PSD given equal sleep duration. New data cast doubt on the time-honoured conviction that REM sleep deprivation is more effective than non-REM SD. Both may work by reducing total sleep time. SD is an unspecific therapy. The main indication is the depressive syndrome. Some studies show positive effects in Parkinson's disease. It is still unknown how sleep deprivation works.", "title": "" }, { "docid": "fc2f99fff361e68f154d88da0739bac4", "text": "Mondor's disease is characterized by thrombophlebitis of the superficial veins of the breast and the chest wall. The list of causes is long. Various types of clothing, mainly tight bras and girdles, have been postulated as causes. We report a case of a 34-year-old woman who referred typical symptoms and signs of Mondor's disease, without other possible risk factors, and showed the cutaneous findings of the tight bra. Therefore, after distinguishing benign causes of Mondor's disease from hidden malignant causes, the clinicians should consider this clinical entity.", "title": "" }, { "docid": "55dc046b0052658521d627f29bcd7870", "text": "The proliferation of IT and its consequent dispersion is an enterprise reality, however, most organizations do not have adequate tools and/or methodologies that enable the management and coordination of their Information Systems. The Zachman Framework provides a structured way for any organization to acquire the necessary knowledge about itself with respect to the Enterprise Architecture. Zachman proposes a logical structure for classifying and organizing the descriptive representations of an enterprise, in different dimensions, and each dimension can be perceived in different perspectives.In this paper, we propose a method for achieving an Enterprise Architecture Framework, based on the Zachman Framework Business and IS perspectives, that defines the several artifacts for each cell, and a method which defines the sequence of filling up each cell in a top-down and incremental approach. We also present a tool developed for the purpose of supporting the Zachman Framework concepts. 
The tool: (i) behaves as an information repository for the framework's concepts; (ii) produces the proposed artifacts that represent each cell contents, (iii) allows multi-dimensional analysis among cell's elements, which is concerned with perspectives (rows) and/or dimensions (columns) dependency; and (iv) finally, evaluate the integrity, dependency and, business and information systems alignment level, through the answers defined for each framework dimension.", "title": "" }, { "docid": "abba5d320a4b6bf2a90ba2b836019660", "text": "We aim at segmenting small organs (e.g., the pancreas) from abdominal CT scans. As the target often occupies a relatively small region in the input image, deep neural networks can be easily confused by the complex and variable background. To alleviate this, researchers proposed a coarse-to-fine approach [46], which used prediction from the first (coarse) stage to indicate a smaller input region for the second (fine) stage. Despite its effectiveness, this algorithm dealt with two stages individually, which lacked optimizing a global energy function, and limited its ability to incorporate multi-stage visual cues. Missing contextual information led to unsatisfying convergence in iterations, and that the fine stage sometimes produced even lower segmentation accuracy than the coarse stage. This paper presents a Recurrent Saliency Transformation Network. The key innovation is a saliency transformation module, which repeatedly converts the segmentation probability map from the previous iteration as spatial weights and applies these weights to the current iteration. This brings us two-fold benefits. In training, it allows joint optimization over the deep networks dealing with different input scales. In testing, it propagates multi-stage visual information throughout iterations to improve segmentation accuracy. Experiments in the NIH pancreas segmentation dataset demonstrate the state-of-the-art accuracy, which outperforms the previous best by an average of over 2%. Much higher accuracies are also reported on several small organs in a larger dataset collected by ourselves. In addition, our approach enjoys better convergence properties, making it more efficient and reliable in practice.", "title": "" }, { "docid": "d6794e4917896ba1040b4a83f8bd69b4", "text": "There has been little work on computational grammars for Amh aric or other Ethio-Semitic languages and their use for pars ing and generation. This paper introduces a grammar for a fragment o f Amharic within the Extensible Dependency Grammar (XDG) fr amework of Debusmann. A language such as Amharic presents special ch allenges for the design of a dependency grammar because of th complex morphology and agreement constraints. The paper describes how a morphological analyzer for the language can be integra t d into the grammar, introduces empty nodes as a solution to the problem of null subjects and objects, and extends the agreement prin ci le of XDG in several ways to handle verb agreement with objects as well as subjects and the constraints governing relative clause v erbs. It is shown that XDG’s multiple dimensions lend themselves to a new appr oach to relative clauses in the language. The introduced ext ensions to XDG are also applicable to other Ethio-Semitic languages.", "title": "" }, { "docid": "e94a8186f99bcc7397b8476f8ea12125", "text": "Commentators regularly lament the proliferation of both negative and/or strategic (“horserace”) coverage in political news content. 
The most frequent account for this trends focuses on news norms, and/or the priorities of news journalists. Here, we build on recent work arguing for the importance of demand-side, rather than supply-side, explanations of news content. In short, news may be negative and/or strategy-focused because that is the kind of news that people are interested in. We use a lab experiment to capture participants’ news selection biases, alongside a survey capturing their stated news preferences. Politically-interested participants are more likely to select negative stories. Interest is associated with a greater preference for strategic frames as well. And results suggest that behavioral results do not conform to attitudinal ones. That is, regardless of what participants say, they exhibit a preference for negative news content. Literature on political communication often finds itself concerned with two related themes in media content: (1) negative news frames that generally cast politicians and politics in an unfavourable light, and (2) cynical strategy coverage that focuses on the “horserace” and conflictual aspects of politics. The two themes may be related, insofar as strategic coverage implies that politicians are motivated only by power, not the common good (e.g. Capella and Jamieson 1997). Regardless of their relation, the body of work on these frames makes two assumptions: first, that they are bad for society; and second, that their root cause lies in the actions of journalists. ∗Paper prepared for delivery at the Annual Conference of the Political Science Association, June 2013, Victoria BC. †Email: marc.trussler@mail.mcgill.ca ‡Email: stuart.soroka@mcgill.ca. 1 We rely here on Capella and Jamieson’s (1997) definition of strategy coverage: “(1) winning and losing as the central concern; (2) the language of wars, games, and competition; (3) a story with performers, critics and audience (voters); (4) centrality of performance, style, and perception of the candidate; (5) heavy weighting of polls and the candidates” (31). In this way it includes both the “game” schema and “horserace” coverage which often become muddled in the literature. We seek here to question the second assumption through a simple supposition: that the content of any given media environment, both on the personal and systemic level, is determined by some interplay between what media sources supply, and what consumers demand. Instead of looking at particular processes and norms inherent in the news-making process which may generate these themes (Sabato 1991; Patterson 1994; Lichter and Noyes 1995; Farnsworth and Lichter 2007), we instead focus on the additional role that demand plays in their provision. Put simply, we argue that the proliferation of negative and/or strategic content is at least in part a function of individuals’ (quite possibly subconscious) preferences. This is to our knowledge the first exploration of news selection biases outside the US context, and/or outside the context of an election campaign. It is in part an extension of existing work focused on consumer interest in horserace stories (e.g., Iyengar et al. 2004), or in negative content (e.g., Meffert et al. 2006), although it is the first to simultaneously consider both. It does so using a new lab-experimental approach that we believe has some advantages where both internal and external validity are concerned. 
It also provides a rare opportunity to compare actual news selection behavior with answers to survey questions about participants’ preferences in media content. We find, in sum, that individuals tend to select negative and strategic news frames, even when other options are available, and, moreover, even when their own stated preferences are for news that is less negative and/or strategic. Results thus support past work suggesting that participants are more likely to select negative stories rather than positive ones, though we find that this is particularly true for strategic stories. We also find evidence, in line with past work, that participants expressing high levels of political interest show a greater attraction to strategic stories. (This is true for citizens versus non-citizens as well.) Our own interpretation of these results draws on work in psychology, biology, economics, and political science on the “negativity-bias.” But even a thin reading of our findings emphasizes a too-often overlooked aspect of news content: it is the way it is not just because of the nature of the supply of news, but also the demand. The Cynical Media and their Audience That the media are negative and cynical about politics and politicians is widely agreed upon in the literature. (For a recent review see Soroka 2012.) Most scholars see this trend as a product, or perhaps a mutation, of the media’s role as the watchdog “Fourth Estate.” Patterson (1994: 79) argues that journalists’ understanding of what this role entails has evolved in a way that has caused them to shift from “silent skeptics” to “vocal cynics.” Indeed, the great deal of literature surrounding … For a useful distinction of demand- versus supply-side accounts of media content, see (Andrew 2007).", "title": "" }, { "docid": "ac4c2f4820496f40e08e587b070d4ef5", "text": "We have developed an implantable fuel cell that generates power through glucose oxidation, producing 3.4 μW cm(-2) steady-state power and up to 180 μW cm(-2) peak power. The fuel cell is manufactured using a novel approach, employing semiconductor fabrication techniques, and is therefore well suited for manufacture together with integrated circuits on a single silicon wafer. Thus, it can help enable implantable microelectronic systems with long-lifetime power sources that harvest energy from their surrounds. The fuel reactions are mediated by robust, solid state catalysts. Glucose is oxidized at the nanostructured surface of an activated platinum anode. Oxygen is reduced to water at the surface of a self-assembled network of single-walled carbon nanotubes, embedded in a Nafion film that forms the cathode and is exposed to the biological environment. The catalytic electrodes are separated by a Nafion membrane. The availability of fuel cell reactants, oxygen and glucose, only as a mixture in the physiologic environment, has traditionally posed a design challenge: Net current production requires oxidation and reduction to occur separately and selectively at the anode and cathode, respectively, to prevent electrochemical short circuits. Our fuel cell is configured in a half-open geometry that shields the anode while exposing the cathode, resulting in an oxygen gradient that strongly favors oxygen reduction at the cathode. Glucose reaches the shielded anode by diffusing through the nanotube mesh, which does not catalyze glucose oxidation, and the Nafion layers, which are permeable to small neutral and cationic species.
We demonstrate computationally that the natural recirculation of cerebrospinal fluid around the human brain theoretically permits glucose energy harvesting at a rate on the order of at least 1 mW with no adverse physiologic effects. Low-power brain-machine interfaces can thus potentially benefit from having their implanted units powered or recharged by glucose fuel cells.", "title": "" }, { "docid": "46ec91100f7a7a18c5383469e7ac8d02", "text": "Fast event ordering is critical for delay-sensitive edge computing applications that serve massive geographically distributed clients. Using a centralized cloud to determine the event order suffers from unsatisfactory latency. Naive edge-centric solutions, which designate one edge node to order all the events, have scalability and single point of failure issues. To address these problems, we propose EdgeCons, a novel consensus algorithm optimized for edge computing networks. EdgeCons achieves fast consensus by running a sequence of Paxos instances among the edge nodes and dynamically distributing their leadership based on the recent running history. It also guarantees progressiveness by incorporating a reliable, backend cloud. A preliminary evaluation shows that EdgeCons works more efficiently than the state-of-theart consensus algorithms, in the context of achieving fast event ordering in edge computing networks.", "title": "" } ]
scidocsrr
62033235c6aa05b1442b204e73fd0aa3
Static analysis for probabilistic programs: inferring whole program properties from finitely many paths
[ { "docid": "e49aa0d0f060247348f8b3ea0a28d3c6", "text": "Over the past five years a new approach to privacy-preserving data analysis has born fruit [13, 18, 7, 19, 5, 37, 35, 8, 32]. This approach differs from much (but not all!) of the related literature in the statistics, databases, theory, and cryptography communities, in that a formal and ad omnia privacy guarantee is defined, and the data analysis techniques presented are rigorously proved to satisfy the guarantee. The key privacy guarantee that has emerged is differential privacy. Roughly speaking, this ensures that (almost, and quantifiably) no risk is incurred by joining a statistical database. In this survey, we recall the definition of differential privacy and two basic techniques for achieving it. We then show some interesting applications of these techniques, presenting algorithms for three specific tasks and three general results on differentially private learning.", "title": "" } ]
[ { "docid": "36afb791436e95cec6167499bf4b0214", "text": "Leveraging historical data from the movie industry, this study built a predictive model for movie success, deviating from past studies by predicting profit (as opposed to revenue) at early stages of production (as opposed to just prior to release) to increase investor certainty. Our work derived several groups of novel features for each movie, based on the cast and collaboration network (who’), content (‘what’), and time of release (‘when’).", "title": "" }, { "docid": "b017fd773265c73c7dccad86797c17b8", "text": "Active learning, which has a strong impact on processing data prior to the classification phase, is an active research area within the machine learning community, and is now being extended for remote sensing applications. To be effective, classification must rely on the most informative pixels, while the training set should be as compact as possible. Active learning heuristics provide capability to select unlabeled data that are the “most informative” and to obtain the respective labels, contributing to both goals. Characteristics of remotely sensed image data provide both challenges and opportunities to exploit the potential advantages of active learning. We present an overview of active learning methods, then review the latest techniques proposed to cope with the problem of interactive sampling of training pixels for classification of remotely sensed data with support vector machines (SVMs). We discuss remote sensing specific approaches dealing with multisource and spatially and time-varying data, and provide examples for high-dimensional hyperspectral imagery.", "title": "" }, { "docid": "0d2ddb448c01172e53f19d9d5ac39f21", "text": "Malicious Android applications are currently the biggest threat in the scope of mobile security. To cope with their exponential growth and with their deceptive and hideous behaviors, static analysis signature based approaches are not enough to timely detect and tackle brand new threats such as polymorphic and composition malware. This work presents BRIDEMAID, a novel framework for analysis of Android apps' behavior, which exploits both a static and dynamic approach to detect malicious apps directly on mobile devices. The static analysis is based on n-grams matching to statically recognize malicious app execution patterns. The dynamic analysis is instead based on multi-level monitoring of device, app and user behavior to detect and prevent at runtime malicious behaviors. The framework has been tested against 2794 malicious apps reporting a detection accuracy of 99,7% and a negligible false positive rate, tested on a set of 10k genuine apps.", "title": "" }, { "docid": "23eb737d3930862326f81bac73c5e7f5", "text": "O discussion communities have become a widely used medium for interaction, enabling conversations across a broad range of topics and contexts. Their success, however, depends on participants’ willingness to invest their time and attention in the absence of formal role and control structures. Why, then, would individuals choose to return repeatedly to a particular community and engage in the various behaviors that are necessary to keep conversation within the community going? Some studies of online communities argue that individuals are driven by self-interest, while others emphasize more altruistic motivations. 
To get beyond these inconsistent explanations, we offer a model that brings dissimilar rationales into a single conceptual framework and shows the validity of each rationale in explaining different online behaviors. Drawing on typologies of organizational commitment, we argue that members may have psychological bonds to a particular online community based on (a) need, (b) affect, and/or (c) obligation. We develop hypotheses that explain how each form of commitment to a community affects the likelihood that a member will engage in particular behaviors (reading threads, posting replies, moderating the discussion). Our results indicate that each form of community commitment has a unique impact on each behavior, with need-based commitment predicting thread reading, affect-based commitment predicting reply posting and moderating behaviors, and obligation-based commitment predicting only moderating behavior. Researchers seeking to understand how discussion-based communities function will benefit from this more precise theorizing of how each form of member commitment relates to different kinds of online behaviors. Community managers who seek to encourage particular behaviors may use our results to target the underlying form of commitment most likely to encourage the activities they wish to promote.", "title": "" }, { "docid": "f2f5495973c560f15c307680bd5d3843", "text": "The Bayesian analysis of neural networks is difficult because a simple prior over weights implies a complex prior distribution over functions . In this paper we investigate the use of Gaussian process priors over functions, which permit the predictive Bayesian analysis for fixed values of hyperparameters to be carried out exactly using matrix operations. Two methods, using optimization and averaging (via Hybrid Monte Carlo) over hyperparameters have been tested on a number of challenging problems and have produced excellent results.", "title": "" }, { "docid": "91504378f63ba0c0d662180981f30f03", "text": "Closely matching natural teeth with an artificial restoration can be one of the most challenging procedures in restorative dentistry. Natural teeth vary greatly in color and shape. They reveal ample information about patients' background and personality. Dentistry provides the opportunity to restore unique patient characteristics or replace them with alternatives. Whether one tooth or many are restored, the ability to assess and properly communicate information to the laboratory can be greatly improved by learning the language of color and light characteristics. It is only possible to duplicate in ceramic what has been distinguished, understood, and communicated in the shade-matching process of the natural dentition. This article will give the reader a better understanding of what happens when incident light hits the surface of a tooth and give strategies for best assessing and communicating this to the dental laboratory.", "title": "" }, { "docid": "3f4d83525145a963c87167e3e02136a6", "text": "Using the GTZAN Genre Collection [1], we start with a set of 1000 30 second song excerpts subdivided into 10 pre-classified genres: Blues, Classical, Country, Disco, Hip-Hop, Jazz, Metal, Pop, Reggae, and Rock. We downsampled to 4000 Hz, and further split each excerpt into 5-second clips For each clip, we compute a spectrogram using Fast Fourier Transforms, giving us 22 timestep vectors of dimensionality 513 for each clip. 
Spectrograms separate out component audio signals at different frequencies from a raw audio signal, and provide us with a tractable, loosely structured feature set for any given audio clip that is well-suited for deep learning techniques. (See, for example, the spectrogram produced by a jazz excerpt below.)", "title": "" }, { "docid": "a56650db0651fc0e76f9c0f383aec0e9", "text": "Solid evidence of virtual reality's benefits has graduated from impressive visual demonstrations to producing results in practical applications. Further, a realistic experience is no longer immersion's sole asset. Empirical studies show that various components of immersion provide other benefits - full immersion is not always necessary. The goal of immersive virtual environments (VEs) was to let the user experience a computer-generated world as if it were real - producing a sense of presence, or \"being there,\" in the user's mind.", "title": "" }, { "docid": "499fe7f6bf5c7d8fcfe690e7390a5d36", "text": "Compressional or traumatic asphyxia is a well recognized entity to most forensic pathologists. The vast majority of reported cases have been accidental. The case reported here describes the apparent inflicted compressional asphyxia of a small child. A review of mechanisms and related controversy regarding proposed mechanisms is discussed.", "title": "" }, { "docid": "2cc1373758f509c39275562f69b602c1", "text": "This paper presents our solution for enabling a quadrotor helicopter to autonomously navigate unstructured and unknown indoor environments. We compare two sensor suites, specifically a laser rangefinder and a stereo camera. Laser and camera sensors are both well-suited for recovering the helicopter's relative motion and velocity. Because they use different cues from the environment, each sensor has its own set of advantages and limitations that are complementary to the other sensor. Our eventual goal is to integrate both sensors on-board a single helicopter platform, leading to the development of an autonomous helicopter system that is robust to generic indoor environmental conditions. In this paper, we present results in this direction, describing the key components for autonomous navigation using either of the two sensors separately.", "title": "" }, { "docid": "fa2e8f411d74030bbec7937114f88f35", "text": "We present a method for synthesizing a frontal, neutral-expression image of a person's face given an input face photograph. This is achieved by learning to generate facial landmarks and textures from features extracted from a facial-recognition network. Unlike previous generative approaches, our encoding feature vector is largely invariant to lighting, pose, and facial expression. Exploiting this invariance, we train our decoder network using only frontal, neutral-expression photographs. Since these photographs are well aligned, we can decompose them into a sparse set of landmark points and aligned texture maps. The decoder then predicts landmarks and textures independently and combines them using a differentiable image warping operation. The resulting images can be used for a number of applications, such as analyzing facial attributes, exposure and white balance adjustment, or creating a 3-D avatar.", "title": "" }, { "docid": "246cddf2c76383e82dab8f498b6974bb", "text": "With the growing use of the Social Web, an increasing number of applications for exchanging opinions with other people are becoming available online.
These applications are widely adopted with the consequence that the number of opinions about the debated issues increases. In order to cut in on a debate, the participants need first to evaluate the opinions in favour or against the debated issue. Argumentation theory proposes algorithms and semantics to evaluate the set of accepted arguments, given the conflicts among them. The main problem is how to automatically generate the arguments from the natural language formulation of the opinions used in these applications. Our paper addresses this problem by proposing and evaluating the use of natural language techniques to generate the arguments. In particular, we adopt the textual entailment approach, a generic framework for applied semantics, where linguistic objects are mapped by means of semantic inferences at a textual level. We couple textual entailment together with a Dung-like argumentation system which allows us to identify the arguments that are accepted in the considered online debate. The originality of the proposed framework lies in the following point: natural language debates are analyzed and the arguments are automatically extracted.", "title": "" }, { "docid": "7dc7eaef334fc7678821fa66424421f1", "text": "The present research complements extant variable-centered research that focused on the dimensions of autonomous and controlled motivation through adoption of a person-centered approach for identifying motivational profiles. Both in high school students (Study 1) and college students (Study 2), a cluster analysis revealed 4 motivational profiles: a good quality motivation group (i.e., high autonomous, low controlled); a poor quality motivation group (i.e., low autonomous, high controlled); a low quantity motivation group (i.e., low autonomous, low controlled); and a high quantity motivation group (i.e., high autonomous, high controlled). To compare the 4 groups, the authors derived predictions from qualitative and quantitative perspectives on motivation. Findings generally favored the qualitative perspective; compared with the other groups, the good quality motivation group displayed the most optimal learning pattern and scored highest on perceived need-supportive teaching. Theoretical and practical implications of the findings are discussed.", "title": "" }, { "docid": "7f5af3806f0baa040a26f258944ad3f9", "text": "Linear Discriminant Analysis (LDA) is a widely-used supervised dimensionality reduction method in computer vision and pattern recognition. In null space based LDA (NLDA), a well-known LDA extension, between-class distance is maximized in the null space of the within-class scatter matrix. However, there are some limitations in NLDA. Firstly, for many data sets, null space of within-class scatter matrix does not exist, thus NLDA is not applicable to those datasets. Secondly, NLDA uses arithmetic mean of between-class distances and gives equal consideration to all between-class distances, which makes larger between-class distances can dominate the result and thus limits the performance of NLDA. In this paper, we propose a harmonic mean based Linear Discriminant Analysis, Multi-Class Discriminant Analysis (MCDA), for image classification, which minimizes the reciprocal of weighted harmonic mean of pairwise between-class distance. More importantly, MCDA gives higher priority to maximize small between-class distances. MCDA can be extended to multi-label dimension reduction. 
Results on 7 single-label data sets and 4 multi-label data sets show that MCDA has consistently better performance than 10 other single-label approaches and 4 other multi-label approaches in terms of classification accuracy, macro and micro average F1 score.", "title": "" }, { "docid": "8c47d9a93e3b9d9f31b77b724bf45578", "text": "A high-sensitivity fully passive 868-MHz wake-up radio (WUR) front-end for wireless sensor network nodes is presented. The front-end does not have an external power source and extracts the entire energy from the radio-frequency (RF) signal received at the antenna. A high-efficiency differential RF-to-DC converter rectifies the incident RF signal and drives the circuit blocks including a low-power comparator and reference generators; and at the same time detects the envelope of the on-off keying (OOK) wake-up signal. The front-end is designed and simulated 0.13μm CMOS and achieves a sensitivity of -33 dBm for a 100 kbps wake-up signal.", "title": "" }, { "docid": "5ae22c0209333125c61f66aafeeda139", "text": "The author reports the development of a multi-finger robot hand with the mechatronics approach. The proposed robot hand has 4 fingers with 14 under-actuated joints driven by 10 linear actuators with linkages. Each of the 10 nodes in the distributed control system uses position and current feedback to monitor the contact stiffness and control the grasping force according to the motor current change rate. The combined force and position control loop enable the robot hand to grasp an object with the unknown shape. Pre-defined tasks, such as grasping and pinching are stored as scripts in the hand controller to provide a high-level programming interface for the upstream robot controller. The mechanical design, controller design and co-simulation are performed in an integrated model-based software environment, and also for the real time code generation and for mechanical parts manufacturing with a 3D printer. Based on the same model for design, a virtual robot hand interface is developed to provide off-line simulation tool and user interface to the robot hand to reduce the programming effort in fingers' motion planning. In the development of the robot hand, the mechatronics approach has been proven to be an indispensable tool for such a complex system.", "title": "" }, { "docid": "3a948bb405b89376807a60a2a70ce7f7", "text": "The objective of this research is to develop feature extraction and classification techniques for the task of acoustic event recognition (AER) in unstructured environments, which are those where adverse effects such as noise, distortion and multiple sources are likely to occur. The goal is to design a system that can achieve human-like sound recognition performance on a variety of hearing tasks in different environments. The research is important, as the field is commonly overshadowed by the more popular area of automatic speech recognition (ASR), and typical AER systems are often based on techniques taken directly from this. However, direct application presents difficulties, as the characteristics of acoustic events are less well defined than those of speech, and there is no sub-word dictionary available like the phonemes in speech. In addition, the performance of ASR systems typically degrades dramatically in such adverse, unstructured environments. Therefore, it is important to develop a system that can perform well for this challenging task. 
In this work, two novel feature extraction methods are proposed for recognition of environmental sounds in severe noisy conditions, based on the visual signature of the sounds. The first method is called the Spectrogram Image Feature (SIF), and is based on the timefrequency spectrogram of the sound. This is captured through an image-processing inspired quantisation and mapping of the dynamic range prior to feature extraction. Experimental results show that the feature based on the raw-power spectrogram has a good performance, and is particularly suited to severe mismatched conditions. The second proposed method is the Spectral Power Distribution Image Feature (SPD-IF), which uses the same image feature approach, but is based on an SPD image derived from the stochastic distribution of power over the sound clip. This is combined with a missing feature classification system, which marginalises the image regions containing only noise, and experiments show the method achieves the high accuracy of the baseline methods in clean conditions combined with robust results in mismatched noise.", "title": "" }, { "docid": "eadc50aebc6b9c2fbd16f9ddb3094c00", "text": "Instance segmentation is the problem of detecting and delineating each distinct object of interest appearing in an image. Current instance segmentation approaches consist of ensembles of modules that are trained independently of each other, thus missing opportunities for joint learning. Here we propose a new instance segmentation paradigm consisting in an end-to-end method that learns how to segment instances sequentially. The model is based on a recurrent neural network that sequentially finds objects and their segmentations one at a time. This net is provided with a spatial memory that keeps track of what pixels have been explained and allows occlusion handling. In order to train the model we designed a principled loss function that accurately represents the properties of the instance segmentation problem. In the experiments carried out, we found that our method outperforms recent approaches on multiple person segmentation, and all state of the art approaches on the Plant Phenotyping dataset for leaf counting.", "title": "" }, { "docid": "378f0e528dddcb0319d0015ebc5f8ccb", "text": "Specific and non specific cholinesterase activities were demonstrated in the ABRM of Mytilus edulis L. and Mytilus galloprovincialis L. by means of different techniques. The results were found identical for both species: neuromuscular junctions “en grappe”-type scarcely distributed within the ABRM, contain AChE. According to the histochemical inhibition tests, (a) the eserine inhibits AChE activity of the ABRM with a level of 5·10−5 M or higher, (b) the ChE non specific activities are inhibited by iso-OMPA level between 5·10−5 to 10−4 M. The histo- and cytochemical observations were completed by showing the existence of neuromuscular junctions containing small clear vesicles: they probably are the morphological support for ACh presence. Moreover, specific and non specific ChE activities were localized in the glio-interstitial cells. 
AChE precipitates developed along the ABRM sarcolemma, in some muscle mitochondria, and in the intercellular spaces remain enigmatic.", "title": "" }, { "docid": "301373338fe35426f5186f400f63dbd3", "text": "OBJECTIVE\nThis paper describes the state of the art, scientific publications and ongoing research related to the methods of analysis of respiratory sounds.\n\n\nMETHODS AND MATERIAL\nReview of the current medical and technological literature using Pubmed and personal experience.\n\n\nRESULTS\nThe study includes a description of the various techniques that are being used to collect auscultation sounds and a physical description of known pathologic sounds for which automatic detection tools were developed. Modern tools are based on artificial intelligence and on techniques such as artificial neural networks, fuzzy systems, and genetic algorithms…\n\n\nCONCLUSION\nThe next step will consist in finding new markers so as to increase the efficiency of decision aid algorithms and tools.", "title": "" } ]
scidocsrr
be3bde921a65f73375afbcdd6a19940a
Intergroup emotions: explaining offensive action tendencies in an intergroup context.
[ { "docid": "59af1eb49108e672a35f7c242c5b4683", "text": "“The value concept, more than any other, should occupy a central position . . . able to unify the apparently diverse interests of all the sciences concerned with human behavior.” These words, proclaiming the centrality of the value concept, were written by a psychologist (Rokeach, 1973, p. 3), but similar stands have been taken by sociologists (e.g., Williams, 1968) and anthropologists (e.g., Kluckhohn, 1951). These theorists view values as the criteria people use to select and justify actions and to evaluate people (including the self) and events. We, too, adopt this view of values as criteria rather than as qualities inherent in objects. This article discusses work that is part of a larger project intended to explore the importance of values in a wide variety of contexts. The project addresses three broad questions about values. First, how are the value priorities of individuals affected by their social experiences? That is, how do the common experiences people have, because of their shared locations in the social structure (their education, age, gender, occupation, etc.), influence their value priorities? And, how do individuals’ unique experiences (trauma, relations with parents, immigration, etc.) affect their value priorities? Second, how do the value priorities held by individuals affect their behavioral orientations and choices? That is, how do value priorities influence ideologies, attitudes, and actions in the political, religious, environmental, and other domains?", "title": "" } ]
[ { "docid": "bc57dfee1a00d7cfb025a1a5840623f8", "text": "Production and consumption relationship shows that marketing plays an important role in enterprises. In the competitive market, it is very important to be able to sell rather than produce. Nowadays, marketing is customeroriented and aims to meet the needs and expectations of customers to increase their satisfaction. While creating a marketing strategy, an enterprise must consider many factors. Which is why, the process can and should be considered as a multi-criteria decision making (MCDM) case. In this study, marketing strategies and marketing decisions in the new-product-development process has been analyzed in a macro level. To deal quantitatively with imprecision or uncertainty, fuzzy sets theory has been used throughout the analysis.", "title": "" }, { "docid": "f267f44fe9463ac0114335959f9739fa", "text": "HTTP Adaptive Streaming (HAS) is today the number one video technology for over-the-top video distribution. In HAS, video content is temporally divided into multiple segments and encoded at different quality levels. A client selects and retrieves per segment the most suited quality version to create a seamless playout. Despite the ability of HAS to deal with changing network conditions, HAS-based live streaming often suffers from freezes in the playout due to buffer under-run, low average quality, large camera-to-display delay, and large initial/channel-change delay. Recently, IETF has standardized HTTP/2, a new version of the HTTP protocol that provides new features for reducing the page load time in Web browsing. In this paper, we present ten novel HTTP/2-based methods to improve the quality of experience of HAS. Our main contribution is the design and evaluation of a push-based approach for live streaming in which super-short segments are pushed from server to client as soon as they become available. We show that with an RTT of 300 ms, this approach can reduce the average server-to-display delay by 90.1% and the average start-up delay by 40.1%.", "title": "" }, { "docid": "59c83aa2f97662c168316f1a4525fd4d", "text": "Light field microscopy is a new technique for high-speed volumetric imaging of weakly scattering or fluorescent specimens. It employs an array of microlenses to trade off spatial resolution against angular resolution, thereby allowing a 4-D light field to be captured using a single photographic exposure without the need for scanning. The recorded light field can then be used to computationally reconstruct a full volume. In this paper, we present an optical model for light field microscopy based on wave optics, instead of previously reported ray optics models. We also present a 3-D deconvolution method for light field microscopy that is able to reconstruct volumes at higher spatial resolution, and with better optical sectioning, than previously reported. To accomplish this, we take advantage of the dense spatio-angular sampling provided by a microlens array at axial positions away from the native object plane. This dense sampling permits us to decode aliasing present in the light field to reconstruct high-frequency information. We formulate our method as an inverse problem for reconstructing the 3-D volume, which we solve using a GPU-accelerated iterative algorithm. Theoretical limits on the depth-dependent lateral resolution of the reconstructed volumes are derived. We show that these limits are in good agreement with experimental results on a standard USAF 1951 resolution target. 
Finally, we present 3-D reconstructions of pollen grains that demonstrate the improvements in fidelity made possible by our method.", "title": "" }, { "docid": "765e766515c9c241ffd2d84572fd887f", "text": "The cost of reconciling consistency and state management with high availability is highly magnified by the unprecedented scale and robustness requirements of today’s Internet applications. We propose two strategies for improving overall availability using simple mechanisms that scale over large applications whose output behavior tolerates graceful degradation. We characterize this degradation in terms of harvest and yield, and map it directly onto engineering mechanisms that enhance availability by improving fault isolation, and in some cases also simplify programming. By collecting examples of related techniques in the literature and illustrating the surprising range of applications that can benefit from these approaches, we hope to motivate a broader research program in this area. 1. Motivation, Hypothesis, Relevance Increasingly, infrastructure services comprise not only routing, but also application-level resources such as search engines [15], adaptation proxies [8], and Web caches [20]. These applications must confront the same operational expectations and exponentially-growing user loads as the routing infrastructure, and consequently are absorbing comparable amounts of hardware and software. The current trend of harnessing commodity-PC clusters for scalability and availability [9] is reflected in the largest web server installations. These sites use tens to hundreds of PC’s to deliver 100M or more read-mostly page views per day, primarily using simple replication or relatively small data sets to increase throughput. The scale of these applications is bringing the wellknown tradeoff between consistency and availability [4] into very sharp relief. In this paper we propose two general directions for future work in building large-scale robust systems. Our approaches tolerate partial failures by emphasizing simple composition mechanisms that promote fault containment, and by translating possible partial failure modes into engineering mechanisms that provide smoothlydegrading functionality rather than lack of availability of the service as a whole. The approaches were developed in the context of cluster computing, where it is well accepted [22] that one of the major challenges is the nontrivial software engineering required to automate partial-failure handling in order to keep system management tractable. 2. Related Work and the CAP Principle In this discussion, strong consistency means singlecopy ACID [13] consistency; by assumption a stronglyconsistent system provides the ability to perform updates, otherwise discussing consistency is irrelevant. High availability is assumed to be provided through redundancy, e.g. data replication; data is considered highly available if a given consumer of the data can always reach some replica. Partition-resilience means that the system as whole can survive a partition between data replicas. Strong CAP Principle. Strong Consistency, High Availability, Partition-resilience: Pick at most 2. The CAP formulation makes explicit the trade-offs in designing distributed infrastructure applications. 
It is easy to identify examples of each pairing of CAP, outlining the proof by exhaustive example of the Strong CAP Principle: CA without P: Databases that provide distributed transactional semantics can only do so in the absence of a network partition separating server peers. CP without A: In the event of a partition, further transactions to an ACID database may be blocked until the partition heals, to avoid the risk of introducing merge conflicts (and thus inconsistency). AP without C: HTTP Web caching provides clientserver partition resilience by replicating documents, but a client-server partition prevents verification of the freshness of an expired replica. In general, any distributed database problem can be solved with either expiration-based caching to get AP, or replicas and majority voting to get PC (the minority is unavailable). In practice, many applications are best described in terms of reduced consistency or availability. For example, weakly-consistent distributed databases such as Bayou [5] provide specific models with well-defined consistency/availability tradeoffs; disconnected filesystems such as Coda [16] explicitly argued for availability over strong consistency; and expiration-based consistency mechanisms such as leases [12] provide fault-tolerant consistency management. These examples suggest that there is a Weak CAP Principle which we have yet to characterize precisely: The stronger the guarantees made about any two of strong consistency, high availability, or resilience to partitions, the weaker the guarantees that can be made about the third. 3. Harvest, Yield, and the CAP Principle Both strategies we propose for improving availability with simple mechanisms rely on the ability to broaden our notion of “correct behavior” for the target application, and then exploit the tradeoffs in the CAP principle to improve availability at large scale. We assume that clients make queries to servers, in which case there are at least two metrics for correct behavior: yield, which is the probability of completing a request, and harvest, which measures the fraction of the data reflected in the response, i.e. the completeness of the answer to the query. Yield is the common metric and is typically measured in “nines”: “four-nines availability” means a completion probability of . In practice, good HA systems aim for four or five nines. In the presence of faults there is typically a tradeoff between providing no answer (reducing yield) and providing an imperfect answer (maintaining yield, but reducing harvest). Some applications do not tolerate harvest degradation because any deviation from the single well-defined correct behavior renders the result useless. For example, a sensor application that must provide a binary sensor reading (presence/absence) does not tolerate degradation of the output.1 On the other hand, some applications tolerate graceful degradation of harvest: online aggregation [14] allows a user to explicitly trade running time for precision and confidence in performing arithmetic aggregation queries over a large dataset, thereby smoothly trading harvest for response time, which is particularly useful for approximate answers and for avoiding work that looks unlikely to be worthwhile based on preliminary results. At first glance, it would appear that this kind of degradation applies only to queries and not to updates. 
However, the model can be applied in the case of “single-location” updates: those changes that are localized to a single node (or technically a single partition). In this case, updates that affect reachable nodes occur correctly but have limited visibility (a form of reduced harvest), while those that require unreachable nodes fail (reducing yield). (This is consistent with the use of the term yield in semiconductor manufacturing: typically, each die on a wafer is intolerant to harvest degradation, and yield is defined as the fraction of working dice on a wafer.) These localized changes are consistent exactly because the new values are not available everywhere. This model of updates fails for global changes, but it is still quite useful for many practical applications, including personalization databases and collaborative filtering. 4. Strategy 1: Trading Harvest for Yield—Probabilistic Availability Nearly all systems are probabilistic whether they realize it or not. In particular, any system that is 100% available under single faults is probabilistically available overall (since there is a non-zero probability of multiple failures), and Internet-based servers are dependent on the best-effort Internet for true availability. Therefore availability maps naturally to probabilistic approaches, and it is worth addressing probabilistic systems directly, so that we can understand and limit the impact of faults. This requires some basic decisions about what needs to be available and the expected nature of faults. For example, node faults in the Inktomi search engine remove a proportional fraction of the search database. Thus in a 100-node cluster a single-node fault reduces the harvest by 1% during the duration of the fault (the overall harvest is usually measured over a longer interval). Implicit in this approach is graceful degradation under multiple node faults, specifically, linear degradation in harvest. By randomly placing data on nodes, we can ensure that the 1% lost is a random 1%, which makes the average-case and worst-case fault behavior the same. In addition, by replicating a high-priority subset of data, we reduce the probability of losing that data. This gives us more precise control of harvest, both increasing it and reducing the practical impact of missing data. Of course, it is possible to replicate all data, but doing so may have relatively little impact on harvest and yield despite significant cost, and in any case can never ensure 100% harvest or yield because of the best-effort Internet protocols the service relies on. As a similar example, transformation proxies for thin clients [8] also trade harvest for yield, by degrading results on demand to match the capabilities of clients that might otherwise be unable to get results at all. Even when the 100%-harvest answer is useful to the client, it may still be preferable to trade response time for harvest when client-to-server bandwidth is limited, for example, by intelligent degradation to low-bandwidth formats [7]. 5. Strategy 2: Application Decomposition and Orthogonal Mechanisms Some large applications can be decomposed into subsystems that are independently intolerant to harvest degradation (i.e. they fail by reducing yield), but whose independent failure allows the overall application to continue functioning with reduced utility. The application as a whole is then tolerant of harvest degradation. A good decomposition has at least one actual benefit and one potential benefit.
The actual benefi", "title": "" }, { "docid": "227f23f0357e0cad280eb8e6dec4526b", "text": "This paper presents an iterative and analytical approach to optimal synthesis of a multiplexer with a star-junction. Two types of commonly used lumped-element junction models, namely, nonresonant node (NRN) type and resonant type, are considered and treated in a uniform way. A new circuit equivalence called phased-inverter to frequency-invariant reactance inverter transformation is introduced. It allows direct adoption of the optimal synthesis theory of a bandpass filter for synthesizing channel filters connected to a star-junction by converting the synthesized phase shift to the susceptance compensation at the junction. Since each channel filter is dealt with individually and alternately, when synthesizing a multiplexer with a high number of channels, good accuracy can still be maintained. Therefore, the approach can be used to synthesize a wide range of multiplexers. Illustrative examples of synthesizing a diplexer with a common resonant type of junction and a triplexer with an NRN type of junction are given to demonstrate the effectiveness of the proposed approach. A prototype of a coaxial resonator diplexer according to the synthesized circuit model is fabricated to validate the synthesized result. Excellent agreement is obtained.", "title": "" }, { "docid": "a8d6fe9d4670d1ccc4569aa322f665ee", "text": "Abstract Improved feedback on electricity consumption may provide a tool for customers to better control their consumption and ultimately save energy. This paper asks which kind of feedback is most successful. For this purpose, a psychological model is presented that illustrates how and why feedback works. Relevant features of feedback are identified that may determine its effectiveness: frequency, duration, content, breakdown, medium and way of presentation, comparisons, and combination with other instruments. The paper continues with an analysis of international experience in order to find empirical evidence for which kinds of feedback work best. In spite of considerable data restraints and research gaps, there is some indication that the most successful feedback combines the following features: it is given frequently and over a long time, provides an appliance-specific breakdown, is presented in a clear and appealing way, and uses computerized and interactive tools.", "title": "" }, { "docid": "6aa9eaad1024bf49e24eabc70d5d153d", "text": "High-quality documentary photo series have a special place in rhinoplasty. The exact photographic reproduction of the nasal contours is an essential part of surgical planning, documentation and follow-up of one’s own work. Good photographs can only be achieved using suitable technology and with a good knowledge of photography. Standard operating procedures are also necessary. The photographic equipment should consist of a digital single-lens reflex camera, studio flash equipment and a suitable room for photography with a suitable backdrop. The high standards required cannot be achieved with simple photographic equipment. The most important part of the equipment is the optics. Fixed focal length lenses with a focal length of about 105 mm are especially suited to this type of work. Nowadays, even a surgeon without any photographic training is in a position to produce a complete series of clinical images. With digital technology, any of us can take good photographs. 
The correct exposure, the right depth of focus for the key areas of the nose and the right camera angle are the decisive factors in a good image series. Up to six standard images are recommended in the literature for the proper documentation of nasal surgery. The most important are frontal, three quarters and profile views. In special cases, close-up images may also be necessary. Preparing a professional image series is labour-intensive and very expensive. Large hospitals no longer employ professional photographers. Despite this, we must strive to maintain a high standard of photodocumenation for publications and to ensure that cases can be compared at congresses.", "title": "" }, { "docid": "d0a6ca9838f8844077fdac61d1d75af1", "text": "Depth-first search, as developed by Tarjan and coauthors, is a fundamental technique of efficient algorithm design for graphs [23]. This note presents depth-first search algorithms for two basic problems, strong and biconnected components. Previous algorithms either compute auxiliary quantities based on the depth-first search tree (e.g., LOWPOINT values) or require two passes. We present one-pass algorithms that only maintain a representation of the depth-first search path. This gives a simplified view of depth-first search without sacrificing efficiency. In greater detail, most depth-first search algorithms (e.g., [23,10,11]) compute so-called LOWPOINT values that are defined in terms of the depth-first search tree. Because of the success of this method LOWPOINT values have become almost synonymous with depth-first search. LOWPOINT values are regarded as crucial in the strong and biconnected component algorithms, e.g., [14, pp. 94, 514]. Tarjan’s LOWPOINT method for strong components is presented in texts [1, 7,14,16,17,21]. The strong component algorithm of Kosaraju and Sharir [22] is often viewed as conceptu-", "title": "" }, { "docid": "82835828a7f8c073d3520cdb4b6c47be", "text": "Simultaneous Localization and Mapping (SLAM) for mobile robots is a computationally expensive task. A robot capable of SLAM needs a powerful onboard computer, but this can limit the robot's mobility because of weight and power demands. We consider moving this task to a remote compute cloud, by proposing a general cloud-based architecture for real-time robotics computation, and then implementing a Rao-Blackwellized Particle Filtering-based SLAM algorithm in a multi-node cluster in the cloud. In our implementation, expensive computations are executed in parallel, yielding significant improvements in computation time. This allows the algorithm to increase the complexity and frequency of calculations, enhancing the accuracy of the resulting map while freeing the robot's onboard computer for other tasks. Our method for implementing particle filtering in the cloud is not specific to SLAM and can be applied to other computationally-intensive tasks.", "title": "" }, { "docid": "48e917ffb0e5636f5ca17b3242c07706", "text": "Two studies examined the influence of approach and avoidance social goals on memory for and evaluation of ambiguous social information. Study 1 found that individual differences in avoidance social goals were associated with greater memory of negative information, negatively biased interpretation of ambiguous social cues, and a more pessimistic evaluation of social actors. 
Study 2 experimentally manipulated social goals and found that individuals high in avoidance social motivation remembered more negative information and expressed more dislike for a stranger in the avoidance condition than in the approach condition. Results suggest that avoidance social goals are associated with emphasizing potential threats when making sense of the social environment.", "title": "" }, { "docid": "9666ac68ee1aeb8ce18ccd2615cdabb2", "text": "As the bring your own device (BYOD) to work trend grows, so do the network security risks. This fast-growing trend has huge benefits for both employees and employers. With malware, spyware and other malicious downloads, tricking their way onto personal devices, organizations need to consider their information security policies. Malicious programs can download onto a personal device without a user even knowing. This can have disastrous results for both an organization and the personal device. When this happens, it risks BYODs making unauthorized changes to policies and leaking sensitive information into the public domain. A privacy breach can cause a domino effect with huge financial and legal implications, and loss of productivity for organizations. This is a difficult challenge. Organizations need to consider user privacy and rights together with protecting networks from attacks. This paper evaluates a new architectural framework to control the risks that challenge organizations and the use of BYODs. After analysis of large volumes of research, the previous studies addressed single issues. We integrated parts of these single solutions into a new framework to develop a complete solution for access control. With too many organizations failing to implement and enforce adequate security policies, the process needs to be simpler. This framework reduces system restrictions while enforcing access control policies for BYOD and cloud environments using an independent platform. Primary results of the study are positive with the framework reducing access control issues. Keywords—Bring your own device; access control; policy; security", "title": "" }, { "docid": "ec237c01100bf6afa26f3b01a62577f3", "text": "Polyphenols are secondary metabolites of plants and are generally involved in defense against ultraviolet radiation or aggression by pathogens. In the last decade, there has been much interest in the potential health benefits of dietary plant polyphenols as antioxidant. Epidemiological studies and associated meta-analyses strongly suggest that long term consumption of diets rich in plant polyphenols offer protection against development of cancers, cardiovascular diseases, diabetes, osteoporosis and neurodegenerative diseases. Here we present knowledge about the biological effects of plant polyphenols in the context of relevance to human health.", "title": "" }, { "docid": "61d8761f3c6a8974d0384faf9a084b53", "text": "With many thyroid nodules being incidentally detected, it is important to identify as many malignant nodules as possible while excluding those that are highly likely to be benign from fine needle aspiration (FNA) biopsies or surgeries. This paper presents a computer-aided diagnosis (CAD) system for classifying thyroid nodules in ultrasound images. We use deep learning approach to extract features from thyroid ultrasound images. Ultrasound images are pre-processed to calibrate their scale and remove the artifacts. 
A pre-trained GoogLeNet model is then fine-tuned using the pre-processed image samples which leads to superior feature extraction. The extracted features of the thyroid ultrasound images are sent to a Cost-sensitive Random Forest classifier to classify the images into “malignant” and “benign” cases. The experimental results show the proposed fine-tuned GoogLeNet model achieves excellent classification performance, attaining 98.29% classification accuracy, 99.10% sensitivity and 93.90% specificity for the images in an open access database (Pedraza et al. 16), while 96.34% classification accuracy, 86% sensitivity and 99% specificity for the images in our local health region database.", "title": "" }, { "docid": "9d0ea524b8f591d9ea337a8c789e51c1", "text": "Abstract—The recent development of social media poses new challenges to the research community in analyzing online interactions between people. Social networking sites offer great opportunities for connecting with others, but also increase the vulnerability of young people to undesirable phenomena, such as cybervictimization. Recent research reports that on average, 20% to 40% of all teenagers have been victimized online. In this paper, we focus on cyberbullying as a particular form of cybervictimization. Successful prevention depends on the adequate detection of potentially harmful messages. However, given the massive information overload on the Web, there is a need for intelligent systems to identify potential risks automatically. We present the construction and annotation of a corpus of Dutch social media posts annotated with fine-grained cyberbullying-related text categories, such as insults and threats. Also, the specific participants (harasser, victim or bystander) in a cyberbullying conversation are identified to enhance the analysis of human interactions involving cyberbullying. Apart from describing our dataset construction and annotation, we present proof-of-concept experiments on the automatic identification of cyberbullying events and fine-grained cyberbullying categories.", "title": "" }, { "docid": "458470e18ce2ab134841f76440cfdc2b", "text": "Dependency trees help relation extraction models capture long-range relations between words. However, existing dependency-based models either neglect crucial information (e.g., negation) by pruning the dependency trees too aggressively, or are computationally inefficient because it is difficult to parallelize over different tree structures. We propose an extension of graph convolutional networks that is tailored for relation extraction, which pools information over arbitrary dependency structures efficiently in parallel. To incorporate relevant information while maximally removing irrelevant content, we further apply a novel pruning strategy to the input trees by keeping words immediately around the shortest path between the two entities among which a relation might hold. The resulting model achieves state-of-the-art performance on the large-scale TACRED dataset, outperforming existing sequence and dependency-based neural models. We also show through detailed analysis that this model has complementary strengths to sequence models, and combining them further improves the state of the art.", "title": "" }, { "docid": "f407ea856f2d00dca1868373e1bd9e2f", "text": "Software industry is heading towards centralized computin g. Due to this trend data and programs are being taken away from traditional desktop PCs and placed in compute clouds instead. 
Compute clouds are enormous server farms packed with computing power and storage space accessible through the Internet. Instead of having to manage one's own infrastructure to run applications, server time and storage space can be bought from an external service provider. From the customers' point of view the benefit behind this idea is to be able to dynamically adjust computing power up or down to meet the demand for that power at a particular moment. This kind of flexibility not only ensures that no costs are incurred by excess processing capacity, but also enables hardware infrastructure to scale up with business growth. Because of growing interest in taking advantage of cloud computing, a number of service providers are working on providing cloud services. As stated in [7], Amazon, Salesforce.com and Google are examples of firms that already have working solutions on the market. Recently, Microsoft also released a preview version of its cloud platform called Azure. Early adopters can test the platform and development tools free of charge [2, 3, 4]. The main purpose of this paper is to shed light on the internals of Microsoft's Azure platform. In addition to examining how the Azure platform works, the benefits of the Azure platform are explored. The most important benefit in Microsoft's solution is that it closely resembles the existing Windows environment. Developers can use the same application programming interfaces (APIs) and development tools they are already used to. The second benefit is that migrating applications to the cloud is easy. This partially stems from the fact that Azure's services can be exploited by an application whether it is run locally or in the cloud.", "title": "" }, { "docid": "eec33c75a0ec9b055a857054d05bcf54", "text": "We introduce a logical process of three distinct phases to begin the evaluation of a new 3D dosimetry array. The array under investigation is a hollow cylinder phantom with diode detectors fixed in a helical shell forming an \"O\" axial detector cross section (ArcCHECK), with comparisons drawn to a previously studied 3D array with diodes fixed in two crossing planes forming an \"X\" axial cross section (Delta⁴). Phase I testing of the ArcCHECK establishes: robust relative calibration (response equalization) of the individual detectors, minor field size dependency of response not present in a 2D predecessor, and uncorrected angular response dependence in the axial plane. Phase II testing reveals vast differences between the two devices when studying fixed-width full circle arcs. These differences are primarily due to arc discretization by the TPS that produces low passing rates for the peripheral detectors of the ArcCHECK, but high passing rates for the Delta⁴. Similar, although less pronounced, effects are seen for the test VMAT plans modeled after the AAPM TG119 report. The very different 3D detector locations of the two devices, along with the knock-on effect of different percent normalization strategies, prove that the analysis results from the devices are distinct and noninterchangeable; they are truly measuring different things. The value of what each device measures, namely their correlation with--or ability to predict--clinically relevant errors in calculation and/or delivery of dose is the subject of future Phase III work.", "title": "" }, { "docid": "e985d20f75d29c24fda39135e0e54636", "text": "Software testing is a highly complex and time consuming activity. It is even difficult to say when testing is complete.
The effective combination of black box (external) and white box (internal) testing is known as Gray-box testing. Gray box testing is a powerful idea: if one knows something about how the product works on the inside, one can test it better, even from the outside. Gray box testing is not black box testing, because the tester does know some of the internal workings of the software under test. It is not to be confused with white box testing, a testing approach that attempts to cover the internals of the product in detail. Gray box testing is a test strategy based partly on internals. This paper will present all the three methodologies Black-box, White-box, Gray-box and how these methods have been applied to validate critical software systems. Keywords: Black-box, White-box, Gray-box or Grey-box. Introduction: In most software projects, testing is not given the necessary attention. Statistics reveal that nearly 30-40% of the effort goes into testing irrespective of the type of project; hardly any time is allocated for testing. The computer industry is changing at a very rapid pace. In order to keep pace with a rapidly changing computer industry, software test must develop methods to verify and validate software for all aspects of the product lifecycle. Test case design techniques can be broadly split into two main categories: Black box & White box. Black box + White box = Gray Box. Spelling: Note that Gray is also spelt as Grey. Hence Gray Box Testing and Grey Box Testing mean the same. Gray Box testing is a technique to test the application with limited knowledge of the internal workings of an application. In software testing, the term the more you know the better carries a lot of weight when testing an application. Mastering the domain of a system always gives the tester an edge over someone with limited domain knowledge. Unlike black box testing, where the tester only tests the application's user interface, in Gray box testing, the tester has access to design documents and the database. Having this knowledge, the tester is able to better prepare test data and test scenarios when making the test plan. Gray-box testing goes mainly with the testing of web applications because it considers high-level development, operating environment, and compatibility conditions. During black-box or white-box analysis it is harder to identify problems related to end-to-end data flow. Context-specific problems associated with web site testing are usually found during gray-box verifying. Bridge between Black Box and White Box. Testing Methods (Fig 1: Classification). 1. Black Box Testing: Black box testing is a software testing technique in which testing is performed without looking at the internal code structure, implementation details and knowledge of internal paths of the software; testing is based entirely on the software requirements and specifications. Black box testing is best suited for rapid test scenario testing and quick Web Services testing; it provides quick feedback on the functional readiness of operations, and is better suited for operations for which enumerated test cases are necessary. It is used for finding the following errors: 1. Incorrect or missing functions 2. Interface errors 3. Errors in data structures or External database access 4. Performance errors 5.
Initialization and termination errors. Example: A tester, without knowledge of the internal structures of a website, tests the web pages by using a browser; providing inputs (clicks, keystrokes) and verifying the outputs against the expected outcome. Levels Applicable To: The Black Box testing method is applicable to all levels of the software testing process: Unit Testing, Integration Testing, System Testing, and Acceptance Testing. The higher the level, and hence the bigger and more complex the box, the more the black-box testing method comes into use. Black Box Testing Techniques: Following are some techniques that can be used for designing black box tests. Equivalence partitioning: Equivalence Partitioning is a software test design technique that involves selecting representative values from each partition as test data. Boundary Value Analysis: Boundary Value Analysis is a software test design technique that involves determination of boundaries and selecting values that are at the boundaries and just inside/outside of the boundaries as test data. Cause Effect Graphing: Cause Effect Graphing is a software test design technique that involves identifying the causes (input conditions) and effects (output conditions), producing a Cause-Effect Graph, and generating test cases accordingly. Gray Box Testing Technique", "title": "" }, { "docid": "7ad4f52279e85f8e20239e1ea6c85bbb", "text": "One of the most exciting but challenging endeavors in music research is to develop a computational model that comprehends the affective content of music signals and organizes a music collection according to emotion. In this paper, we propose a novel acoustic emotion Gaussians (AEG) model that defines a proper generative process of emotion perception in music. As a generative model, AEG permits easy and straightforward interpretations of the model learning processes. To bridge the acoustic feature space and music emotion space, a set of latent feature classes, which are learned from data, is introduced to perform the end-to-end semantic mappings between the two spaces. Based on the space of latent feature classes, the AEG model is applicable to both automatic music emotion annotation and emotion-based music retrieval. To gain insights into the AEG model, we also provide illustrations of the model learning process. A comprehensive performance study is conducted to demonstrate the superior accuracy of AEG over its predecessors, using two emotion annotated music corpora MER60 and MTurk. Our results show that the AEG model outperforms the state-of-the-art methods in automatic music emotion annotation. Moreover, for the first time a quantitative evaluation of emotion-based music retrieval is reported.", "title": "" }, { "docid": "4825e492dc1b7b645a5b92dde0c766cd", "text": "This article shows how language processing is intimately tuned to input frequency. Examples are given of frequency effects in the processing of phonology, phonotactics, reading, spelling, lexis, morphosyntax, formulaic language, language comprehension, grammaticality, sentence production, and syntax. The implications of these effects for the representations and developmental sequence of SLA are discussed. Usage-based theories hold that the acquisition of language is exemplar based. It is the piecemeal learning of many thousands of constructions and the frequency-biased abstraction of regularities within them. Determinants of pattern productivity include the power law of practice, cue competition and constraint satisfaction, connectionist learning, and effects of type and token frequency.
The regularities of language emerge from experience as categories and prototypical patterns. The typical route of emergence of constructions is from formula, through low-scope pattern, to construction. Frequency plays a large part in explaining sociolinguistic variation and language change. Learners’ sensitivity to frequency in all these domains has implications for theories of implicit and explicit learning and their interactions. The review concludes by considering the history of frequency as an explanatory concept in theoretical and applied linguistics, its 40 years of exile, and its necessary reinstatement as a bridging variable that binds the different schools of language acquisition research.", "title": "" } ]
scidocsrr
1d99c577fe448b1ec5f29a3367d0a504
Clustering of Vehicle Trajectories
[ { "docid": "9d5593d89a206ac8ddb82921c2a68c43", "text": "This paper presents an automatic traffic surveillance system to estimate important traffic parameters from video sequences using only one camera. Different from traditional methods that can classify vehicles to only cars and noncars, the proposed method has a good ability to categorize vehicles into more specific classes by introducing a new \"linearity\" feature in vehicle representation. In addition, the proposed system can well tackle the problem of vehicle occlusions caused by shadows, which often lead to the failure of further vehicle counting and classification. This problem is solved by a novel line-based shadow algorithm that uses a set of lines to eliminate all unwanted shadows. The used lines are devised from the information of lane-dividing lines. Therefore, an automatic scheme to detect lane-dividing lines is also proposed. The found lane-dividing lines can also provide important information for feature normalization, which can make the vehicle size more invariant, and thus much enhance the accuracy of vehicle classification. Once all features are extracted, an optimal classifier is then designed to robustly categorize vehicles into different classes. When recognizing a vehicle, the designed classifier can collect different evidences from its trajectories and the database to make an optimal decision for vehicle classification. Since more evidences are used, more robustness of classification can be achieved. Experimental results show that the proposed method is more robust, accurate, and powerful than other traditional methods, which utilize only the vehicle size and a single frame for vehicle classification.", "title": "" }, { "docid": "c7d6e273065ce5ca82cd55f0ba5937cd", "text": "Many environmental and socioeconomic time–series data can be adequately modeled using Auto-Regressive Integrated Moving Average (ARIMA) models. We call such time–series ARIMA time–series. We consider the problem of clustering ARIMA time–series. We propose the use of the Linear Predictive Coding (LPC) cepstrum of time–series for clustering ARIMA time–series, by using the Euclidean distance between the LPC cepstra of two time–series as their dissimilarity measure. We demonstrate that LPC cepstral coefficients have the desired features for accurate clustering and efficient indexing of ARIMA time–series. For example, few LPC cepstral coefficients are sufficient in order to discriminate between time–series that are modeled by different ARIMA models. In fact this approach requires fewer coefficients than traditional approaches, such as DFT and DWT. The proposed distance measure can be used for measuring the similarity between different ARIMA models as well. We cluster ARIMA time–series using the Partition Around Medoids method with various similarity measures. We present experimental results demonstrating that using the proposed measure we achieve significantly better clusterings of ARIMA time–series data as compared to clusterings obtained by using other traditional similarity measures, such as DFT, DWT, PCA, etc. Experiments were performed both on simulated as well as real data.", "title": "" } ]
[ { "docid": "242686291812095c5320c1c8cae6da27", "text": "In the modern high-performance transceivers, mixers (both upand down-converters) are required to have large dynamic range in order to meet the system specifications. The lower end of the dynamic range is indicated by the noise floor which tells how small a signal may be processed while the high end is determined by the non-linearity which causes distortion, compression and saturation of the signal and thus limits the maximum signal amplitude input to the mixer for the undistorted output. Compared to noise, the linearity requirement is much higher in mixer design because it is generally the limiting factor to the transceiver’s linearity. Therefore, this paper will emphasize on the linearization techniques for analog multipliers and mixers, which have been a very active research area since 1960s.", "title": "" }, { "docid": "43e90cd84394bd686303e07b3048e3ac", "text": "A harlequin fetus seen at birth was treated with etretinate and more general measures, including careful attention to fluid balance, calorie intake and temperature control. She improved, continued to develop, and had survived to 5 months at the time of this report.", "title": "" }, { "docid": "c2a307faaec42f3c05188a5153eade19", "text": "A 28-year-old breastfeeding mother of term-born 3-month old twins contacted the Hospital Lactation consultant for advice. She had expressed milk at 2am and had stored the milk in the fridge. She fed some of that milk to one of the twins at 11am and further milk to both twins at 4pm. All three bottles were left on the bench until the next morning when the mother intended to clean the bottles. She found that the milk residue in all three feeding bottles had turned bright pink and had a strong earthy odour (see Fig. 1). The mother brought one of the bottles containing the bright pink milk with her to the hospital. The mother was in good health, with no symptoms of mastitis and no fever. Both twins were also healthy and continued to feed well and gain weight. What is the cause of the pink milk? (answer on page 82)", "title": "" }, { "docid": "2194de791698f6a0180e6a1bca8714a7", "text": "Several procedures have been utilized to elevate plasma free fatty acid (FFA) concentration and increase fatty acid (FA) delivery to skeletal muscle during exercise. These include fasting, caffeine ingestion, L-carnitine supplementation, ingestion of medium-chain and long-chain triglyceride (LCT) solutions, and intravenous infusion of intralipid emulsions. Studies in which both untrained and well-trained subjects have ingested LCT solutions or received an infusion of intralipid (in combination with an injection of heparin) before exercise have reported significant reductions in whole-body carbohydrate oxidation and decreased muscle glycogen utilization during both moderate and intense dynamic exercise lasting 15-60 min. The effects of increased FA provision on rates of muscle glucose uptake during exercise are, however, equivocal. Despite substantial muscle glycogen sparing (15-48% compared with control), exercise capacity is not systematically improved in the face of increased FA availability.", "title": "" }, { "docid": "cc6161fd350ac32537dc704cbfef2155", "text": "The contribution of cloud computing and mobile computing technologies lead to the newly emerging mobile cloud computing paradigm. 
Three major approaches have been proposed for mobile cloud applications: 1) extending the access to cloud services to mobile devices; 2) enabling mobile devices to work collaboratively as cloud resource providers; 3) augmenting the execution of mobile applications on portable devices using cloud resources. In this paper, we focus on the third approach in supporting mobile data stream applications. More specifically, we study how to optimize the computation partitioning of a data stream application between mobile and cloud to achieve maximum speed/throughput in processing the streaming data.\n To the best of our knowledge, it is the first work to study the partitioning problem for mobile data stream applications, where the optimization is placed on achieving high throughput of processing the streaming data rather than minimizing the makespan of executions as in other applications. We first propose a framework to provide runtime support for the dynamic computation partitioning and execution of the application. Different from existing works, the framework not only allows the dynamic partitioning for a single user but also supports the sharing of computation instances among multiple users in the cloud to achieve efficient utilization of the underlying cloud resources. Meanwhile, the framework has better scalability because it is designed on the elastic cloud fabrics. Based on the framework, we design a genetic algorithm for optimal computation partition. Both numerical evaluation and real world experiment have been performed, and the results show that the partitioned application can achieve at least two times better performance in terms of throughput than the application without partitioning.", "title": "" }, { "docid": "e4427550b3d34557f073c3c16e1c61d9", "text": "Despite the significant progress in multiagent teamwork, existing research does not address the optimality of its prescriptions nor the complexity of the teamwork problem. Thus, we cannot determine whether the assumptions and approximations made by a particular theory gain enough efficiency to justify the losses in overall performance. To provide a tool for evaluating this tradeoff, we present a unified framework, the COMmunicative Multiagent Team Decision Problem (COM-MTDP) model, which is general enough to subsume many existing models of multiagent systems. We analyze use the COM-MTDP model to provide a breakdown of the computational complexity of constructing optimal teams under problem domains divided along the dimensions of observability and communication cost. We then exploit the COM-MTDP's ability to encode existing teamwork theories and models to encode two instantiations of joint intentions theory, including STEAM. We then derive a domain-independent criterion for optimal communication and provide a comparative analysis of the two joint intentions instantiations. We have implemented a reusable, domain-independent software package based COM-MTDPs to analyze teamwork coordination strategies, and we demonstrate its use by encoding and evaluating the two joint intentions strategies within an example domain.", "title": "" }, { "docid": "e4222dda5ecde102c0fdea0d48fb5baf", "text": "The association of hematological malignancies with a mediastinal germ cell tumor (GCT) is very rare. We report one case of a young adult male with primary mediastinal GCT who subsequently developed acute megakaryoblastic leukemia involving isochromosome (12p). 
A 25-yr-old man had been diagnosed with a mediastinal GCT and underwent surgical resection and adjuvant chemotherapy. At 1 week after the last cycle of chemotherapy, his peripheral blood showed leukocytosis with blasts. A bone marrow study confirmed the acute megakaryoblastic leukemia. A cytogenetic study revealed a complex karyotype with i(12p). Although additional chemotherapy was administered, the patient could not attain remission and died of septic shock. This case was definitely distinct from therapy-related secondary leukemia in terms of clinical, morphologic, and cytogenetic features. To our knowledge, this is the first case report of a patient with mediastinal GCT subsequently developing acute megakaryoblastic leukemia involving i(12p) in Korea.", "title": "" }, { "docid": "ac8d66a387f3c2b7fc6c579e33b27c64", "text": "We revisit the relation between stock market volatility and macroeconomic activity using a new class of component models that distinguish short-run from long-run movements. We formulate models with the long-term component driven by inflation and industrial production growth that are in terms of pseudo out-of-sample prediction for horizons of one quarter at par or outperform more traditional time series volatility models at longer horizons. Hence, imputing economic fundamentals into volatility models pays off in terms of long-horizon forecasting. We also find that macroeconomic fundamentals play a significant role even at short horizons.", "title": "" }, { "docid": "9779c9f4f15d9977a20592cabb777059", "text": "Expert search or recommendation involves the retrieval of people (experts) in response to a query and on occasion, a given set of constraints. In this paper, we address expert recommendation in academic domains that are different from web and intranet environments studied in TREC. We propose and study graph-based models for expertise retrieval with the objective of enabling search using either a topic (e.g. \"Information Extraction\") or a name (e.g. \"Bruce Croft\"). We show that graph-based ranking schemes despite being \"generic\" perform on par with expert ranking models specific to topic-based and name-based querying.", "title": "" }, { "docid": "05ea7a05b620c0dc0a0275f55becfbc3", "text": "Automated story generation is the problem of automatically selecting a sequence of events, actions, or words that can be told as a story. We seek to develop a system that can generate stories by learning everything it needs to know from textual story corpora. To date, recurrent neural networks that learn language models at character, word, or sentence levels have had little success generating coherent stories. We explore the question of event representations that provide a midlevel of abstraction between words and sentences in order to retain the semantic information of the original data while minimizing event sparsity. We present a technique for preprocessing textual story data into event sequences. We then present a technique for automated story generation whereby we decompose the problem into the generation of successive events (event2event) and the generation of natural language sentences from events (event2sentence). We give empirical results comparing different event representations and their effects on event successor generation and the translation of events to natural language.", "title": "" }, { "docid": "6efc8d18baa63945eac0c2394f29da19", "text": "Deep learning subsumes algorithms that automatically learn compositional representations. 
The ability of these models to generalize well has ushered in tremendous advances in many fields such as natural language processing (NLP). Recent research in the software engineering (SE) community has demonstrated the usefulness of applying NLP techniques to software corpora. Hence, we motivate deep learning for software language modeling, highlighting fundamental differences between state-of-the-practice software language models and connectionist models. Our deep learning models are applicable to source code files (since they only require lexically analyzed source code written in any programming language) and other types of artifacts. We show how a particular deep learning model can remember its state to effectively model sequential data, e.g., streaming software tokens, and the state is shown to be much more expressive than discrete tokens in a prefix. Then we instantiate deep learning models and show that deep learning induces high-quality models compared to n-grams and cache-based n-grams on a corpus of Java projects. We experiment with two of the models' hyperparameters, which govern their capacity and the amount of context they use to inform predictions, before building several committees of software language models to aid generalization. Then we apply the deep learning models to code suggestion and demonstrate their effectiveness at a real SE task compared to state-of-the-practice models. Finally, we propose avenues for future work, where deep learning can be brought to bear to support model-based testing, improve software lexicons, and conceptualize software artifacts. Thus, our work serves as the first step toward deep learning software repositories.", "title": "" }, { "docid": "082630a33c0cc0de0e60a549fc57d8e8", "text": "Agricultural monitoring, especially in developing countries, can help prevent famine and support humanitarian efforts. A central challenge is yield estimation, i.e., predicting crop yields before harvest. We introduce a scalable, accurate, and inexpensive method to predict crop yields using publicly available remote sensing data. Our approach improves existing techniques in three ways. First, we forego hand-crafted features traditionally used in the remote sensing community and propose an approach based on modern representation learning ideas. We also introduce a novel dimensionality reduction technique that allows us to train a Convolutional Neural Network or Long-short Term Memory network and automatically learn useful features even when labeled training data are scarce. Finally, we incorporate a Gaussian Process component to explicitly model the spatio-temporal structure of the data and further improve accuracy. We evaluate our approach on county-level soybean yield prediction in the U.S. and show that it outperforms competing techniques.", "title": "" }, { "docid": "833ec45dfe660377eb7367e179070322", "text": "It was predicted that high self-esteem Ss (HSEs) would rationalize an esteem-threatening decision less than low self-esteem Ss (LSEs), because HSEs presumably had more favorable self-concepts with which to affirm, and thus repair, their overall sense of self-integrity. This prediction was supported in 2 experiments within the \"free-choice\" dissonance paradigm--one that manipulated self-esteem through personality feedback and the other that varied it through selection of HSEs and LSEs, but only when Ss were made to focus on their self-concepts. 
A 3rd experiment countered an alternative explanation of the results in terms of mood effects that may have accompanied the experimental manipulations. The results were discussed in terms of the following: (a) their support for a resources theory of individual differences in resilience to self-image threats--an extension of self-affirmation theory, (b) their implications for self-esteem functioning, and (c) their implications for the continuing debate over self-enhancement versus self-consistency motivation.", "title": "" }, { "docid": "109a84ad1c1a541e2a0b4972b21caca2", "text": "Our brain is a network. It consists of spatially distributed, but functionally linked regions that continuously share information with each other. Interestingly, recent advances in the acquisition and analysis of functional neuroimaging data have catalyzed the exploration of functional connectivity in the human brain. Functional connectivity is defined as the temporal dependency of neuronal activation patterns of anatomically separated brain regions and in the past years an increasing body of neuroimaging studies has started to explore functional connectivity by measuring the level of co-activation of resting-state fMRI time-series between brain regions. These studies have revealed interesting new findings about the functional connections of specific brain regions and local networks, as well as important new insights in the overall organization of functional communication in the brain network. Here we present an overview of these new methods and discuss how they have led to new insights in core aspects of the human brain, providing an overview of these novel imaging techniques and their implication to neuroscience. We discuss the use of spontaneous resting-state fMRI in determining functional connectivity, discuss suggested origins of these signals, how functional connections tend to be related to structural connections in the brain network and how functional brain communication may form a key role in cognitive performance. Furthermore, we will discuss the upcoming field of examining functional connectivity patterns using graph theory, focusing on the overall organization of the functional brain network. Specifically, we will discuss the value of these new functional connectivity tools in examining believed connectivity diseases, like Alzheimer's disease, dementia, schizophrenia and multiple sclerosis.", "title": "" }, { "docid": "376c96bb9fc8c44e1489da94509116a6", "text": "Predictive analytics techniques applied to a broad swath of student data can aid in timely intervention strategies to help prevent students from failing a course. This paper discusses a predictive analytic model that was created for the University of Phoenix. The purpose of the model is to identify students who are in danger of failing the course in which they are currently enrolled. Within the model's architecture, data from the learning management system (LMS), financial aid system, and student system are combined to calculate a likelihood of any given student failing the current course. The output can be used to prioritize students for intervention and referral to additional resources. 
The paper includes a discussion of the predictor and statistical tests used, validation procedures, and plans for implementation.", "title": "" }, { "docid": "7019214df5d1f55b3ed6ce3405e648fc", "text": "Cursive handwriting recognition is a challenging task for many real world applications such as document authentication, form processing, postal address recognition, reading machines for the blind, bank cheque recognition and interpretation of historical documents. Therefore, in the last few decades the researchers have put enormous effort to develop various techniques for handwriting segmentation and recognition. This review presents the segmentation strategies for automated recognition of off-line unconstrained cursive handwriting from static surfaces. This paper reviews many basic and advanced techniques and also compares the research results of various researchers in the domain of handwritten words segmentation.", "title": "" }, { "docid": "5ff8d6415a2601afdc4a15c13819f5bb", "text": "This paper studies the effects of various types of online advertisements on purchase conversion by capturing the dynamic interactions among advertisement clicks themselves. It is motivated by the observation that certain advertisement clicks may not result in immediate purchases, but they stimulate subsequent clicks on other advertisements which then lead to purchases. We develop a stochastic model based on mutually exciting point processes, which model advertisement clicks and purchases as dependent random events in continuous time. We incorporate individual random effects to account for consumer heterogeneity and cast the model in the Bayesian hierarchical framework. We propose a new metric of conversion probability to measure the conversion effects of online advertisements. Simulation algorithms for mutually exciting point processes are developed to evaluate the conversion probability and for out-of-sample prediction. Model comparison results show the proposed model outperforms the benchmark model that ignores exciting effects among advertisement clicks. We find that display advertisements have relatively low direct effect on purchase conversion, but they are more likely to stimulate subsequent visits through other advertisement formats. We show that the commonly used measure of conversion rate is biased in favor of search advertisements and underestimates the conversion effect of display advertisements the most. Our model also furnishes a useful tool to predict future purchases and clicks on online", "title": "" }, { "docid": "e095f0b15273dbf9abf3d03f3d6c49ff", "text": "We study the problem of synthesizing a number of likely future frames from a single input image. In contrast to traditional methods that have tackled this problem in a deterministic or non-parametric way, we propose to model future frames in a probabilistic manner. Our probabilistic model makes it possible for us to sample and synthesize many possible future frames from a single input image. To synthesize realistic movement of objects, we propose a novel network structure, namely a Cross Convolutional Network; this network encodes image and motion information as feature maps and convolutional kernels, respectively. In experiments, our model performs well on synthetic data, such as 2D shapes and animated game sprites, and on real-world video frames. We present analyses of the learned network representations, showing it is implicitly learning a compact encoding of object appearance and motion.
We also demonstrate a few of its applications, including visual analogy-making and video extrapolation.", "title": "" }, { "docid": "4170ae2e077bde01f2cf1c80d60dfe63", "text": "Y. WANG, E. GRANADOS, F. PEDACI, D. ALESSI, B. LUTHER, M. BERRILL AND J. J. ROCCA* National Science Foundation Engineering Research Center for Extreme Ultraviolet Science and Technology and Department of Electrical and Computer Engineering, Colorado State University, Fort Collins, Colorado 80523, USA Department of Physics, Colorado State University, Fort Collins, Colorado 80523, USA *e-mail: rocca@engr.colostate.edu", "title": "" } ]
scidocsrr
dace1ba50c98825c4f04cd0296c66488
Application of Data Mining in Educational Database for Predicting Behavioural Patterns of the Students
[ { "docid": "26e24e4a59943f9b80d6bf307680b70c", "text": "We present a machine-learned model that can automatically detect when a student using an intelligent tutoring system is off-task, i.e., engaged in behavior which does not involve the system or a learning task. This model was developed using only log files of system usage (i.e. no screen capture or audio/video data). We show that this model can both accurately identify each student's prevalence of off-task behavior and can distinguish off-task behavior from when the student is talking to the teacher or another student about the subject matter. We use this model in combination with motivational and attitudinal instruments, developing a profile of the attitudes and motivations associated with off-task behavior, and compare this profile to the attitudes and motivations associated with other behaviors in intelligent tutoring systems. We discuss how the model of off-task behavior can be used within interactive learning environments which respond to when students are off-task.", "title": "" }, { "docid": "7834f32e3d6259f92f5e0beb3a53cc04", "text": "An educational institution needs to have an approximate prior knowledge of enrolled students to predict their performance in future academics. This helps them to identify promising students and also provides them an opportunity to pay attention to and improve those who would probably get lower grades. As a solution, we have developed a system which can predict the performance of students from their previous performances using concepts of data mining techniques under Classification. We have analyzed the data set containing information about students, such as gender, marks scored in the board examinations of classes X and XII, marks and rank in entrance examinations and results in first year of the previous batch of students. By applying the ID3 (Iterative Dichotomiser 3) and C4.5 classification algorithms on this data, we have predicted the general and individual performance of freshly admitted students in future examinations.", "title": "" } ]
[ { "docid": "1ef6623e117998098ee609ea79d5f17d", "text": "Effective enforcement of laws and policies requires expending resources to prevent and detect offenders, as well as appropriate punishment schemes to deter violators. In particular, enforcement of privacy laws and policies in modern organizations that hold large volumes of personal information (e.g., hospitals, banks) relies heavily on internal audit mechanisms. We study economic considerations in the design of these mechanisms, focusing in particular on effective resource allocation and appropriate punishment schemes. We present an audit game model that is a natural generalization of a standard security game model for resource allocation with an additional punishment parameter. Computing the Stackelberg equilibrium for this game is challenging because it involves solving an optimization problem with non-convex quadratic constraints. We present an additive FPTAS that efficiently computes the solution.", "title": "" }, { "docid": "f9806d3542f575d53ef27620e4aa493b", "text": "Many of the current scientific advances in the life sciences have their origin in the intensive use of data for knowledge discovery. In no area this is so clear as in bioinformatics, led by technological breakthroughs in data acquisition technologies. It has been argued that bioinformatics could quickly become the field of research generating the largest data repositories, beating other data-intensive areas such as high-energy physics or astroinformatics. Over the last decade, deep learning has become a disruptive advance in machine learning, giving new live to the long-standing connectionist paradigm in artificial intelligence. Deep learning methods are ideally suited to large-scale data and, therefore, they should be ideally suited to knowledge discovery in bioinformatics and biomedicine at large. In this brief paper, we review key aspects of the application of deep learning in bioinformatics and medicine, drawing from the themes covered by the contributions to an ESANN 2018 special session devoted to this topic.", "title": "" }, { "docid": "435c6eb000618ef63a0f0f9f919bc0b4", "text": "Selective sampling is an active variant of online learning in which the learner is allowed to adaptively query the label of an observed example. The goal of selective sampling is to achieve a good trade-off between prediction performance and the number of queried labels. Existing selective sampling algorithms are designed for vector-based data. In this paper, motivated by the ubiquity of graph representations in real-world applications, we propose to study selective sampling on graphs. We first present an online version of the well-known Learning with Local and Global Consistency method (OLLGC). It is essentially a second-order online learning algorithm, and can be seen as an online ridge regression in the Hilbert space of functions defined on graphs. We prove its regret bound in terms of the structural property (cut size) of a graph. Based on OLLGC, we present a selective sampling algorithm, namely Selective Sampling with Local and Global Consistency (SSLGC), which queries the label of each node based on the confidence of the linear function on graphs. Its bound on the label complexity is also derived. We analyze the low-rank approximation of graph kernels, which enables the online algorithms scale to large graphs. 
Experiments on benchmark graph datasets show that OLLGC outperforms the state-of-the-art first-order algorithm significantly, and SSLGC achieves comparable or even better results than OLLGC while querying substantially fewer nodes. Moreover, SSLGC is overwhelmingly better than random sampling.", "title": "" }, { "docid": "ab2c4d5317d2e10450513283c21ca6d3", "text": "We present DEC0DE, a system for recovering information from phones with unknown storage formats, a critical problem for forensic triage. Because phones have myriad custom hardware and software, we examine only the stored data. Via flexible descriptions of typical data structures, and using a classic dynamic programming algorithm, we are able to identify call logs and address book entries in phones across varied models and manufacturers. We designed DEC0DE by examining the formats of one set of phone models, and we evaluate its performance on other models. Overall, we are able to obtain high performance for these unexamined models: an average recall of 97% and precision of 80% for call logs; and average recall of 93% and precision of 52% for address books. Moreover, at the expense of recall dropping to 14%, we can increase precision of address book recovery to 94% by culling results that don’t match between call logs and address book entries on the same phone.", "title": "" }, { "docid": "385c7c16af40ae13b965938ac3bce34c", "text": "The information age has brought a deluge of data. Much of this is in text form, insurmountable in scope for humans and incomprehensible in structure for computers. Text mining is an expanding field of research that seeks to utilize the information contained in vast document collections. General data mining methods based on machine learning face challenges with the scale of text data, posing a need for scalable text mining methods. This thesis proposes a solution to scalable text mining: generative models combined with sparse computation. A unifying formalization for generative text models is defined, bringing together research traditions that have used formally equivalent models, but ignored parallel developments. This framework allows the use of methods developed in different processing tasks such as retrieval and classification, yielding effective solutions across different text mining tasks. Sparse computation using inverted indices is proposed for inference on probabilistic models. This reduces the computational complexity of the common text mining operations according to sparsity, yielding probabilistic models with the scalability of modern search engines. The proposed combination provides sparse generative models: a solution for text mining that is general, effective, and scalable. Extensive experimentation on text classification and ranked retrieval datasets are conducted, showing that the proposed solution matches or outperforms the leading task-specific methods in effectiveness, with a order of magnitude decrease in classification times for Wikipedia article categorization with a million classes. The developed methods were further applied in two 2014 Kaggle data mining prize competitions with over a hundred competing teams, earning first and second places.", "title": "" }, { "docid": "8f177b79f0b89510bd84e1f503b5475f", "text": "We propose a distributed cooperative framework among base stations (BS) with load balancing (dubbed as inter-BS for simplicity) for improving energy efficiency of OFDMA-based cellular access networks. 
Proposed inter-BS cooperation is formulated following the principle of ecological self-organization. Based on the network traffic, BSs mutually cooperate for distributing traffic among themselves and thus, the number of active BSs is dynamically adjusted for energy savings. For reducing the number of inter-BS communications, a three-step measure is taken by using estimated load factor (LF), initializing the algorithm with only the active BSs and differentiating neighboring BSs according to their operating modes for distributing traffic. An exponentially weighted moving average (EWMA)-based technique is proposed for estimating the LF in advance based on the historical data. Various selection schemes for finding the best BSs to distribute traffic are also explored. Furthermore, we present an analytical formulation for modeling the dynamic switching of BSs. A thorough investigation under a wide range of network settings is carried out in the context of an LTE system. Results demonstrate a significant enhancement in network energy efficiency yielding a much higher savings than the compared schemes. Moreover, frequency of inter-BS correspondences can be reduced by over 80%.", "title": "" }, { "docid": "06e6704699652849e745df7c472fdc7b", "text": "Despite extensive research, many methods in software quality prediction still exhibit some degree of uncertainty in their results. Rather than treating this as a problem, this paper asks if this uncertainty is a resource that can simplify software quality prediction. For example, Deb’s principle of ε-dominance states that if there exists some ε value below which it is useless or impossible to distinguish results, then it is superfluous to explore anything less than ε. We say that for “large ε problems”, the results space of learning effectively contains just a few regions. If many learners are then applied to such large ε problems, they would exhibit a “many roads lead to Rome” property; i.e., many different software quality prediction methods would generate a small set of very similar results. This paper explores DART, an algorithm especially selected to succeed for large ε software quality prediction problems. DART is remarkable simple yet, on experimentation, it dramatically outperforms three sets of state-of-the-art defect prediction methods. The success of DART for defect prediction begs the questions: how many other domains in software quality predictors can also be radically simplified? This will be a fruitful direction for future work.", "title": "" }, { "docid": "02ed562cb1a532f937a8590226bb44dc", "text": "We present a new algorithm for approximate inference in probabilistic programs, based on a stochastic gradient for variational programs. This method is efficient without restrictions on the probabilistic program; it is particularly practical for distributions which are not analytically tractable, including highly structured distributions that arise in probabilistic programs. We show how to automatically derive mean-field probabilistic programs and optimize them, and demonstrate that our perspective improves inference efficiency over other algorithms.", "title": "" }, { "docid": "3cda92028692a25411d74e5a002740ac", "text": "Protecting sensitive information from unauthorized disclosure is a major concern of every organization. As an organization’s employees need to access such information in order to carry out their daily work, data leakage detection is both an essential and challenging task.
Whether caused by malicious intent or an inadvertent mistake, data loss can result in significant damage to the organization. Fingerprinting is a content-based method used for detecting data leakage. In fingerprinting, signatures of known confidential content are extracted and matched with outgoing content in order to detect leakage of sensitive content. Existing fingerprinting methods, however, suffer from two major limitations. First, fingerprinting can be bypassed by rephrasing (or minor modification) of the confidential content, and second, usually the whole content of document is fingerprinted (including non-confidential parts), resulting in false alarms. In this paper we propose an extension to the fingerprinting approach that is based on sorted k-skip-n-grams. The proposed method is able to produce a fingerprint of the core confidential content which ignores non-relevant (non-confidential) sections. In addition, the proposed fingerprint method is more robust to rephrasing and can also be used to detect a previously unseen confidential document and therefore provide better detection of intentional leakage incidents.", "title": "" }, { "docid": "4d4c0d5a0abcd38aff2ba514f080edc0", "text": "We present an approach to adaptively utilize deep neural networks in order to reduce the evaluation time on new examples without loss of classification performance. Rather than attempting to redesign or approximate existing networks, we propose two schemes that adaptively utilize networks. First, we pose an adaptive network evaluation scheme, where we learn a system to adaptively choose the components of a deep network to be evaluated for each example. By allowing examples correctly classified using early layers of the system to exit, we avoid the computational time associated with full evaluation of the network. Building upon this approach, we then learn a network selection system that adaptively selects the network to be evaluated for each example. We exploit the fact that many examples can be correctly classified using relatively efficient networks and that complex, computationally costly networks are only necessary for a small fraction of examples. By avoiding evaluation of these complex networks for a large fraction of examples, computational time can be dramatically reduced. Empirically, these approaches yield dramatic reductions in computational cost, with up to a 2.8x speedup on state-of-the-art networks from the ImageNet image recognition challenge with minimal (less than 1%) loss of accuracy.", "title": "" }, { "docid": "d2292d2e530bca678ab36f387488f8f3", "text": "One key advantage of 4G OFDM system is the relatively simple receiver implementation due to the orthogonal resource allocation. However, from sum-capacity and spectral efficiency points of view, orthogonal systems are never the achieving schemes. With the rapid development of mobile communication systems, a novel concept of non-orthogonal transmission for 5G mobile communications has attracted researches all around the world. In this trend, many new multiple access schemes and waveform modulation technologies were proposed. In this paper, some promising ones of them were discussed which include Non-orthogonal Multiple Access (NOMA), Sparse Code Multiple Access (SCMA), Multi-user Shared Access (MUSA), Pattern Division Multiple Access (PDMA) and some main new waveforms including Filter-bank based Multicarrier (FBMC), Universal Filtered Multi-Carrier (UFMC), Generalized Frequency Division Multiplexing (GFDM). 
By analyzing and comparing features of these technologies, a research direction of guiding on future 5G multiple access and waveform are given.", "title": "" }, { "docid": "cbefaf40a904b6218bbdca0042f57b14", "text": "For the purpose of automatically evaluating speakers’ humor usage, we build a presentation corpus containing humorous utterances based on TED talks. Compared to previous data resources supporting humor recognition research, ours has several advantages, including (a) both positive and negative instances coming from a homogeneous data set, (b) containing a large number of speakers, and (c) being open. Focusing on using lexical cues for humor recognition, we systematically compare a newly emerging text classification method based on Convolutional Neural Networks (CNNs) with a well-established conventional method using linguistic knowledge. The CNN method shows its advantages on both higher recognition accuracies and being able to learn essential features auto-", "title": "" }, { "docid": "3bc34f3ef98147015e2ad94a6c615348", "text": "Objective methods for assessing perceptual image quality traditionally attempt to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a Structural Similarity Index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MatLab implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/. Keywords— Image quality assessment, perceptual quality, human visual system, error sensitivity, structural similarity, structural information, image coding, JPEG, JPEG2000", "title": "" }, { "docid": "06f8b713ed4020c99403c28cbd1befbc", "text": "In the last decade, deep learning algorithms have become very popular thanks to the achieved performance in many machine learning and computer vision tasks. However, most of the deep learning architectures are vulnerable to so called adversarial examples. This questions the security of deep neural networks (DNN) for many securityand trust-sensitive domains. The majority of the proposed existing adversarial attacks are based on the differentiability of the DNN cost function. Defence strategies are mostly based on machine learning and signal processing principles that either try to detect-reject or filter out the adversarial perturbations and completely neglect the classical cryptographic component in the defence. In this work, we propose a new defence mechanism based on the second Kerckhoffs’s cryptographic principle which states that the defence and classification algorithm are supposed to be known, but not the key. To be compliant with the assumption that the attacker does not have access to the secret key, we will primarily focus on a gray-box scenario and do not address a white-box one. 
More particularly, we assume that the attacker does not have direct access to the secret block, but (a) he completely knows the system architecture, (b) he has access to the data used for training and testing and (c) he can observe the output of the classifier for each given input. We show empirically that our system is efficient against most famous state-of-the-art attacks in black-box and gray-box scenarios.", "title": "" }, { "docid": "9074416729e07ba4ec11ebd0021b41ed", "text": "The purpose of this study is to examine the relationships between internet addiction and depression, anxiety, and stress. Participants were 300 university students who were enrolled in mid-size state University, in Turkey. In this study, the Online Cognition Scale and the Depression Anxiety Stress Scale were used. In correlation analysis, internet addiction was found positively related to depression, anxiety, and stress. According to path analysis results, depression, anxiety, and stress were predicted positively by internet addiction. This research shows that internet addiction has a direct impact on depression, anxiety, and stress.", "title": "" }, { "docid": "1b5c1cbe3f53c1f3a50557ff3144887e", "text": "The emergence of antibiotic resistant Staphylococcus aureus presents a worldwide problem that requires non-antibiotic strategies. This study investigated the anti-biofilm and anti-hemolytic activities of four red wines and two white wines against three S. aureus strains. All red wines at 0.5-2% significantly inhibited S. aureus biofilm formation and hemolysis by S. aureus, whereas the two white wines had no effect. Furthermore, at these concentrations, red wines did not affect bacterial growth. Analyses of hemolysis and active component identification in red wines revealed that the anti-biofilm compounds and anti-hemolytic compounds largely responsible were tannic acid, trans-resveratrol, and several flavonoids. In addition, red wines attenuated S. aureus virulence in vivo in the nematode Caenorhabditis elegans, which is killed by S. aureus. These findings show that red wines and their compounds warrant further attention in antivirulence strategies against persistent S. aureus infection.", "title": "" }, { "docid": "96e24fabd3567a896e8366abdfaad78e", "text": "Interior permanent magnet synchronous motor (IPMSM) is usually applied to traction motor in the hybrid electric vehicle (HEV). All motors including IPMSM have different parameters and characteristics with various combinations of the number of poles and slots. The proper combination can improve characteristics of traction system ultimately. This paper deals with analysis of the characteristics of IPMSM for mild type HEV according to the combinations of number of poles and slots. The specific models with 16-pole/18-slot, 16-pole/24-slot and 12-pole/18-slot combinations are introduced. And the advantages and disadvantages of these three models are compared. The characteristics of each model are computed in d-q axis equivalent circuit analysis and finite element analysis. After then, the proper combination of the number of poles and slots for HEV traction motor is presented after comparing these three models.", "title": "" }, { "docid": "cd0e7cace1b89af72680f9d8ef38bdf3", "text": "Analyzing stock market trends and sentiment is an interdisciplinary area of research being undertaken by many disciplines such as Finance, Computer Science, Statistics, and Economics. 
It has been well established that real time news plays a strong role in the movement of stock prices. With the advent of electronic and online news sources, analysts have to deal with enormous amounts of real-time, unstructured streaming data. In this paper, we present an automated text mining based approach to aggregate news stories from diverse sources and create a News Corpus. The Corpus is filtered down to relevant sentences and analyzed using Natural Language Processing (NLP) techniques. A sentiment metric, called NewsSentiment, utilizing the count of positive and negative polarity words is proposed as a measure of the sentiment of the overall news corpus. We have used various open source packages and tools to develop the news collection and aggregation engine as well as the sentiment evaluation engine. Extensive experimentation has been done using news stories about various stocks. The time variation of NewsSentiment shows a very strong correlation with the actual stock price movement. Our proposed metric has many applications in analyzing current news stories and predicting stock trends for specific companies and sectors of the economy.", "title": "" }, { "docid": "6a02c629f83049712c09ebe43d9a4ac9", "text": "The term model-driven engineering (MDE) is typically used to describe software development approaches in which abstract models of software systems are created and systematically transformed to concrete implementations. In this paper we give an overview of current research in MDE and discuss some of the major challenges that must be tackled in order to realize the MDE vision of software development. We argue that full realizations of the MDE vision may not be possible in the near to medium-term primarily because of the wicked problems involved. On the other hand, attempting to realize the vision will provide insights that can be used to significantly reduce the gap between evolving software complexity and the technologies used to manage complexity.", "title": "" }, { "docid": "230a79e785aec288582ee12de3d6c262", "text": "OBJECTIVE\nThe goal of enhanced nutrition in critically ill patients is to improve outcome by reducing lean tissue wasting. However, such effect has not been proven. This study aimed to assess the effect of early administration of parenteral nutrition on muscle volume and composition by repeated quantitative CT.\n\n\nDESIGN\nA preplanned substudy of a randomized controlled trial (Early Parenteral Nutrition Completing Enteral Nutrition in Adult Critically Ill Patients [EPaNIC]), which compared early initiation of parenteral nutrition when enteral nutrition was insufficient (early parenteral nutrition) with tolerating a pronounced nutritional deficit for 1 week in ICU (late parenteral nutrition). Late parenteral nutrition prevented infections and accelerated recovery.\n\n\nSETTING\nUniversity hospital.\n\n\nPATIENTS\nFifteen EPaNIC study neurosurgical patients requiring prescheduled repeated follow-up CT scans and six healthy volunteers matched for age, gender, and body mass index.\n\n\nINTERVENTION\nRepeated abdominal and femoral quantitative CT images were obtained in a standardized manner on median ICU day 2 (interquartile range, 2-3) and day 9 (interquartile range, 8-10). Intramuscular, subcutaneous, and visceral fat compartments were delineated manually. 
Muscle and adipose tissue volume and composition were quantified using standard Hounsfield Unit ranges.\n\n\nMEASUREMENTS AND MAIN RESULTS\nCritical illness evoked substantial loss of femoral muscle volume in 1 week's time, irrespective of the nutritional regimen. Early parenteral nutrition reduced the quality of the muscle tissue, as reflected by the attenuation, revealing increased intramuscular water/lipid content. Early parenteral nutrition also increased the volume of adipose tissue islets within the femoral muscle compartment. These changes in skeletal muscle quality correlated with caloric intake. In the abdominal muscle compartments, changes were similar, albeit smaller. Femoral and abdominal subcutaneous adipose tissue compartments were unaffected by disease and nutritional strategy.\n\n\nCONCLUSIONS\nEarly parenteral nutrition did not prevent the pronounced wasting of skeletal muscle observed over the first week of critical illness. Furthermore, early parenteral nutrition increased the amount of adipose tissue within the muscle compartments.", "title": "" } ]
scidocsrr
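The NewsSentiment metric in the news-mining passage above is described only as a measure built from counts of positive and negative polarity words over the filtered corpus. The sketch below is one plausible reading of that description, not the authors' implementation: the word lists and the (pos − neg)/(pos + neg) normalization are assumptions added for illustration.

```python
# Illustrative sketch only: the passage describes NewsSentiment as a count-based
# polarity measure but gives no formula. The lexicons and normalization are assumed.

POSITIVE = {"gain", "growth", "profit", "beat", "upgrade"}      # hypothetical lexicon
NEGATIVE = {"loss", "decline", "lawsuit", "miss", "downgrade"}  # hypothetical lexicon

def news_sentiment(sentences):
    """Return a polarity score in [-1, 1] for a list of relevant news sentences."""
    pos = neg = 0
    for sentence in sentences:
        for token in sentence.lower().split():
            word = token.strip(".,!?;:\"'()")
            if word in POSITIVE:
                pos += 1
            elif word in NEGATIVE:
                neg += 1
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

corpus = ["Quarterly profit beat expectations despite a one-off loss.",
          "Analysts issued a downgrade after the lawsuit was announced."]
print(news_sentiment(corpus))  # -0.2: slightly negative for this toy corpus
```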
ec43b1b7a7ead9699dd1ffe663e8e08c
Active Learning to Rank using Pairwise Supervision
[ { "docid": "14838947ee3b95c24daba5a293067730", "text": "In this paper we address the issue of learning to rank for document retrieval. In the task, a model is automatically created with some training data and then is utilized for ranking of documents. The goodness of a model is usually evaluated with performance measures such as MAP (Mean Average Precision) and NDCG (Normalized Discounted Cumulative Gain). Ideally a learning algorithm would train a ranking model that could directly optimize the performance measures with respect to the training data. Existing methods, however, are only able to train ranking models by minimizing loss functions loosely related to the performance measures. For example, Ranking SVM and RankBoost train ranking models by minimizing classification errors on instance pairs. To deal with the problem, we propose a novel learning algorithm within the framework of boosting, which can minimize a loss function directly defined on the performance measures. Our algorithm, referred to as AdaRank, repeatedly constructs 'weak rankers' on the basis of reweighted training data and finally linearly combines the weak rankers for making ranking predictions. We prove that the training process of AdaRank is exactly that of enhancing the performance measure used. Experimental results on four benchmark datasets show that AdaRank significantly outperforms the baseline methods of BM25, Ranking SVM, and RankBoost.", "title": "" }, { "docid": "f1a162f64838817d78e97a3c3087fae4", "text": "Most literature on support vector machines (SVMs) concentrates on the dual optimization problem. In this letter, we point out that the primal problem can also be solved efficiently for both linear and nonlinear SVMs and that there is no reason for ignoring this possibility. On the contrary, from the primal point of view, new families of algorithms for large-scale SVM training can be investigated.", "title": "" } ]
[ { "docid": "252b8722acd43c9f61a6b10019715392", "text": "Semantic segmentation is an important step of visual scene understanding for autonomous driving. Recently, Convolutional Neural Network (CNN) based methods have successfully applied in semantic segmentation using narrow-angle or even wide-angle pinhole camera. However, in urban traffic environments, autonomous vehicles need wider field of view to perceive surrounding things and stuff, especially at intersections. This paper describes a CNN-based semantic segmentation solution using fisheye camera which covers a large field of view. To handle the complex scene in the fisheye image, Overlapping Pyramid Pooling (OPP) module is proposed to explore local, global and pyramid local region context information. Based on the OPP module, a network structure called OPP-net is proposed for semantic segmentation. The net is trained and evaluated on a fisheye image dataset for semantic segmentation which is generated from an existing dataset of urban traffic scenes. In addition, zoom augmentation, a novel data augmentation policy specially designed for fisheye image, is proposed to improve the net's generalization performance. Experiments demonstrate the outstanding performance of the OPP-net for urban traffic scenes and the effectiveness of the zoom augmentation.", "title": "" }, { "docid": "b5097e718754c02cddd02a1c147c6398", "text": "Semi-automatic parking system is a driver convenience system automating steering control required during parking operation. This paper proposes novel monocular-vision based target parking-slot recognition by recognizing parking-slot markings when driver designates a seed-point inside the target parking-slot with touch screen. Proposed method compensates the distortion of fisheye lens and constructs a bird’s eye view image using homography. Because adjacent vehicles are projected along the outward direction from camera in the bird’s eye view image, if marking line-segment distinguishing parking-slots from roadway and front-ends of marking linesegments dividing parking-slots are observed, proposed method successfully recognizes the target parking-slot marking. Directional intensity gradient, utilizing the width of marking line-segment and the direction of seed-point with respect to camera position as a prior knowledge, can detect marking linesegments irrespective of noise and illumination variation. Making efficient use of the structure of parking-slot markings in the bird’s eye view image, proposed method simply recognizes the target parking-slot marking. It is validated by experiments that proposed method can successfully recognize target parkingslot under various situations and illumination conditions.", "title": "" }, { "docid": "cc17b3548d2224b15090ead8c398f808", "text": "Malaria is a global health problem that threatens 300–500 million people and kills more than one million people annually. Disease control is hampered by the occurrence of multi-drug-resistant strains of the malaria parasite Plasmodium falciparum. Synthetic antimalarial drugs and malarial vaccines are currently being developed, but their efficacy against malaria awaits rigorous clinical testing. Artemisinin, a sesquiterpene lactone endoperoxide extracted from Artemisia annua L (family Asteraceae; commonly known as sweet wormwood), is highly effective against multi-drug-resistant Plasmodium spp., but is in short supply and unaffordable to most malaria sufferers. 
Although total synthesis of artemisinin is difficult and costly, the semi-synthesis of artemisinin or any derivative from microbially sourced artemisinic acid, its immediate precursor, could be a cost-effective, environmentally friendly, high-quality and reliable source of artemisinin. Here we report the engineering of Saccharomyces cerevisiae to produce high titres (up to 100 mg l-1) of artemisinic acid using an engineered mevalonate pathway, amorphadiene synthase, and a novel cytochrome P450 monooxygenase (CYP71AV1) from A. annua that performs a three-step oxidation of amorpha-4,11-diene to artemisinic acid. The synthesized artemisinic acid is transported out and retained on the outside of the engineered yeast, meaning that a simple and inexpensive purification process can be used to obtain the desired product. Although the engineered yeast is already capable of producing artemisinic acid at a significantly higher specific productivity than A. annua, yield optimization and industrial scale-up will be required to raise artemisinic acid production to a level high enough to reduce artemisinin combination therapies to significantly below their current prices.", "title": "" }, { "docid": "b4978b2fbefc79fba6e69ad8fd55ebf9", "text": "This paper proposes an approach based on Least Squares Support Vector Machines (LS-SVMs) for solving second order partial differential equations (PDEs) with variable coefficients. Contrary to most existing techniques, the proposed method provides a closed form approximate solution. The optimal representation of the solution is obtained in the primal-dual setting. The model is built by incorporating the initial/boundary conditions as constraints of an optimization problem. The developed method is well suited for problems involving singular, variable and constant coefficients as well as problems with irregular geometrical domains. Numerical results for linear and nonlinear PDEs demonstrate the efficiency of the proposed method over existing methods.", "title": "" }, { "docid": "9516cf7ea68b16380669d47d6aee472b", "text": "In this paper, we survey the work that has been done in threshold concepts in computing since they were first discussed in 2005: concepts that have been identified, methodologies used, and issues discussed. Based on this survey, we then identify some promising unexplored areas for future work.", "title": "" }, { "docid": "c9fc05c0587a15a63b325ef6095aa0cb", "text": "Background:Recent epidemiological results suggested an increase of cancer risk after receiving computed tomography (CT) scans in childhood or adolescence. Their interpretation is questioned due to the lack of information about the reasons for examination. Our objective was to estimate the cancer risk related to childhood CT scans, and examine how cancer-predisposing factors (PFs) affect assessment of the radiation-related risk.Methods:The cohort included 67 274 children who had a first scan before the age of 10 years from 2000 to 2010 in 23 French departments. Cumulative X-rays doses were estimated from radiology protocols. Cancer incidence was retrieved through the national registry of childhood cancers; PF from discharge diagnoses.Results:During a mean follow-up of 4 years, 27 cases of tumours of the central nervous system, 25 of leukaemia and 21 of lymphoma were diagnosed; 32% of them among children with PF. Specific patterns of CT exposures were observed according to PFs. Adjustment for PF reduced the excess risk estimates related to cumulative doses from CT scans. 
No significant excess risk was observed in relation to CT exposures.Conclusions:This study suggests that the indication for examinations, whether suspected cancer or PF management, should be considered to avoid overestimation of the cancer risks associated with CT scans.", "title": "" }, { "docid": "807564cfc2e90dee21a3efd8dc754ba3", "text": "The present paper reports two studies designed to test the Dualistic Model of Passion with regard to performance attainment in two fields of expertise. Results from both studies supported the Passion Model. Harmonious passion was shown to be a positive source of activity investment in that it directly predicted deliberate practice (Study 1) and positively predicted mastery goals which in turn positively predicted deliberate practice (Study 2). In turn, deliberate practice had a direct positive impact on performance attainment. Obsessive passion was shown to be a mixed source of activity investment. While it directly predicted deliberate practice (Study 1) and directly predicted mastery goals (which predicted deliberate practice), it also predicted performance-avoidance and performance-approach goals, with the former having a tendency to facilitate performance directly, and the latter to directly negatively impact on performance attainment (Study 2). Finally, harmonious passion was also positively related to subjective well-being (SWB) in both studies, while obsessive passion was either unrelated (Study 1) or negatively related to SWB (Study 2). The conceptual and applied implications of the differential influences of harmonious and obsessive passion in performance are discussed.", "title": "" }, { "docid": "ce404452a843d18e4673d0dcf6cf01b1", "text": "We propose a formal mathematical model for sparse representations in neocortex based on a neuron model and associated operations. The design of our model neuron is inspired by recent experimental findings on active dendritic processing and NMDA spikes in pyramidal neurons. We derive a number of scaling laws that characterize the accuracy of such neurons in detecting activation patterns in a neuronal population under adverse conditions. We introduce the union property which shows that synapses for multiple patterns can be randomly mixed together within a segment and still lead to highly accurate recognition. We describe simulation results that provide overall insight into sparse representations as well as two primary results. First we show that pattern recognition by a neuron can be extremely accurate and robust with high dimensional sparse inputs even when using a tiny number of synapses to recognize large patterns. Second, equations representing recognition accuracy of a dendrite predict optimal NMDA spiking thresholds under a generous set of assumptions. The prediction tightly matches NMDA spiking thresholds measured in the literature. Our model neuron matches many of the known properties of pyramidal neurons. As such the theory provides a unified and practical mathematical framework for understanding the benefits and limits of sparse representations in cortical networks.", "title": "" }, { "docid": "44b7ed6c8297b6f269c8b872b0fd6266", "text": "vii", "title": "" }, { "docid": "b8b2d68955d6ed917900d30e4e15f71e", "text": "Due to the explosive growth of wireless devices and wireless traffic, the spectrum scarcity problem is becoming more urgent in numerous Radio Frequency (RF) systems. 
At the same time, many studies have shown that spectrum resources allocated to various existing RF systems are largely underutilized. As a potential solution to this spectrum scarcity problem, spectrum sharing among multiple, potentially dissimilar RF systems has been proposed. However, such spectrum sharing solutions are challenging to develop due to the lack of efficient coordination schemes and potentially different PHY/MAC properties. In this paper, we investigate existing spectrum sharing methods facilitating coexistence of various RF systems. The cognitive radio technique, which has been the subject of various surveys, constitutes a subset of our wider scope. We study more general coexistence scenarios and methods such as coexistence of communication systems with similar priorities, utilizing similar or different protocols or standards, as well as the coexistence of communication and non-communication systems using the same spectral resources. Finally, we explore open research issues on the spectrum sharing methods as well as potential approaches to resolving these issues. © 2016 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "b2c299e13eff8776375c14357019d82e", "text": "This paper is focused on the application of complementary split-ring resonators (CSRRs) to the suppression of the common (even) mode in microstrip differential transmission lines. By periodically and symmetrically etching CSRRs in the ground plane of microstrip differential lines, the common mode can be efficiently suppressed over a wide band whereas the differential signals are not affected. Throughout the paper, we present and discuss the principle for the selective common-mode suppression, the circuit model of the structure (including the models under even- and odd-mode excitation), the strategies for bandwidth enhancement of the rejected common mode, and a methodology for common-mode filter design. On the basis of the dispersion relation for the common mode, it is shown that the maximum achievable rejection bandwidth can be estimated. Finally, theory is validated by designing and measuring a differential line and a balanced bandpass filter with common-mode suppression, where double-slit CSRRs (DS-CSRRs) are used in order to enhance the common-mode rejection bandwidth. Due to the presence of DS-CSRRs, the balanced filter exhibits more than 40 dB of common-mode rejection within a 34% bandwidth around the filter pass band.", "title": "" }, { "docid": "c043e7a5d5120f5a06ef6decc06c184a", "text": "Entities are further categorized into those that are the object of the measurement (‘assayed components’) and those, if any, that are subjected to targeted and controlled experimental interventions (‘perturbations/interventions’). These two core categories are related to the concepts ‘perturbagen’ and ‘target’ in the Bioassay Ontology (BAO2) and capture an important aspect of the design of experiments where multiple conditions are compared with each other in order to test whether a given perturbation (e.g., the presence or absence of a drug), causes a given response (e.g., a change in gene expression). Additional categories include ‘experimental variables’, ‘reporters’, ‘normalizing components’ and generic ‘biological components’ (Supplementary Data). We developed a web-based tool with a graphical user interface that allows computer-assisted manual extraction of the metadata model described above at the level of individual figure panels based on the information provided in figure legends and in the images. 
Files that contain raw or minimally processed data, when available, can furthermore be linked or uploaded and attached to the figure. As proof of principle, we have curated a compendium of over 18,000 experiments published across 23 journals. From the 721 papers processed, 381 papers were related to the field of autophagy, and the rest were annotated during the publication process of accepted manuscripts at four partner molecular biology journals. Both sets of papers were processed identically. Out of the 18,157 experimental panels annotated, 77% included at least one ‘intervention/assayed component’ pair, and this supported the broad applicability of the perturbation-centric SourceData model. We provide a breakdown of entities by categories in Supplementary Figure 1. We note that the presence of a perturbation is not a requirement for the model. As such, the SourceData model is also applicable in cases such as correlative observations. The SourceData model is independent of data type (i.e., image-based or numerical values) and is well suited for cell and molecular biology experiments. 77% of the processed entities were explicitly mentioned in the text of the legend. For the remaining entities, curators added the terms based on the labels directly displayed on the image of the figure. SourceData: a semantic platform for curating and searching figures", "title": "" }, { "docid": "e0f6878845e02e966908311e6818dbe9", "text": "Smart Home is one of the emerging application domains of the Internet of Things, following the computer and the Internet. Although home automation technologies are already commercially available, they are basically designed for single-family smart homes at a high cost. With the constant growth of digital appliances in the smart home, we merge the smart home into a smart-home-oriented Cloud to relieve the load on smart home systems, which mostly install application software on local computers. In this paper, we present a framework for the Cloud-based smart home enabling home automation, household mobility and interconnection, which is easily extensible and fit for future demands. By subscribing to services of the Cloud, smart home consumers can easily enjoy smart home services without purchasing computers with strong processing power and huge storage. We focus on the overall Smart Home framework, the features and architecture of the components of Smart Home, and the interaction and cooperation between them in detail.", "title": "" }, { "docid": "cccecb08c92f8bcec4a359373a20afcb", "text": "To solve the problem of the false matching and low robustness in detecting copy-move forgeries, a new method was proposed in this study. It involves the following steps: first, establish a Gaussian scale space; second, extract the orientated FAST key points and the ORB features in each scale space; thirdly, revert the coordinates of the orientated FAST key points to the original image and match the ORB features between every two different key points using the hamming distance; finally, remove the false matched key points using the RANSAC algorithm and then detect the resulting copy-move regions. 
The experimental results indicate that the new algorithm is effective for geometric transformation, such as scaling and rotation, and exhibits high robustness even when an image is distorted by Gaussian blur, Gaussian white noise and JPEG recompression; the new algorithm even has great detection on the type of hiding object forgery.", "title": "" }, { "docid": "fb63ab21fa40b125c1a85b9c3ed1dd8d", "text": "The two central topics of information theory are the compression and the transmission of data. Shannon, in his seminal work, formalized both these problems and determined their fundamental limits. Since then the main goal of coding theory has been to find practical schemes that approach these limits. Polar codes, recently invented by Arıkan, are the first “practical” codes that are known to achieve the capacity for a large class of channels. Their code construction is based on a phenomenon called “channel polarization”. The encoding as well as the decoding operation of polar codes can be implemented with O(N log N) complexity, where N is the blocklength of the code. We show that polar codes are suitable not only for channel coding but also achieve optimal performance for several other important problems in information theory. The first problem we consider is lossy source compression. We construct polar codes that asymptotically approach Shannon’s rate-distortion bound for a large class of sources. We achieve this performance by designing polar codes according to the “test channel”, which naturally appears in Shannon’s formulation of the rate-distortion function. The encoding operation combines the successive cancellation algorithm of Arıkan with a crucial new ingredient called “randomized rounding”. As for channel coding, both the encoding as well as the decoding operation can be implemented with O(N log N) complexity. This is the first known “practical” scheme that approaches the optimal rate-distortion trade-off. We also construct polar codes that achieve the optimal performance for the Wyner-Ziv and the Gelfand-Pinsker problems. Both these problems can be tackled using “nested” codes and polar codes are naturally suited for this purpose. We further show that polar codes achieve the capacity of asymmetric channels, multi-terminal scenarios like multiple access channels, and degraded broadcast channels. For each of these problems, our constructions are the first known “practical” schemes that approach the optimal performance. The original polar codes of Arıkan achieve a block error probability decaying exponentially in the square root of the block length. For source coding, the gap between the achieved distortion and the limiting distortion also vanishes exponentially in the square root of the blocklength. We explore other polarlike code constructions with better rates of decay. With this generalization,", "title": "" }, { "docid": "460d6a8a5f78e6fa5c42fb6c219b3254", "text": "Generative Adversarial Networks (GANs) have been successfully applied to the problem of policy imitation in a model-free setup. However, the computation graph of GANs, that include a stochastic policy as the generative model, is no longer differentiable end-to-end, which requires the use of high-variance gradient estimation. In this paper, we introduce the Modelbased Generative Adversarial Imitation Learning (MGAIL) algorithm. We show how to use a forward model to make the computation fully differentiable, which enables training policies using the exact gradient of the discriminator. 
The resulting algorithm trains competent policies using relatively fewer expert samples and interactions with the environment. We test it on both discrete and continuous action domains and report results that surpass the state-of-the-art.", "title": "" }, { "docid": "4753ea589bd7dd76d3fb08ba8dce65ff", "text": "Frequent Patterns are very important in knowledge discovery and data mining process such as mining of association rules, correlations etc. Prefix-tree based approach is one of the contemporary approaches for mining frequent patterns. FP-tree is a compact representation of transaction database that contains frequency information of all relevant Frequent Patterns (FP) in a dataset. Since the introduction of FP-growth algorithm for FP-tree construction, three major algorithms have been proposed, namely AFPIM, CATS tree, and CanTree, that have adopted FP-tree for incremental mining of frequent patterns. All of the three methods perform incremental mining by processing one transaction of the incremental database at a time and updating it to the FP-tree of the initial (original) database. Here in this paper we propose a novel method to take advantage of FP-tree representation of incremental transaction database for incremental mining. We propose “Batch Incremental Tree (BIT)” algorithm to merge two small consecutive duration FP-trees to obtain a FP-tree that is equivalent of FP-tree obtained when the entire database is processed at once from the beginning of the first duration.", "title": "" }, { "docid": "6052c0f2adfe4b75f96c21a5ee128bf5", "text": "I present a new Markov chain sampling method appropriate for distributions with isolated modes. Like the recently-developed method of \"simulated tempering\", the \"tempered transition\" method uses a series of distributions that interpolate between the distribution of interest and a distribution for which sampling is easier. The new method has the advantage that it does not require approximate values for the normalizing constants of these distributions, which are needed for simulated tempering, and can be tedious to estimate. Simulated tempering performs a random walk along the series of distributions used. In contrast, the tempered transitions of the new method move systematically from the desired distribution, to the easily-sampled distribution, and back to the desired distribution. This systematic movement avoids the inefficiency of a random walk, an advantage that unfortunately is cancelled by an increase in the number of interpolating distributions required. Because of this, the sampling efficiency of the tempered transition method in simple problems is similar to that of simulated tempering. On more complex distributions, however, simulated tempering and tempered transitions may perform differently. Which is better depends on the ways in which the interpolating distributions are \"deceptive\".", "title": "" }, { "docid": "1acc97afa9facf77289ddf1015b1e110", "text": "This short note presents a new formal language, lambda dependency-based compositional semantics (lambda DCS) for representing logical forms in semantic parsing. By eliminating variables and making existential quantification implicit, lambda DCS logical forms are generally more compact than those in lambda calculus.", "title": "" }, { "docid": "322141533594ed1927f36b850b8d963f", "text": "Microelectrodes are widely used in the physiological recording of cell field potentials. 
As microelectrode signals are generally in the μV range, characteristics of the cell-electrode interface are important to the recording accuracy. Although the impedance of the microelectrode-solution interface has been well studied and modeled in the past, no effective model has been experimentally verified to estimate the noise of the cell-electrode interface. Also in existing interface models, spectral information is largely disregarded. In this work, we developed a model for estimating the noise of the cell-electrode interface from interface impedances. This model improves over existing noise models by including the cell membrane capacitor and frequency dependent impedances. With low-noise experiment setups, this model is verified by microelectrode array (MEA) experiments with mouse muscle myoblast cells. Experiments show that the noise estimated from this model has <;10% error, which is much less than estimations from existing models. With this model, noise of the cell-electrode interface can be estimated by simply measuring interface impedances. This model also provides insights for micro- electrode design to achieve good recording signal-to-noise ratio.", "title": "" } ]
scidocsrr
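The positive passages for this query note that Ranking SVM and RankBoost fit ranking models by minimizing classification errors on instance pairs. The sketch below illustrates that pairwise-supervision idea with a plain hinge loss and subgradient descent; it is a generic illustration over made-up synthetic features, not AdaRank and not the primal SVM solver described in the passages.

```python
import numpy as np

# Generic pairwise ranking sketch (Ranking-SVM-style hinge loss on document pairs).
def pairwise_hinge_loss(w, pairs, margin=1.0):
    """pairs: list of (x_pos, x_neg) feature vectors where x_pos should rank higher."""
    loss, grad = 0.0, np.zeros_like(w)
    for x_pos, x_neg in pairs:
        diff = x_pos - x_neg
        slack = margin - w @ diff
        if slack > 0:            # pair is mis-ordered or inside the margin
            loss += slack
            grad -= diff         # subgradient of the active hinge term
    return loss / len(pairs), grad / len(pairs)

rng = np.random.default_rng(0)
w = np.zeros(4)
# Synthetic pairs: the "relevant" document has higher feature values on average.
pairs = [(rng.normal(1.0, 1.0, 4), rng.normal(0.0, 1.0, 4)) for _ in range(200)]
for _ in range(100):             # plain subgradient descent
    loss, grad = pairwise_hinge_loss(w, pairs)
    w -= 0.1 * grad
print(round(loss, 3), w.round(2))
```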
5f5b949a4f90253e6585c69ecc2325e1
Four Principles of Memory Improvement : A Guide to Improving Learning Efficiency
[ { "docid": "660d47a9ffc013f444954f3f210de05e", "text": "Taking tests enhances learning. But what happens when one cannot answer a test question-does an unsuccessful retrieval attempt impede future learning or enhance it? The authors examined this question using materials that ensured that retrieval attempts would be unsuccessful. In Experiments 1 and 2, participants were asked fictional general-knowledge questions (e.g., \"What peace treaty ended the Calumet War?\"). In Experiments 3-6, participants were shown a cue word (e.g., whale) and were asked to guess a weak associate (e.g., mammal); the rare trials on which participants guessed the correct response were excluded from the analyses. In the test condition, participants attempted to answer the question before being shown the answer; in the read-only condition, the question and answer were presented together. Unsuccessful retrieval attempts enhanced learning with both types of materials. These results demonstrate that retrieval attempts enhance future learning; they also suggest that taking challenging tests-instead of avoiding errors-may be one key to effective learning.", "title": "" }, { "docid": "4d7cd44f2bbe9896049a7868165bd415", "text": "Testing previously studied information enhances long-term memory, particularly when the information is successfully retrieved from memory. The authors examined the effect of unsuccessful retrieval attempts on learning. Participants in 5 experiments read an essay about vision. In the test condition, they were asked about embedded concepts before reading the passage; in the extended study condition, they were given a longer time to read the passage. To distinguish the effects of testing from attention direction, the authors emphasized the tested concepts in both conditions, using italics or bolded keywords or, in Experiment 5, by presenting the questions but not asking participants to answer them before reading the passage. Posttest performance was better in the test condition than in the extended study condition in all experiments--a pretesting effect--even though only items that were not successfully retrieved on the pretest were analyzed. The testing effect appears to be attributable, in part, to the role unsuccessful tests play in enhancing future learning.", "title": "" }, { "docid": "3faeedfe2473dc837ab0db9eb4aefc4b", "text": "The spacing effect—that is, the benefit of spacing learning events apart rather than massing them together—has been demonstrated in hundreds of experiments, but is not well known to educators or learners. I investigated the spacing effect in the realistic context of flashcard use. Learners often divide flashcards into relatively small stacks, but compared to a large stack, small stacks decrease the spacing between study trials. In three experiments, participants used a web-based study programme to learn GRE-type word pairs. Studying one large stack of flashcards (i.e. spacing) was more effective than studying four smaller stacks of flashcards separately (i.e. massing). Spacing was also more effective than cramming—that is, massing study on the last day before the test. Across experiments, spacing was more effective than massing for 90% of the participants, yet after the first study session, 72% of the participants believed that massing had been more effective than spacing. Copyright # 2009 John Wiley & Sons, Ltd.", "title": "" } ]
[ { "docid": "42d5712d781140edbc6a35703d786e15", "text": "This paper considers control and estimation problems where the sensor signals and the actuator signals are transmitted to various subsystems over a network. In contrast to traditional control and estimation problems, here the observation and control packets may be lost or delayed. The unreliability of the underlying communication network is modeled stochastically by assigning probabilities to the successful transmission of packets. This requires a novel theory which generalizes classical control/estimation paradigms. The paper offers the foundations of such a novel theory. The central contribution is to characterize the impact of the network reliability on the performance of the feedback loop. Specifically, it is shown that for network protocols where successful transmissions of packets is acknowledged at the receiver (e.g., TCP-like protocols), there exists a critical threshold of network reliability (i.e., critical probabilities for the successful delivery of packets), below which the optimal controller fails to stabilize the system. Further, for these protocols, the separation principle holds and the optimal LQG controller is a linear function of the estimated state. In stark contrast, it is shown that when there is no acknowledgement of successful delivery of control packets (e.g., UDP-like protocols), the LQG optimal controller is in general nonlinear. Consequently, the separation principle does not hold in this circumstance", "title": "" }, { "docid": "244745da710e8c401173fe39359c7c49", "text": "BACKGROUND\nIntegrating information from the different senses markedly enhances the detection and identification of external stimuli. Compared with unimodal inputs, semantically and/or spatially congruent multisensory cues speed discrimination and improve reaction times. Discordant inputs have the opposite effect, reducing performance and slowing responses. These behavioural features of crossmodal processing appear to have parallels in the response properties of multisensory cells in the superior colliculi and cerebral cortex of non-human mammals. Although spatially concordant multisensory inputs can produce a dramatic, often multiplicative, increase in cellular activity, spatially disparate cues tend to induce a profound response depression.\n\n\nRESULTS\nUsing functional magnetic resonance imaging (fMRI), we investigated whether similar indices of crossmodal integration are detectable in human cerebral cortex, and for the synthesis of complex inputs relating to stimulus identity. Ten human subjects were exposed to varying epochs of semantically congruent and incongruent audio-visual speech and to each modality in isolation. Brain activations to matched and mismatched audio-visual inputs were contrasted with the combined response to both unimodal conditions. This strategy identified an area of heteromodal cortex in the left superior temporal sulcus that exhibited significant supra-additive response enhancement to matched audio-visual inputs and a corresponding sub-additive response to mismatched inputs.\n\n\nCONCLUSIONS\nThe data provide fMRI evidence of crossmodal binding by convergence in the human heteromodal cortex. 
They further suggest that response enhancement and depression may be a general property of multisensory integration operating at different levels of the neuroaxis and irrespective of the purpose for which sensory inputs are combined.", "title": "" }, { "docid": "bd3374fefa94fbb11d344d651c0f55bc", "text": "Extensive study has been conducted in the detection of license plate for the applications in intelligent transportation system (ITS). However, these results are all based on images acquired at a resolution of 640×480. In this paper, a new method is proposed to extract license plate from the surveillance video which is shot at lower resolution (320×240) as well as degraded by video compression. Morphological operations of bottom-hat and morphology gradient are utilized to detect the LP candidates, and effective schemes are applied to select the correct one. The average rates of correct extraction and false alarms are 96.62% and 1.77%, respectively, based on the experiments using more than four hours of video. The experimental results demonstrate the effectiveness and robustness of the proposed method.", "title": "" }, { "docid": "e776c87ec35d67c6acbdf79d8a5cac0a", "text": "Continuous deployment speeds up the process of existing agile methods, such as Scrum, and Extreme Programming (XP) through the automatic deployment of software changes to end-users upon passing of automated tests. Continuous deployment has become an emerging software engineering process amongst numerous software companies, such as Facebook, Github, Netflix, and Rally Software. A systematic analysis of software practices used in continuous deployment can facilitate a better understanding of continuous deployment as a software engineering process. Such analysis can also help software practitioners in having a shared vocabulary of practices and in choosing the software practices that they can use to implement continuous deployment. The goal of this paper is to aid software practitioners in implementing continuous deployment through a systematic analysis of software practices that are used by software companies. We studied the continuous deployment practices of 19 software companies by performing a qualitative analysis of Internet artifacts and by conducting follow-up inquiries. In total, we found 11 software practices that are used by 19 software companies. We also found that in terms of use, eight of the 11 software practices are common across 14 software companies. We observe that continuous deployment necessitates the consistent use of sound software engineering practices such as automated testing, automated deployment, and code review.", "title": "" }, { "docid": "512d29a398f51041466884f4decec84a", "text": "Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. 
By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.", "title": "" }, { "docid": "113b8cfda23cf7e8b3d7b4821d549bf7", "text": "A load-dependent zero-current detector is proposed in this paper for speeding up the transient response when the load current changes from heavy to light loads. The fast transient control signal determines how long the reversed inductor current lasts according to sudden load variations. At the beginning of a load variation from heavy to light loads, the sensed voltage is compared with a higher voltage to discharge the overshoot output voltage, achieving a fast transient response. Besides, for an adaptive reversed current period, the fast transient mechanism is turned off since the output voltage is rapidly regulated back to the acceptable level. Simulation results demonstrate that the ZCD circuit permits the reverse current to flow back into the n-type power MOSFET at the beginning of load variations. The settling time is decreased to about 35 μs when the load current suddenly changes from 500 mA to 10 mA.", "title": "" }, { "docid": "dc5bb80426556e3dd9090a705d3e17b4", "text": "OBJECTIVES\nThe aim of this study was to locate the scientific literature dealing with addiction to the Internet, video games, and cell phones and to characterize the pattern of publications in these areas.\n\n\nMETHODS\nOne hundred seventy-nine valid articles were retrieved from PubMed and PsycINFO between 1996 and 2005 related to pathological Internet, cell phone, or video game use.\n\n\nRESULTS\nThe years with the highest numbers of articles published were 2004 (n = 42) and 2005 (n = 40). The most productive countries, in terms of number of articles published, were the United States (n = 52), China (n = 23), the United Kingdom (n = 17), Taiwan (n = 13), and South Korea (n = 9). The most commonly used language was English (65.4%), followed by Chinese (12.8%) and Spanish (4.5%). Articles were published in 96 different journals, of which 22 published 2 or more articles. The journal that published the most articles was Cyberpsychology & Behavior (n = 41). Addiction to the Internet was the most intensely studied (85.3%), followed by addiction to video games (13.6%) and cell phones (2.1%).\n\n\nCONCLUSIONS\nThe number of publications in this area is growing, but it is difficult to conduct precise searches due to a lack of clear terminology. To facilitate retrieval, bibliographic databases should include descriptor terms referring specifically to Internet, video games, and cell phone addiction as well as to more general addictions involving communications and information technologies and other behavioral addictions.", "title": "" }, { "docid": "b240041ea6a885151fd39d863b9217dc", "text": "Engaging in a test over previously studied information can serve as a potent learning event, a phenomenon referred to as the testing effect. Despite a surge of research in the past decade, existing theories have not yet provided a cohesive account of testing phenomena. The present study uses meta-analysis to examine the effects of testing versus restudy on retention. 
Key results indicate support for the role of effortful processing as a contributor to the testing effect, with initial recall tests yielding larger testing benefits than recognition tests. Limited support was found for existing theoretical accounts attributing the testing effect to enhanced semantic elaboration, indicating that consideration of alternative mechanisms is warranted in explaining testing effects. Future theoretical accounts of the testing effect may benefit from consideration of episodic and contextually derived contributions to retention resulting from memory retrieval. Additionally, the bifurcation model of the testing effect is considered as a viable framework from which to characterize the patterns of results present across the literature.", "title": "" }, { "docid": "43ef67c897e7f998b1eb7d3524d514f4", "text": "This brief proposes a delta-sigma modulator that operates at extremely low voltage without using a clock boosting technique. To maintain the advantages of a discrete-time integrator in oversampled data converters, a mixed differential difference amplifier (DDA) integrator is developed that removes the input sampling switch in a switched-capacitor integrator. Conventionally, many low-voltage delta-sigma modulators have used high-voltage generating circuits to boost the clock voltage levels. A mixed DDA integrator with both a switched-resistor and a switched-capacitor technique is developed to implement a discrete-time integrator without clock boosted switches. The proposed mixed DDA integrator is demonstrated by a third-order delta-sigma modulator with a feedforward topology. The fabricated modulator shows a 68-dB signal-to-noise-plus-distortion ratio for a 20-kHz signal bandwidth with an oversampling ratio of 80. The chip consumes 140 μW of power at a true 0.4-V power supply, which is the lowest voltage without a clock boosting technique among the state-of-the-art modulators in this signal band.", "title": "" }, { "docid": "106fefb169c7e95999fb411b4e07954e", "text": "Additional contents in web pages, such as navigation panels, advertisements, copyrights and disclaimer notices, are typically not related to the main subject and may hamper the performance of Web data mining. They are traditionally taken as noises and need to be removed properly. To achieve this, two intuitive and crucial kinds of information—the textual information and the visual information of web pages—is considered in this paper. Accordingly, Text Density and Visual Importance are defined for the Document Object Model (DOM) nodes of a web page. Furthermore, a content extraction method with these measured values is proposed. It is a fast, accurate and general method for extracting content from diverse web pages. And with the employment of DOM nodes, the original structure of the web page can be preserved. Evaluated with the CleanEval benchmark and with randomly selected pages from well-known Web sites, where various web domains and styles are tested, the effect of the method is demonstrated. The average F1-scores with our method were 8.7 % higher than the best scores among several alternative methods.", "title": "" }, { "docid": "e797fbf7b53214df32d5694527ce5ba3", "text": "One key task of fine-grained sentiment analysis of product reviews is to extract product aspects or features that users have expressed opinions on. This paper focuses on supervised aspect extraction using deep learning. 
Unlike other highly sophisticated supervised deep learning models, this paper proposes a novel and yet simple CNN model 1 employing two types of pre-trained embeddings for aspect extraction: general-purpose embeddings and domain-specific embeddings. Without using any additional supervision, this model achieves surprisingly good results, outperforming state-of-the-art sophisticated existing methods. To our knowledge, this paper is the first to report such double embeddings based CNN model for aspect extraction and achieve very good results.", "title": "" }, { "docid": "2f17160c9f01aa779b1745a57e34e1aa", "text": "OBJECTIVE\nTo report an ataxic variant of Alzheimer disease expressing a novel molecular phenotype.\n\n\nDESIGN\nDescription of a novel phenotype associated with a presenilin 1 mutation.\n\n\nSETTING\nThe subject was an outpatient who was diagnosed at the local referral center.\n\n\nPATIENT\nA 28-year-old man presented with psychiatric symptoms and cerebellar signs, followed by cognitive dysfunction. Severe beta-amyloid (Abeta) deposition was accompanied by neurofibrillary tangles and cell loss in the cerebral cortex and by Purkinje cell dendrite loss in the cerebellum. A presenilin 1 gene (PSEN1) S170F mutation was detected.\n\n\nMAIN OUTCOME MEASURES\nWe analyzed the processing of Abeta precursor protein in vitro as well as the Abeta species in brain tissue.\n\n\nRESULTS\nThe PSEN1 S170F mutation induced a 3-fold increase of both secreted Abeta(42) and Abeta(40) species and a 60% increase of secreted Abeta precursor protein in transfected cells. Soluble and insoluble fractions isolated from brain tissue showed a prevalence of N-terminally truncated Abeta species ending at both residues 40 and 42.\n\n\nCONCLUSION\nThese findings define a new Alzheimer disease molecular phenotype and support the concept that the phenotypic variability associated with PSEN1 mutations may be dictated by the Abeta aggregates' composition.", "title": "" }, { "docid": "0b5f0cd5b8d49d57324a0199b4925490", "text": "Deep brain stimulation (DBS) has an increasing role in the treatment of idiopathic Parkinson's disease. Although, the subthalamic nucleus (STN) is the commonly chosen target, a number of groups have reported that the most effective contact lies dorsal/dorsomedial to the STN (region of the pallidofugal fibres and the rostral zona incerta) or at the junction between the dorsal border of the STN and the latter. We analysed our outcome data from Parkinson's disease patients treated with DBS between April 2002 and June 2004. During this period we moved our target from the STN to the region dorsomedial/medial to it and subsequently targeted the caudal part of the zona incerta nucleus (cZI). We present a comparison of the motor outcomes between these three groups of patients with optimal contacts within the STN (group 1), dorsomedial/medial to the STN (group 2) and in the cZI nucleus (group 3). Thirty-five patients with Parkinson's disease underwent MRI directed implantation of 64 DBS leads into the STN (17), dorsomedial/medial to STN (20) and cZI (27). The primary outcome measure was the contralateral Unified Parkinson's Disease Rating Scale (UPDRS) motor score (off medication/off stimulation versus off medication/on stimulation) measured at follow-up (median time 6 months). The secondary outcome measures were the UPDRS III subscores of tremor, bradykinesia and rigidity. Dyskinesia score, L-dopa medication reduction and stimulation parameters were also recorded. 
The mean adjusted contralateral UPDRS III score with cZI stimulation was 3.1 (76% reduction) compared to 4.9 (61% reduction) in group 2 and 5.7 (55% reduction) in the STN (P-value for trend <0.001). There was a 93% improvement in tremor with cZI stimulation versus 86% in group 2 versus 61% in group 1 (P-value = 0.01). Adjusted 'off-on' rigidity scores were 1.0 for the cZI group (76% reduction), 2.0 for group 2 (52% reduction) and 2.1 for group 1 (50% reduction) (P-value for trend = 0.002). Bradykinesia was more markedly improved in the cZI group (65%) compared to group 2 (56%) or STN group (59%) (P-value for trend = 0.17). There were no statistically significant differences in the dyskinesia scores, L-dopa medication reduction and stimulation parameters between the three groups. Stimulation related complications were seen in some group 2 patients. High frequency stimulation of the cZI results in greater improvement in contralateral motor scores in Parkinson's disease patients than stimulation of the STN. We discuss the implications of this finding and the potential role played by the ZI in Parkinson's disease.", "title": "" }, { "docid": "06502355f6db37b73806e9e57476e749", "text": "BACKGROUND\nBecause the trend of pharmacotherapy is toward controlling diet rather than administration of drugs, in our study we examined the probable relationship between Creatine (Cr) or Whey (Wh) consumption and anesthesia (analgesia effect of ketamine). Creatine and Wh are among the most favorable supplements in the market. Whey is a protein, which is extracted from milk and is a rich source of amino acids. Creatine is an amino acid derivative that can change to ATP in the body. Both of these supplements result in Nitric Oxide (NO) retention, which is believed to be effective in N-Methyl-D-aspartate (NMDA) receptor analgesia.\n\n\nOBJECTIVES\nThe main question of this study was whether Wh and Cr are effective on analgesic and anesthetic characteristics of ketamine and whether this is related to NO retention or amino acids' features.\n\n\nMATERIALS AND METHODS\nWe divided 30 male Wistar rats to three (n = 10) groups; including Cr, Wh and sham (water only) groups. Each group was administered (by gavage) the supplements for an intermediate dosage during 25 days. After this period, they became anesthetized using a Ketamine-Xylazine (KX) and their time to anesthesia and analgesia, and total sleep time were recorded.\n\n\nRESULTS\nData were analyzed twice using the SPSS 18 software with Analysis of Variance (ANOVA) and post hoc test; first time we expunged the rats that didn't become anesthetized and the second time we included all of the samples. There was a significant P-value (P < 0.05) for total anesthesia time in the second analysis. Bonferroni multiple comparison indicated that the difference was between Cr and Sham groups (P < 0.021).\n\n\nCONCLUSIONS\nThe data only indicated that there might be a significant relationship between Cr consumption and total sleep time. Further studies, with rats of different gender and different dosage of supplement and anesthetics are suggested.", "title": "" }, { "docid": "5bf2c4a187b35ad5c4e69aef5eb9ffea", "text": "In the last decade, the research of the usability of mobile phones has been a newly evolving area with few established methodologies and realistic practices that ensure capturing usability in evaluation. 
Thus, there exists growing demand to explore appropriate evaluation methodologies that evaluate the usability of mobile phones quickly as well as comprehensively. This study aims to develop a task-based usability checklist based on heuristic evaluations in views of mobile phone user interface (UI) practitioners. A hierarchical structure of UI design elements and usability principles related to mobile phones were developed and then utilized to develop the checklist. To demonstrate the practical effectiveness of the proposed checklist, comparative experiments were conducted on the usability checklist and usability testing. The majority of usability problems found by usability testing and additional problems were discovered by the proposed checklist. It is expected that the usability checklist proposed in this study could be used quickly and efficiently by usability practitioners to evaluate the mobile phone UI in the middle of the mobile phone development process.", "title": "" }, { "docid": "35ae4e59fd277d57c2746dfccf9b26b0", "text": "In the field of saliency detection, many graph-based algorithms heavily depend on the accuracy of the pre-processed superpixel segmentation, which leads to significant sacrifice of detail information from the input image. In this paper, we propose a novel bottom-up saliency detection approach that takes advantage of both region-based features and image details. To provide more accurate saliency estimations, we first optimize the image boundary selection by the proposed erroneous boundary removal. By taking the image details and region-based estimations into account, we then propose the regularized random walks ranking to formulate pixel-wised saliency maps from the superpixel-based background and foreground saliency estimations. Experiment results on two public datasets indicate the significantly improved accuracy and robustness of the proposed algorithm in comparison with 12 state-of-the-art saliency detection approaches.", "title": "" }, { "docid": "cd3d9bb066729fc7107c0fef89f664fe", "text": "The extended contact hypothesis proposes that knowledge that an in-group member has a close relationship with an out-group member can lead to more positive intergroup attitudes. Proposed mechanisms are the in-group or out-group member serving as positive exemplars and the inclusion of the out-group member's group membership in the self. In Studies 1 and 2, respondents knowing an in-group member with an out-group friend had less negative attitudes toward that out-group, even controlling for dispositional variables and direct out-group friendships. Study 3, with constructed intergroup-conflict situations (on the robbers cave model), found reduced negative out-group attitudes after participants learned of cross-group friendships. Study 4, a minimal group experiment, showed less negative out-group attitudes for participants observing an apparent in-group-out-group friendship.", "title": "" }, { "docid": "f04682957e97b8ccb4f40bf07dde2310", "text": "This paper introduces a dataset gathered entirely in urban scenarios with a car equipped with one stereo camera and five laser scanners, among other sensors. One distinctive feature of the present dataset is the existence of high-resolution stereo images grabbed at high rate (20 fps) during a 36.8 km trajectory, which allows the benchmarking of a variety of computer vision techniques. We describe the employed sensors and highlight some applications which could be benchmarked with the presented work. 
Both plain text and binary files are provided, as well as open source tools for working with the binary versions. The dataset is available for download in http://www.mrpt.org/MalagaUrbanDataset.", "title": "" }, { "docid": "644d2fcc7f2514252c2b9da01bb1ef42", "text": "We now describe an interesting application of SVD to text documents. Suppose we represent documents as a bag of words, so Xij is the number of times word j occurs in document i, for j = 1 : W and i = 1 : D, where W is the number of words and D is the number of documents. To find a document that contains a given word, we can use standard search procedures, but this can get confused by synonymy (different words with the same meaning) and polysemy (same word with different meanings). An alternative approach is to assume that X was generated by some low dimensional latent representation X̂ ∈ IR, where K is the number of latent dimensions. If we compare documents in the latent space, we should get improved retrieval performance, because words of similar meaning get mapped to similar low dimensional locations. We can compute a low dimensional representation of X by computing the SVD, and then taking the top k singular values/vectors.", "title": "" }, { "docid": "e289d20455fd856ce4cf72589b3e206b", "text": "Transfer learning is a vital technique that generalizes models trained for one setting or task to other settings or tasks. For example in speech recognition, an acoustic model trained for one language can be used to recognize speech in another language, with little or no re-training data. Transfer learning is closely related to multi-task learning (cross-lingual vs. multilingual), and is traditionally studied in the name of 'model adaptation'. Recent advances in deep learning show that transfer learning becomes much easier and more effective with high-level abstract features learned by deep models, and the 'transfer' can be conducted not only between data distributions and data types, but also between model structures (e.g., shallow nets and deep nets) or even model types (e.g., Bayesian models and neural models). This review paper summarizes some recent prominent research towards this direction, particularly for speech and language processing. We also report some results from our group and highlight the potential of this very interesting research field.", "title": "" } ]
scidocsrr
94bc3baaf884e3038c21f1fe51cdd7ae
Sample Compression, Learnability, and the Vapnik-Chervonenkis Dimension
[ { "docid": "b74e8a911368384ccf7126c0dcbf55fd", "text": "Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space En. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and the necessary and sufficient conditions are provided for feasible learnability.", "title": "" } ]
[ { "docid": "c10d33abc6ed1d47c11bf54ed38e5800", "text": "The past decade has seen a steady growth of interest in statistical language models for information retrieval, and much research work has been conducted on this subject. This book by ChengXiang Zhai summarizes most of this research. It opens with an introduction covering the basic concepts of information retrieval and statistical languagemodels, presenting the intuitions behind these concepts. This introduction is then followed by a chapter providing an overview of:", "title": "" }, { "docid": "5339554b6f753b69b5ace705af0263cd", "text": "We explore several oversampling techniques for an imbalanced multi-label classification problem, a setting often encountered when developing models for Computer-Aided Diagnosis (CADx) systems. While most CADx systems aim to optimize classifiers for overall accuracy without considering the relative distribution of each class, we look into using synthetic sampling to increase perclass performance when predicting the degree of malignancy. Using low-level image features and a random forest classifier, we show that using synthetic oversampling techniques increases the sensitivity of the minority classes by an average of 7.22% points, with as much as a 19.88% point increase in sensitivity for a particular minority class. Furthermore, the analysis of low-level image feature distributions for the synthetic nodules reveals that these nodules can provide insights on how to preprocess image data for better classification performance or how to supplement the original datasets when more data acquisition is feasible.", "title": "" }, { "docid": "4a51fa781609c0fab79fff536a14aa43", "text": "Recently end-to-end speech recognition has obtained much attention. One of the popular models to achieve end-to-end speech recognition is attention based encoder-decoder model, which usually generating output sequences iteratively by attending the whole representations of the input sequences. However, predicting outputs until receiving the whole input sequence is not practical for online or low time latency speech recognition. In this paper, we present a simple but effective attention mechanism which can make the encoder-decoder model generate outputs without attending the entire input sequence and can apply to online speech recognition. At each prediction step, the attention is assumed to be a time-moving gaussian window with variable size and can be predicted by using previous input and output information instead of the content based computation on the whole input sequence. To further improve the online performance of the model, we employ deep convolutional neural networks as encoder. Experiments show that the gaussian prediction based attention works well and under the help of deep convolutional neural networks the online model achieves 19.5% phoneme error rate in TIMIT ASR task.", "title": "" }, { "docid": "d999bb4717dd07b2560a85c7c775eb0e", "text": "We present a new algorithm for removing motion blur from a single image. Our method computes a deblurred image using a unified probabilistic model of both blur kernel estimation and unblurred image restoration. We present an analysis of the causes of common artifacts found in current deblurring methods, and then introduce several novel terms within this probabilistic model that are inspired by our analysis. 
These terms include a model of the spatial randomness of noise in the blurred image, as well as a new local smoothness prior that reduces ringing artifacts by constraining contrast in the unblurred image wherever the blurred image exhibits low contrast. Finally, we describe an efficient optimization scheme that alternates between blur kernel estimation and unblurred image restoration until convergence. As a result of these steps, we are able to produce high quality deblurred results in low computation time. We are even able to produce results of comparable quality to techniques that require additional input images beyond a single blurry photograph, and to methods that require additional hardware.", "title": "" }, { "docid": "94014090d66c6dc4ec46da2c1de2a605", "text": "Sentence pair modeling is critical for many NLP tasks, such as paraphrase identification, semantic textual similarity, and natural language inference. Most state-of-the-art neural models for these tasks rely on pretrained word embedding and compose sentence-level semantics in varied ways; however, few works have attempted to verify whether we really need pretrained embeddings in these tasks. In this paper, we study how effective subword-level (character and character n-gram) representations are in sentence pair modeling. Though it is well-known that subword models are effective in tasks with single sentence input, including language modeling and machine translation, they have not been systematically studied in sentence pair modeling tasks where the semantic and string similarities between texts matter. Our experiments show that subword models without any pretrained word embedding can achieve new state-of-the-art results on two social media datasets and competitive results on news data for paraphrase identification.", "title": "" }, { "docid": "be68222ba029a46cf9c7463b0f233db2", "text": "Solar panels have been improving in efficiency and dropping in price, and are therefore becoming more common and economically viable. However, the performance of solar panels depends not only on the weather, but also on other external factors such as shadow, dirt, dust, etc. In this paper, we describe a simple and practical data-driven method for classifying anomalies in the power output of solar panels. In particular, we propose and experimentally verify (using two solar panel arrays in Ontario, Canada) a simple classification rule based on physical properties of solar radiation that can distinguish between shadows and direct covering of the panel, e.g., by dirt or snow.", "title": "" }, { "docid": "4e2c466fac826f5e32a51f09355d7585", "text": "Congested networks involve complex traffic dynamics that can be accurately captured with detailed simulation models. However, when performing optimization of such networks the use of simulators is limited due to their stochastic nature and their relatively high evaluation cost. This has led to the use of general-purpose analytical metamodels, that are cheaper to evaluate and easier to integrate within a classical optimization framework, but do not capture the specificities of the underlying congested conditions. In this paper, we argue that to perform efficient optimization for congested networks it is important to develop analytical surrogates specifically tailored to the context at hand so that they capture the key components of congestion (e.g. its sources, its propagation, its impact) while achieving a good tradeoff between realism and tractability. 
To demonstrate this, we present a surrogate that provides a detailed description of congestion by capturing the main interactions between the different network components while preserving analytical tractable. In particular, we consider the optimization of vehicle traffic in an urban road network. The proposed surrogate model is an approximate queueing network model that resorts to finite capacity queueing theory to account for congested conditions. Existing analytic queueing models for urban networks are formulated for a single intersection, and thus do not take into account the interactions between queues. The proposed model considers a set of intersections and analytically captures these interactions. We show that this level of detail is sufficient for optimization in the context of signal control for peak hour traffic. Although there is a great variety of signal control methodologies in the literature, there is still a need for solutions that are appropriate and efficient under saturated conditions, where the performance of signal control strategies and the formation and propagation of queues are strongly related. We formulate a fixed-time signal control problem where the network model is included as a set of constraints. We apply this methodology to a subnetwork of the Lausanne city center and use a microscopic traffic simulator to validate its performance. We also compare it with several other methods. As congestion increases, the new method leads to improved average performance measures. The results highlight the importance of taking the interaction between consecutive roads into account when deriving signal plans for congested urban road networks.", "title": "" }, { "docid": "31404322fb03246ba2efe451191e29fa", "text": "OBJECTIVES\nThe aim of this study is to report an unusual form of penile cancer presentation associated with myiasis infestation, treatment options and outcomes.\n\n\nMATERIALS AND METHODS\nWe studied 10 patients with suspected malignant neoplasm of the penis associated with genital myiasis infestation. Diagnostic assessment was conducted through clinical history, physical examination, penile biopsy, larvae identification and computerized tomography scan of the chest, abdomen and pelvis. Clinical and pathological staging was done according to 2002 TNM classification system. Radical inguinal lymphadenectomy was conducted according to the primary penile tumor pathology and clinical lymph nodes status.\n\n\nRESULTS\nPatients age ranged from 41 to 77 years (mean=62.4). All patients presented squamous cell carcinoma of the penis in association with myiasis infestation caused by Psychoda albipennis. Tumor size ranged from 4cm to 12cm (mean=5.3). Circumcision was conducted in 1 (10%) patient, while penile partial penectomy was performed in 5 (50%). Total penectomy was conducted in 2 (20%) patients, while emasculation was the treatment option for 2 (20%). All patients underwent radical inguinal lymphadenectomy. Prophylactic lymphadenectomy was performed on 3 (30%) patients, therapeutic on 5 (50%), and palliative lymphadenectomy on 2 (20%) patients. Time elapsed from primary tumor treatment to radical inguinal lymphadenectomy was 2 to 6 weeks. The mean follow-up was 34.3 months.\n\n\nCONCLUSION\nThe occurrence of myiasis in the genitalia is more common in patients with precarious hygienic practices and low socio-economic level. 
The treatment option varied according to the primary tumor presentation and clinical lymph node status.", "title": "" }, { "docid": "f0d17b259b699bc7fb7e8f525ec64db0", "text": "Developing Intelligent Systems involves artificial intelligence approaches including artificial neural networks. Here, we present a tutorial of Deep Neural Networks (DNNs), and some insights about the origin of the term “deep”; references to deep learning are also given. Restricted Boltzmann Machines, which are the core of DNNs, are discussed in detail. An example of a simple two-layer network, performing unsupervised learning for unlabeled data, is shown. Deep Belief Networks (DBNs), which are used to build networks with more than two layers, are also described. Moreover, examples for supervised learning with DNNs performing simple prediction and classification tasks, are presented and explained. This tutorial includes two intelligent pattern recognition applications: handwritten digits (benchmark known as MNIST) and speech recognition.", "title": "" }, { "docid": "dbe5661d99798b24856c61b93ddb2392", "text": "Traditionally, appearance models for recognition, reacquisition and tracking problems have been evaluated independently using metrics applied to a complete system. It is shown that appearance models for these three problems can be evaluated using a cumulative matching curve on a standardized dataset, and that this one curve can be converted to a synthetic reacquisition or disambiguation rate for tracking. A challenging new dataset for viewpoint invariant pedestrian recognition (VIPeR) is provided as an example. This dataset contains 632 pedestrian image pairs from arbitrary viewpoints. Several baseline methods are tested on this dataset and the results are presented as a benchmark for future appearance models and matchin methods.", "title": "" }, { "docid": "f514d5177f234e786b9bfc295359c852", "text": "Biological sequence comparison is a very important operation in Bioinformatics. Even though there do exist exact methods to compare biological sequences, these methods are often neglected due to their quadratic time and space complexity. In order to accelerate these methods, many GPU algorithms were proposed in the literature. Nevertheless, all of them restrict the size of the smallest sequence in such a way that Megabase genome comparison is prevented. In this paper, we propose and evaluate CUDAlign, a GPU algorithm that is able to compare Megabase biological sequences with an exact Smith-Waterman affine gap variant. CUDAlign was implemented in CUDA and tested in two GPU boards, separately. For real sequences whose size range from 1MBP (Megabase Pairs) to 47MBP, a close to uniform GCUPS (Giga Cells Updates per Second) was obtained, showing the potential scalability of our approach. Also, CUDAlign was able to compare the human chromosome 21 and the chimpanzee chromosome 22. This operation took 21 hours on GeForce GTX 280, resulting in a peak performance of 20.375 GCUPS. As far as we know, this is the first time such huge chromosomes are compared with an exact method.", "title": "" }, { "docid": "7bbffa53f71207f0f218a09f18586541", "text": "Myelotoxicity induced by chemotherapy may become life-threatening. Neutropenia may be prevented by granulocyte colony-stimulating factors (GCSF), and epoetin may prevent anemia, but both cause substantial side effects and increased costs. According to non-established data, wheat grass juice (WGJ) may prevent myelotoxicity when applied with chemotherapy. 
In this prospective matched control study, 60 patients with breast carcinoma on chemotherapy were enrolled and assigned to an intervention or control arm. Those in the intervention arm (A) were given 60 cc of WGJ orally daily during the first three cycles of chemotherapy, while those in the control arm (B) received only regular supportive therapy. Premature termination of treatment, dose reduction, and starting GCSF or epoetin were considered as \"censoring events.\" Response rate to chemotherapy was calculated in patients with evaluable disease. Analysis of the results showed that five censoring events occurred in Arm A and 15 in Arm B (P = 0.01). Of the 15 events in Arm B, 11 were related to hematological events. No reduction in response rate was observed in patients who could be assessed for response. Side effects related to WGJ were minimal, including worsening of nausea in six patients, causing cessation of WGJ intake. In conclusion, it was found that WGJ taken during FAC chemotherapy may reduce myelotoxicity, dose reductions, and need for GCSF support, without diminishing efficacy of chemotherapy. These preliminary results need confirmation in a phase III study.", "title": "" }, { "docid": "60a7e9be448a0ac4e25d1eed5b075de9", "text": "Prepositional phrase (PP) attachment disambiguation is a known challenge in syntactic parsing. The lexical sparsity associated with PP attachments motivates research in word representations that can capture pertinent syntactic and semantic features of the word. One promising solution is to use word vectors induced from large amounts of raw text. However, state-of-the-art systems that employ such representations yield modest gains in PP attachment accuracy. In this paper, we show that word vector representations can yield significant PP attachment performance gains. This is achieved via a non-linear architecture that is discriminatively trained to maximize PP attachment accuracy. The architecture is initialized with word vectors trained from unlabeled data, and relearns those to maximize attachment accuracy. We obtain additional performance gains with alternative representations such as dependency-based word vectors. When tested on both English and Arabic datasets, our method outperforms both a strong SVM classifier and state-of-the-art parsers. For instance, we achieve 82.6% PP attachment accuracy on Arabic, while the Turbo and Charniak self-trained parsers obtain 76.7% and 80.8% respectively.", "title": "" }, { "docid": "92fcc4d21872dca232c624a11eb3988c", "text": "Most automobile manufacturers maintain many vehicle types to keep a successful position on the market. Through the further development all vehicle types gain a diverse amount of new functionality. Additional features have to be supported by the car’s software. For time efficient accomplishment, usually the existing electronic control unit (ECU) code is extended. In the majority of cases this evolutionary development process is accompanied by a constant decay of the software architecture. This effect known as software erosion leads to an increasing deviation from the requirements specifications. To counteract the erosion it is necessary to continuously restore the architecture in respect of the specification. Automobile manufacturers cope with the erosion of their ECU software with varying degree of success. Successfully we applied a methodical and structured approach of architecture restoration in the specific case of the brake servo unit (BSU). 
Software product lines from existing BSU variants were extracted by explicit projection of the architecture variability and decomposition of the original architecture. After initial application, this approach was capable of restoring the BSU architecture recurrently.", "title": "" }, { "docid": "6ac8d9cfe3c1f6e6a6a2fd32b675c89a", "text": "Each discrete cosine transform (DCT) uses N real basis vectors whose components are cosines. In the DCT-4, for example, the jth component of vk is cos((j + 1/2)(k + 1/2)π/N). These basis vectors are orthogonal and the transform is extremely useful in image processing. If the vector x gives the intensities along a row of pixels, its cosine series ∑ ckvk has the coefficients ck = (x,vk)/N. They are quickly computed from a Fast Fourier Transform. But a direct proof of orthogonality, by calculating inner products, does not reveal how natural these cosine vectors are. We prove orthogonality in a different way. Each DCT basis contains the eigenvectors of a symmetric “second difference” matrix. By varying the boundary conditions we get the established transforms DCT-1 through DCT-4. Other combinations lead to four additional cosine transforms. The type of boundary condition (Dirichlet or Neumann, centered at a meshpoint or a midpoint) determines the applications that are appropriate for each transform. The centering also determines the period: N − 1 or N in the established transforms, N − 1/2 or N + 1/2 in the other four. The key point is that all these “eigenvectors of cosines” come from simple and familiar matrices.", "title": "" }, { "docid": "ab07b74740f5353f006e93547a7931c8", "text": "Separation of business logic from any technical platform is an important principle to cope with complexity, and to achieve the required engineering quality factors such as adaptability, maintainability, and reusability. In this context, Model Driven Architecture (MDA) is a framework defined by the OMG for designing high quality software systems. In this paper we present a model-driven approach to the development of MVC2 web applications, especially Spring MVC, based on the UML class diagram. The transformation language is ATL (Atlas Transformation Language). The transformation rules defined in this paper can generate, from the class diagram, an XML file respecting the MVC2 (Model-View-Controller) architecture; this file can be used to generate the end-to-end necessary Spring MVC code of a web application.", "title": "" }, { "docid": "d158d2d0b24fe3766b6ddb9bff8e8010", "text": "We introduce an online learning approach for multitarget tracking. Detection responses are gradually associated into tracklets in multiple levels to produce final tracks. Unlike most previous approaches which only focus on producing discriminative motion and appearance models for all targets, we further consider discriminative features for distinguishing difficult pairs of targets. The tracking problem is formulated using an online learned CRF model, and is transformed into an energy minimization problem. The energy functions include a set of unary functions that are based on motion and appearance models for discriminating all targets, as well as a set of pairwise functions that are based on models for differentiating corresponding pairs of tracklets. The online CRF approach is more powerful at distinguishing spatially close targets with similar appearances, as well as in dealing with camera motions. An efficient algorithm is introduced for finding an association with low energy cost. 
We evaluate our approach on three public data sets, and show significant improvements compared with several state-of-art methods.", "title": "" }, { "docid": "5894fd2d3749df78afb49b27ad26f459", "text": "Information security policy compliance (ISP) is one of the key concerns that face organizations today. Although technical and procedural measures help improve information security, there is an increased need to accommodate human, social and organizational factors. Despite the plethora of studies that attempt to identify the factors that motivate compliance behavior or discourage abuse and misuse behaviors, there is a lack of studies that investigate the role of ethical ideology per se in explaining compliance behavior. The purpose of this research is to investigate the role of ethics in explaining Information Security Policy (ISP) compliance. In that regard, a model that integrates behavioral and ethical theoretical perspectives is developed and tested. Overall, analyses indicate strong support for the validation of the proposed theoretical model.", "title": "" }, { "docid": "2c2ae81ab314b39dd6523e4b6c546d3f", "text": "The China Brain Project covers both basic research on neural mechanisms underlying cognition and translational research for the diagnosis and intervention of brain diseases as well as for brain-inspired intelligence technology. We discuss some emerging themes, with emphasis on unique aspects.", "title": "" }, { "docid": "93314112049e3bccd7853e63afc97f73", "text": "In this paper, we address the challenging task of scene segmentation. In order to capture the rich contextual dependencies over image regions, we propose Directed Acyclic Graph-Recurrent Neural Networks (DAG-RNN) to perform context aggregation over locally connected feature maps. More specifically, DAG-RNN is placed on top of pre-trained CNN (feature extractor) to embed context into local features so that their representative capability can be enhanced. In comparison with plain CNN (as in Fully Convolutional Networks-FCN), DAG-RNN is empirically found to be significantly more effective at aggregating context. Therefore, DAG-RNN demonstrates noticeably performance superiority over FCNs on scene segmentation. Besides, DAG-RNN entails dramatically less parameters as well as demands fewer computation operations, which makes DAG-RNN more favorable to be potentially applied on resource-constrained embedded devices. Meanwhile, the class occurrence frequencies are extremely imbalanced in scene segmentation, so we propose a novel class-weighted loss to train the segmentation network. The loss distributes reasonably higher attention weights to infrequent classes during network training, which is essential to boost their parsing performance. We evaluate our segmentation network on three challenging public scene segmentation benchmarks: Sift Flow, Pascal Context and COCO Stuff. On top of them, we achieve very impressive segmentation performance.", "title": "" } ]
scidocsrr
3dde2e750caa8624282518369f4f6a1f
Evaluating Display Fidelity and Interaction Fidelity in a Virtual Reality Game
[ { "docid": "c9b7832cd306fc022e4a376f10ee8fc8", "text": "This paper describes a study to assess the influence of a variety of factors on reported level of presence in immersive virtual environments. It introduces the idea of stacking depth, that is, where a participant can simulate the process of entering the virtual environment while already in such an environment, which can be repeated to several levels of depth. An experimental study including 24 subjects was carried out. Half of the subjects were transported between environments by using virtual head-mounted displays, and the other half by going through doors. Three other binary factors were whether or not gravity operated, whether or not the subject experienced a virtual precipice, and whether or not the subject was followed around by a virtual actor. Visual, auditory, and kinesthetic representation systems and egocentric/exocentric perceptual positions were assessed by a preexperiment questionnaire. Presence was assessed by the subjects as their sense of being there, the extent to which they experienced the virtual environments as more the presenting reality than the real world in which the experiment was taking place, and the extent to which the subject experienced the virtual environments as places visited rather than images seen. A logistic regression analysis revealed that subjective reporting of presence was significantly positively associated with visual and kinesthetic representation systems, and negatively with the auditory system. This was not surprising since the virtual reality system used was primarily visual. The analysis also showed a significant and positive association with stacking level depth for those who were transported between environments by using the virtual HMD, and a negative association for those who were transported through doors. Finally, four of the subjects moved their real left arm to match movement of the left arm of the virtual body displayed by the system. These four scored significantly higher on the kinesthetic representation system than the remainder of the subjects.", "title": "" }, { "docid": "467b4537bdc6a466909d819e67d0ebc1", "text": "We have created an immersive application for statistical graphics and have investigated what benefits it offers over more traditional data analysis tools. This paper presents a description of both the traditional data analysis tools and our virtual environment, and results of an experiment designed to determine if an immersive environment based on the XGobi desktop system provides advantages over XGobi for analysis of high-dimensional statistical data. The experiment included two aspects of each environment: three structure detection (visualization) tasks and one ease of interaction task. The subjects were given these tasks in both the C2 virtual environment and a workstation running XGobi. The experiment results showed an improvement in participants’ ability to perform structure detection tasks in the C2 to their performance in the desktop environment. However, participants were more comfortable with the interaction tools in the desktop", "title": "" } ]
[ { "docid": "76049ed267e9327412d709014e8e9ed4", "text": "A wireless massive MIMO system entails a large number (tens or hundreds) of base station antennas serving a much smaller number of users, with large gains in spectralefficiency and energy-efficiency compared with conventional MIMO technology. Until recently it was believed that in multicellular massive MIMO system, even in the asymptotic regime, as the number of service antennas tends to infinity, the performance is limited by directed inter-cellular interference. This interference results from unavoidable re-use of reverse-link training sequences (pilot contamination) by users in different cells. We devise a new concept that leads to the effective elimination of inter-cell interference in massive MIMO systems. This is achieved by outer multi-cellular precoding, which we call LargeScale Fading Precoding (LSFP). The main idea of LSFP is that each base station linearly combines messages aimed to users from different cells that re-use the same training sequence. Crucially, the combining coefficients depend only on the slowfading coefficients between the users and the base stations. Each base station independently transmits its LSFP-combined symbols using conventional linear precoding that is based on estimated fast-fading coefficients. Further, we derive estimates for downlink and uplink SINRs and capacity lower bounds for the case of massive MIMO systems with LSFP and a finite number of base station antennas.", "title": "" }, { "docid": "d65a047b3f381ca5039d75fd6330b514", "text": "This paper presents an enhanced algorithm for matching laser scan maps using histogram correlations. The histogram representation effectively summarizes a map's salient features such that pairs of maps can be matched efficiently without any prior guess as to their alignment. The histogram matching algorithm has been enhanced in order to work well in outdoor unstructured environments by using entropy metrics, weighted histograms and proper thresholding of quality metrics. Thus our large-scale scan-matching SLAM implementation has a vastly improved ability to close large loops in real-time even when odometry is not available. Our experimental results have demonstrated a successful mapping of the largest area ever mapped to date using only a single laser scanner. We also demonstrate our ability to solve the lost robot problem by localizing a robot to a previously built map without any prior initialization.", "title": "" }, { "docid": "39007b91989c42880ff96e7c5bdcf519", "text": "Feature selection has aroused considerable research interests during the last few decades. Traditional learning-based feature selection methods separate embedding learning and feature ranking. In this paper, we propose a novel unsupervised feature selection framework, termed as the joint embedding learning and sparse regression (JELSR), in which the embedding learning and sparse regression are jointly performed. Specifically, the proposed JELSR joins embedding learning with sparse regression to perform feature selection. To show the effectiveness of the proposed framework, we also provide a method using the weight via local linear approximation and adding the ℓ2,1-norm regularization, and design an effective algorithm to solve the corresponding optimization problem. Furthermore, we also conduct some insightful discussion on the proposed feature selection approach, including the convergence analysis, computational complexity, and parameter determination. 
In all, the proposed framework not only provides a new perspective to view traditional methods but also evokes some other deep researches for feature selection. Compared with traditional unsupervised feature selection methods, our approach could integrate the merits of embedding learning and sparse regression. Promising experimental results on different kinds of data sets, including image, voice data and biological data, have validated the effectiveness of our proposed algorithm.", "title": "" }, { "docid": "038db4d053ff795f35ae9731f6e27c9a", "text": "Intravascular injection leading to skin necrosis or blindness is the most serious complication of facial injection with fillers. It may be underreported and the outcome of cases are unclear. Early recognitions of the symptoms and signs may facilitate prompt treatment if it does occur avoiding the potential sequelae of intravascular injection. To determine the frequency of intravascular injection among experienced injectors, the outcomes of these intravascular events, and the management strategies. An internet-based survey was sent to 127 injectors worldwide who act as trainers for dermal fillers globally. Of the 52 respondents from 16 countries, 71 % had ≥11 years of injection experience, and 62 % reported one or more intravascular injections. The most frequent initial signs were minor livedo (63 % of cases), pallor (41 %), and symptoms of pain (37 %). Mildness/absence of pain was a feature of 47 % of events. Hyaluronidase (5 to >500 U) was used immediately on diagnosis to treat 86 % of cases. The most commonly affected areas were the nasolabial fold and nose (39 % each). Of all the cases, only 7 % suffered moderate scarring requiring surface treatments. Uneventful healing was the usual outcome, with 86 % being resolved within 14 days. Intravascular injection with fillers can occur even at the hands of experienced injectors. It may not be always associated with immediate pain or other classical symptoms and signs. Prompt effective management leads to favorable outcomes, and will prevent catastrophic consequences such as skin necrosis. Intravascular injection leading to blindness may not be salvageable and needs further study. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .", "title": "" }, { "docid": "7a9b9633243d84978d9e975744642e18", "text": "Our aim is to provide a pixel-level object instance labeling of a monocular image. We build on recent work [27] that trained a convolutional neural net to predict instance labeling in local image patches, extracted exhaustively in a stride from an image. A simple Markov random field model using several heuristics was then proposed in [27] to derive a globally consistent instance labeling of the image. In this paper, we formulate the global labeling problem with a novel densely connected Markov random field and show how to encode various intuitive potentials in a way that is amenable to efficient mean field inference [13]. Our potentials encode the compatibility between the global labeling and the patch-level predictions, contrast-sensitive smoothness as well as the fact that separate regions form different instances. 
Our experiments on the challenging KITTI benchmark [8] demonstrate that our method achieves a significant performance boost over the baseline [27].", "title": "" }, { "docid": "505a9b6139e8cbf759652dc81f989de9", "text": "SQL injection attacks, a class of injection flaw in which specially crafted input strings leads to illegal queries to databases, are one of the topmost threats to web applications. A Number of research prototypes and commercial products that maintain the queries structure in web applications have been developed. But these techniques either fail to address the full scope of the problem or have limitations. Based on our observation that the injected string in a SQL injection attack is interpreted differently on different databases. A characteristic diagnostic feature of SQL injection attacks is that they change the intended structure of queries issued. Pattern matching is a technique that can be used to identify or detect any anomaly packet from a sequential action. Injection attack is a method that can inject any kind of malicious string or anomaly string on the original string. Most of the pattern based techniques are used static analysis and patterns are generated from the attacked statements. In this paper, we proposed a detection and prevention technique for preventing SQL Injection Attack (SQLIA) using Aho–Corasick pattern matching algorithm. In this paper, we proposed an overview of the architecture. In the initial stage evaluation, we consider some sample of standard attack patterns and it shows that the proposed algorithm is works well against the SQL Injection Attack. Keywords—SQL Injection Attack; Pattern matching; Static Pattern; Dynamic Pattern", "title": "" }, { "docid": "5931169b6433d77496dfc638988399eb", "text": "Image annotation has been an important task for visual information retrieval. It usually involves a multi-class multi-label classification problem. To solve this problem, many researches have been conducted during last two decades, although most of the proposed methods rely on the training data with the ground truth. To prepare such a ground truth is an expensive and laborious task that cannot be easily scaled, and “semantic gaps” between low-level visual features and high-level semantics still remain. In this paper, we propose a novel approach, ontology based supervised learning for multi-label image annotation, where classifiers' training is conducted using easily gathered Web data. Moreover, it takes advantage of both low-level visual features and high-level semantic information of given images. Experimental results using 0.507 million Web images database show effectiveness of the proposed framework over existing method.", "title": "" }, { "docid": "940f460457b117c156b6e39e9586a0b9", "text": "The flipped classroom is an innovative pedagogical approach that focuses on learner-centered instruction. The purposes of this report were to illustrate how to implement the flipped classroom and to describe students' perceptions of this approach within 2 undergraduate nutrition courses. The template provided enables faculty to design before, during, and after class activities and assessments based on objectives using all levels of Bloom's taxonomy. The majority of the 142 students completing the evaluation preferred the flipped method compared with traditional pedagogical strategies. 
The process described in the report was successful for both faculty and students.", "title": "" }, { "docid": "85b3f55fffff67b9d3a0305b258dcd8e", "text": "Sézary syndrome (SS) has a poor prognosis and few guidelines for optimizing therapy. The US Cutaneous Lymphoma Consortium, to improve clinical care of patients with SS and encourage controlled clinical trials of promising treatments, undertook a review of the published literature on therapeutic options for SS. An overview of the immunopathogenesis and standardized review of potential current treatment options for SS including metabolism, mechanism of action, overall efficacy in mycosis fungoides and SS, and common or concerning adverse effects is first discussed. The specific efficacy of each treatment for SS, both as monotherapy and combination therapy, is then reported using standardized criteria for both SS and response to therapy with the type of study defined by a modification of the US Preventive Services guidelines for evidence-based medicine. Finally, guidelines for the treatment of SS and suggestions for adjuvant treatment are noted.", "title": "" }, { "docid": "d6ee313e66b33bfebc87bb9174aed00f", "text": "The majority of arm amputees live in developing countries and cannot afford prostheses beyond cosmetic hands with simple grippers. Customized hand prostheses with high performance are too expensive for the average arm amputee. Currently, commercially available hand prostheses use costly and heavy DC motors for actuation. This paper presents an inexpensive hand prosthesis, which uses a 3D printable design to reduce the cost of customizable parts and a novel electro-thermal actuator based on nylon 6-6 polymer muscles. The prosthetic hand was tested and found to be able to grasp a variety of shapes 100% of the time tested (sphere, cylinder, cube, and card) and other commonly used tools. Grip times for each object were repeatable with small standard deviations. With a low estimated material cost of $170 for actuation, this prosthesis has the potential to be used as a low-cost and high-performance system.", "title": "" }, { "docid": "aa749c00010e5391710738cc235c1c35", "text": "Traditional summarization initiatives have been focused on specific types of documents such as articles, reviews, videos, image feeds, or tweets, a practice which may result in pigeonholing the summarization task in the context of modern, content-rich multimedia collections. Consequently, much of the research to date has revolved around mostly toy problems in narrow domains and working on single-source media types. We argue that summarization and story generation systems need to refocus the problem space in order to meet the information needs in the age of user-generated content in different formats and languages. Here we create a framework for flexible multimedia storytelling. Narratives, stories, and summaries carry a set of challenges in big data and dynamic multi-source media that give rise to new research in spatial-temporal representation, viewpoint generation, and explanation.", "title": "" }, { "docid": "4a1559bd8a401d3273c34ab20931611d", "text": "Spiking Neural Networks (SNNs) are widely regarded as the third generation of artificial neural networks, and are expected to drive new classes of recognition, data analytics and computer vision applications. However, large-scale SNNs (e.g., of the scale of the human visual cortex) are highly compute and data intensive, requiring new approaches to improve their efficiency. 
Complementary to prior efforts that focus on parallel software and the design of specialized hardware, we propose AxSNN, the first effort to apply approximate computing to improve the computational efficiency of evaluating SNNs. In SNNs, the inputs and outputs of neurons are encoded as a time series of spikes. A spike at a neuron's output triggers updates to the potentials (internal states) of neurons to which it is connected. AxSNN determines spike-triggered neuron updates that can be skipped with little or no impact on output quality and selectively skips them to improve both compute and memory energy. Neurons that can be approximated are identified by utilizing various static and dynamic parameters such as the average spiking rates and current potentials of neurons, and the weights of synaptic connections. Such a neuron is placed into one of many approximation modes, wherein the neuron is sensitive only to a subset of its inputs and sends spikes only to a subset of its outputs. A controller periodically updates the approximation modes of neurons in the network to achieve energy savings with minimal loss in quality. We apply AxSNN to both hardware and software implementations of SNNs. For hardware evaluation, we designed SNNAP, a Spiking Neural Network Approximate Processor that embodies the proposed approximation strategy, and synthesized it to 45nm technology. The software implementation of AxSNN was evaluated on a 2.7 GHz Intel Xeon server with 128 GB memory. Across a suite of 6 image recognition benchmarks, AxSNN achieves 1.4–5.5x reduction in scalar operations for network evaluation, which translates to 1.2–3.62x and 1.26–3.9x improvement in hardware and software energies respectively, for no loss in application quality. Progressively higher energy savings are achieved with modest reductions in output quality.", "title": "" }, { "docid": "d6602271d7024f7d894b14da52299ccc", "text": "BACKGROUND\nMost articles on face composite tissue allotransplantation have considered ethical and immunologic aspects. Few have dealt with the technical aspects of graft procurement. The authors report the technical difficulties involved in procuring a lower face graft for allotransplantation.\n\n\nMETHODS\nAfter a preclinical study of 20 fresh cadavers, the authors carried out an allotransplantation of the lower two-thirds of the face on a patient in January of 2007. The graft included all the perioral muscles, the facial nerves (VII, V2, and V3) and, for the first time, the parotid glands.\n\n\nRESULTS\nThe preclinical study and clinical results confirm that complete revascularization of a graft consisting of the lower two-thirds of the face is possible from a single facial pedicle. All dissections were completed within 3 hours. Graft procurement for the clinical study took 4 hours. The authors harvested the soft tissues of the face en bloc to save time and to prevent tissue injury. They restored the donor's face within approximately 4 hours, using a resin mask colored to resemble the donor's skin tone. All nerves were easily reattached. Voluntary activity was detected on clinical examination 5 months postoperatively, and electromyography confirmed nerve regrowth, with activity predominantly on the left side. The patient requested local anesthesia for biopsies performed in month 4.\n\n\nCONCLUSIONS\nPartial facial composite tissue allotransplantation of the lower two-thirds of the face is technically feasible, with a good cosmetic and functional outcome in selected clinical cases. 
Flaps of this type establish vascular and neurologic connections in a reliable manner and can be procured with a rapid, standardized procedure.", "title": "" }, { "docid": "8385f72bd060eee8c59178bc0b74d1e3", "text": "Gesture recognition plays an important role in human-computer interaction. However, most existing methods are complex and time-consuming, which limit the use of gesture recognition in real-time environments. In this paper, we propose a static gesture recognition system that combines depth information and skeleton data to classify gestures. Through feature fusion, hand digit gestures of 0-9 can be recognized accurately and efficiently. According to the experimental results, the proposed gesture recognition system is effective and robust, which is invariant to complex background, illumination changes, reversal, structural distortion, rotation etc. We have tested the system both online and offline which proved that our system is satisfactory to real-time requirements, and therefore it can be applied to gesture recognition in real-world human-computer interaction systems.", "title": "" }, { "docid": "af49fef0867a951366cfb21288eeb3ed", "text": "As a discriminative method of one-shot learning, Siamese deep network allows recognizing an object from a single exemplar with the same class label. However, it does not take the advantage of the underlying structure and relationship among a multitude of instances since it only relies on pairs of instances for training. In this paper, we propose a quadruplet deep network to examine the potential connections among the training instances, aiming to achieve a more powerful representation. We design four shared networks that receive multi-tuple of instances as inputs and are connected by a novel loss function consisting of pair-loss and tripletloss. According to the similarity metric, we select the most similar and the most dissimilar instances as the positive and negative inputs of triplet loss from each multi-tuple. We show that this scheme improves the training performance and convergence speed. Furthermore, we introduce a new weighted pair loss for an additional acceleration of the convergence. We demonstrate promising results for model-free tracking-by-detection of objects from a single initial exemplar in the Visual Object Tracking benchmark.", "title": "" }, { "docid": "2dbffa465a1d0b9c7e2ae1044dd0cdcb", "text": "Total variation denoising is a nonlinear filtering method well suited for the estimation of piecewise-constant signals observed in additive white Gaussian noise. The method is defined by the minimization of a particular nondifferentiable convex cost function. This letter describes a generalization of this cost function that can yield more accurate estimation of piecewise constant signals. The new cost function involves a nonconvex penalty (regularizer) designed to maintain the convexity of the cost function. The new penalty is based on the Moreau envelope. The proposed total variation denoising method can be implemented using forward–backward splitting.", "title": "" }, { "docid": "9ff6d7a36646b2f9170bd46d14e25093", "text": "Psychedelic drugs such as LSD and psilocybin are often claimed to be capable of inducing life-changing experiences described as mystical or transcendental, especially if high doses are taken. 
The present study examined possible enduring effects of such experiences by comparing users of psychedelic drugs (n = 88), users of nonpsychedelic illegal drugs (e.g., marijuana, amphetamines) (n = 29) and non illicit drug-using social drinkers (n = 66) on questionnaire measures of values, beliefs and emotional empathy. Samples were obtained from Israel (n = 110) and Australia (n = 73) in a cross-cultural comparison to see if values associated with psychedelic drug use transcended culture of origin. Psychedelic users scored significantly higher on mystical beliefs (e.g., oneness with God and the universe) and life values of spirituality and concern for others than the other groups, and lower on the value of financial prosperity, irrespective of culture of origin. Users of nonpsychedelic illegal drugs scored significantly lower on a measure of coping ability than both psychedelic users and non illicit drug users. Both groups of illegal drug users scored significantly higher on empathy than non illicit drug users. Results are discussed in the context of earlier findings from Pahnke (1966) and Doblin (1991) of the transformative effect of psychedelic experiences, although the possibility remains that present findings reflect predrug characteristics of those who chose to take psychedelic drugs rather than effects of the drugs themselves.", "title": "" }, { "docid": "bde769df506e361bf374bd494fc5db6f", "text": "Molded interconnect devices (MID) allow the realization of electronic circuits on injection molded thermoplastics. MID antennas can be manufactured as part of device casings without the need for additional printed circuit boards or attachment of antennas printed on foil. Baluns, matching networks, amplifiers and connectors can be placed on the polymer in the vicinity of the antenna. A MID dipole antenna for 1 GHz is designed, manufactured and measured. A prototype of the antenna is built with laser direct structuring (LDS) on a Xantar LDS 3720 substrate. Measured return loss and calibrated gain patterns are compared to simulation results.", "title": "" }, { "docid": "7838934c12f00f987f6999460fc38ca1", "text": "The Internet has fostered an unconventional and powerful style of collaboration: \"wiki\" web sites, where every visitor has the power to become an editor. In this paper we investigate the dynamics of Wikipedia, a prominent, thriving wiki. We make three contributions. First, we introduce a new exploratory data analysis tool, the history flow visualization, which is effective in revealing patterns within the wiki context and which we believe will be useful in other collaborative situations as well. Second, we discuss several collaboration patterns highlighted by this visualization tool and corroborate them with statistical analysis. Third, we discuss the implications of these patterns for the design and governance of online collaborative social spaces. We focus on the relevance of authorship, the value of community surveillance in ameliorating antisocial behavior, and how authors with competing perspectives negotiate their differences.", "title": "" }, { "docid": "d050730d7a5bd591b805f1b9729b0f2d", "text": "In plant phenotyping, it has become important to be able to measure many features on large image sets in order to aid genetic discovery. The size of the datasets, now often captured robotically, often precludes manual inspection, hence the motivation for finding a fully automated approach. 
Deep learning is an emerging field that promises unparalleled results on many data analysis problems. Building on artificial neural networks, deep approaches have many more hidden layers in the network, and hence have greater discriminative and predictive power. We demonstrate the use of such approaches as part of a plant phenotyping pipeline. We show the success offered by such techniques when applied to the challenging problem of image-based plant phenotyping and demonstrate state-of-the-art results (>97% accuracy) for root and shoot feature identification and localization. We use fully automated trait identification using deep learning to identify quantitative trait loci in root architecture datasets. The majority (12 out of 14) of manually identified quantitative trait loci were also discovered using our automated approach based on deep learning detection to locate plant features. We have shown deep learning-based phenotyping to have very good detection and localization accuracy in validation and testing image sets. We have shown that such features can be used to derive meaningful biological traits, which in turn can be used in quantitative trait loci discovery pipelines. This process can be completely automated. We predict a paradigm shift in image-based phenotyping brought about by such deep learning approaches, given sufficient training sets.", "title": "" } ]
scidocsrr
7c88fa3651433c7b166bdd061d24bbcd
Revisiting the Impact of Classification Techniques on the Performance of Defect Prediction Models
[ { "docid": "3b7ac492add26938636ae694ebb14b65", "text": "This paper presents the results of a study conducted at the University of Maryland in which we experimentally investigated the suite of Object-Oriented (OO) design metrics introduced by [Chidamber&Kemerer, 1994]. In order to do this, we assessed these metrics as predictors of fault-prone classes. This study is complementary to [Li&Henry, 1993] where the same suite of metrics had been used to assess frequencies of maintenance changes to clas es. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on experimental results, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber&Kemerer’s OO metrics appear to be useful to predict class fault-proneness during the early phases of the life-cycle. We also showed that they are, on our data set, better predictors than “traditional” code metrics, which can only be collected at a later phase of the software development processes. Key-words: Object-Oriented Design Metrics; Error Prediction Model; Object-Oriented Software Development; C++ Programming Language. * V. Basili and W. Melo are with the University of Maryland, Institute for Advanced Computer Studies and Computer Science Dept., A. V. Williams Bldg., College Park, MD 20742 USA. {basili | melo}@cs.umd.edu L. Briand is with the CRIM, 1801 McGill College Av., Montréal (Québec), H3A 2N4, Canada. lbriand@crim.ca Technical Report, Univ. of Maryland, Dep. of Computer Science, College Park, MD, 20742 USA. April 1995. CS-TR-3443 2 UMIACS-TR-95-40 1 . Introduction", "title": "" }, { "docid": "e3a2b7d38a777c0e7e06d2dc443774d5", "text": "The area under the ROC (Receiver Operating Characteristic) curve, or simply AUC, has been widely used to measure model performance for binary classification tasks. It can be estimated under parametric, semiparametric and nonparametric assumptions. The non-parametric estimate of the AUC, which is calculated from the ranks of predicted scores of instances, does not always sufficiently take advantage of the predicted scores. This problem is tackled in this paper. On the basis of the ranks and the original values of the predicted scores, we introduce a new metric, called a scored AUC or sAUC. Experimental results on 20 UCI data sets empirically demonstrate the validity of the new metric for classifier evaluation and selection.", "title": "" } ]
[ { "docid": "2cec15eeca04fe361f7d11e63b9f2fa7", "text": "We construct a local Lax-Friedrichs type positivity-preserving flux for compressible Navier-Stokes equations, which can be easily extended to high dimensions for generic forms of equations of state, shear stress tensor and heat flux. With this positivity-preserving flux, any finite volume type schemes including discontinuous Galerkin (DG) schemes with strong stability preserving Runge-Kutta time discretizations satisfy a weak positivity property. With a simple and efficient positivity-preserving limiter, high order explicit Runge-Kutta DG schemes are rendered preserving the positivity of density and internal energy without losing local conservation or high order accuracy. Numerical tests suggest that the positivity-preserving flux and the positivity-preserving limiter do not induce excessive artificial viscosity, and the high order positivity-preserving DG schemes without other limiters can produce satisfying non-oscillatory solutions when the nonlinear diffusion in compressible Navier-Stokes equations is accurately resolved.", "title": "" }, { "docid": "458794281ad44bfad61c686f69e8c067", "text": "This study examined the difference in intra-abdominal pressure (IAP) between abdominal bracing and hollowing in relation to trunk muscular activities. IAP with a pressure transducer placed in the rectum and surface electromyograms for rectus abdominis, external oblique, internal oblique, and erector spinae during the 2 tasks were obtained in 7 young adult men. The difference between IAP at rest and its peak value (ΔIAPmax) showed high intra- and inter-day repeatability, and was significantly greater in abdominal bracing (116.4±15.0 mmHg) than in abdominal hollowing (9.9±4.5 mmHg). The trunk muscular activities at ΔIAPmax were significantly higher in abdominal bracing than in abdominal hollowing, and in the internal oblique than in the other 3 muscles. In both abdominal bracing and hollowing, the changes in IAP during the tasks were linearly correlated with those in trunk muscular activities, but the slope of the regression line for the relationship differed between the 2 tasks. The current results indicate that 1) abdominal bracing is an effective maneuver to elevate IAP compared with abdominal hollowing, and 2) in the 2 tasks, the changes in IAP are linked with those in trunk muscular activities, but the association is task-specific.", "title": "" }, { "docid": "50e9cf4ff8265ce1567a9cc82d1dc937", "text": "Thu, 06 Dec 2018 02:11:00 GMT bayesian reasoning and machine learning pdf Bayesian Reasoning and Machine Learning [David Barber] on Amazon.com. *FREE* shipping on qualifying offers. Machine learning methods extract value from vast data sets ... Thu, 06 Dec 2018 14:35:00 GMT Bayesian Reasoning and Machine Learning: David Barber ... A Bayesian network, Bayes network, belief network, Bayes(ian) model or probabilistic directed acyclic graphical model is a probabilistic graphical model (a type of ... Sat, 08 Dec 2018 04:53:00 GMT Bayesian network Wikipedia Bayesian Reasoning and Machine Learning. The book is available in hardcopy from Cambridge University Press. The publishers have kindly agreed to allow the online ... Sun, 09 Dec 2018 20:51:00 GMT Bayesian Reasoning and Machine Learning, David Barber Machine learning (ML) is the study of algorithms and mathematical models that computer systems use to progressively improve their performance on a specific task. 
Mon, 10 Dec 2018 14:02:00 GMT Machine learning Wikipedia Your friends and colleagues are talking about something called \"Bayes' Theorem\" or \"Bayes' Rule\", or something called Bayesian reasoning. They sound really ... Mon, 10 Dec 2018 14:24:00 GMT Yudkowsky Bayes' Theorem NIPS 2016 Tutorial on ML Methods for Personalization with Application to Medicine. More here. UAI 2017 Tutorial on Machine Learning and Counterfactual Reasoning for ... Thu, 06 Dec 2018 15:33:00 GMT Suchi Saria – Machine Learning, Computational Health ... Gaussian Processes and Kernel Methods Gaussian processes are non-parametric distributions useful for doing Bayesian inference and learning on unknown functions. Mon, 10 Dec 2018 05:12:00 GMT Machine Learning Group Publications University of This practical introduction is geared towards scientists who wish to employ Bayesian networks for applied research using the BayesiaLab software platform. Sun, 09 Dec 2018 17:17:00 GMT Bayesian Networks & BayesiaLab: A Practical Introduction ... Automated Bitcoin Trading via Machine Learning Algorithms Isaac Madan Department of Computer Science Stanford University Stanford, CA 94305 imadan@stanford.edu Tue, 27 Nov 2018 20:01:00 GMT Automated Bitcoin Trading via Machine Learning Algorithms 2.3. Naïve Bayesian classifier. A Naïve Bayesian classifier generally seems very simple; however, it is a pioneer in most information and computational applications ... Sun, 09 Dec 2018 03:48:00 GMT Proposed efficient algorithm to filter spam using machine ... Machine Learning: A Probabilistic Perspective (Adaptive Computation and Machine Learning) [Kevin P. Murphy, Francis Bach] on Amazon.com. *FREE* shipping on qualifying ... Sun, 01 Jul 2018 19:30:00 GMT Machine Learning: A Probabilistic Perspective (Adaptive ... So it’s pretty clear by now that statistics and machine learning aren’t very different fields. I was recently pointed to a very amusing comparison by the ... Fri, 07 Dec 2018 19:56:00 GMT Statistics vs. Machine Learning, fight! | AI and Social ... Need help with Statistics for Machine Learning? Take my free 7-day email crash course now (with sample code). Click to sign-up and also get a free PDF Ebook version ... Thu, 06 Dec 2018 23:39:00 GMT Statistics for Evaluating Machine Learning Models", "title": "" }, { "docid": "3fc3ea7bb6c5342bcbc9d046b0a2537f", "text": "We need to make substantial changes to how we conduct research. First, in response to heightened concern that our published research literature is incomplete and untrustworthy, we need new requirements to ensure research integrity. These include prespecification of studies whenever possible, avoidance of selection and other inappropriate data-analytic practices, complete reporting, and encouragement of replication. Second, in response to renewed recognition of the severe flaws of null-hypothesis significance testing (NHST), we need to shift from reliance on NHST to estimation and other preferred techniques. The new statistics refers to recommended practices, including estimation based on effect sizes, confidence intervals, and meta-analysis. The techniques are not new, but adopting them widely would be new for many researchers, as well as highly beneficial. This article explains why the new statistics are important and offers guidance for their use. 
It describes an eight-step new-statistics strategy for research with integrity, which starts with formulation of research questions in estimation terms, has no place for NHST, and is aimed at building a cumulative quantitative discipline.", "title": "" }, { "docid": "e2b1c4da96ea677fd50aa44abc86d119", "text": "The technology of automatic document summarization is maturing and may provide a solution to the information overload problem. Nowadays, document summarization plays an important role in information retrieval. With a large volume of documents, presenting the user with a summary of each document greatly facilitates the task of finding the desired documents. Document summarization is a process of automatically creating a compressed version of a given document that provides useful information to users, and multi-document summarization is to produce a summary delivering the majority of information content from a set of documents about an explicit or implicit main topic. In our study we focus on sentence based extractive document summarization. We propose the generic document summarization method which is based on sentence clustering. The proposed approach continues the sentence-clustering based extractive summarization methods proposed in Alguliev [Alguliev, R. M., Aliguliyev, R. M., Bagirov, A. M. (2005). Global optimization in the summarization of text documents. Automatic Control and Computer Sciences 39, 42–47], Aliguliyev [Aliguliyev, R. M. (2006). A novel partitioning-based clustering method and generic document summarization. In Proceedings of the 2006 IEEE/WIC/ACM international conference on web intelligence and intelligent agent technology (WI–IAT 2006 Workshops) (WI–IATW’06), 18–22 December (pp. 626–629) Hong Kong, China], Alguliev and Alyguliev [Alguliev, R. M., Alyguliev, R. M. (2007). Summarization of text-based documents with a determination of latent topical sections and information-rich sentences. Automatic Control and Computer Sciences 41, 132–140] and Aliguliyev [Aliguliyev, R. M. (2007). Automatic document summarization by sentence extraction. Journal of Computational Technologies 12, 5–15.]. The purpose of the present paper is to show that the summarization result depends not only on the optimized function but also on the similarity measure. The experimental results on open benchmark datasets from DUC01 and DUC02 show that our proposed approach can improve the performance compared to state-of-the-art summarization approaches. 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "7fe0c40d6f62d24b4fb565d3341c1422", "text": "Instead of a standard support vector machine (SVM) that classifies points by assigning them to one of two disjoint half-spaces, points are classified by assigning them to the closest of two parallel planes (in input or feature space) that are pushed apart as far as possible. This formulation, which can also be interpreted as regularized least squares and considered in the much more general context of regularized networks [8, 9], leads to an extremely fast and simple algorithm for generating a linear or nonlinear classifier that merely requires the solution of a single system of linear equations. In contrast, standard SVMs solve a quadratic or a linear program that require considerably longer computational time. 
Computational results on publicly available datasets indicate that the proposed proximal SVM classifier has comparable test set correctness to that of standard SVM classifiers, but with considerably faster computational time that can be an order of magnitude faster. The linear proximal SVM can easily handle large datasets as indicated by the classification of a 2 million point 10-attribute set in 20.8 seconds. All computational results are based on 6 lines of MATLAB code.", "title": "" }, { "docid": "4a9dbf259f14e5874cda6782cb8f981a", "text": "The concept of the Safe diagram was introduced 30 years ago (Singh and Schiffer, 1982) for the analysis of the vibration characteristics of packeted bladed discs for steam turbines. A detailed description of the Safe diagram for steam turbine blades was presented 25 years ago in the 17th Turbo Symposium (Singh et al., 1988). Since that time it has found application in the design and failure analysis of many turbomachines, e.g. steam turbines, centrifugal compressors, axial compressors, expanders etc. The theory was justified using the argument of natural modes of vibration containing single harmonics and alternating forcing represented by a pure sine wave around 360 degrees applied to the bladed disk. This case is referred to as a tuned system. It was also explained that a packeted bladed disc is a mistuned system where geometrical symmetry is broken deliberately by breaking the shroud in many places. This is a normal practice which provides blade packets design. This is known as deliberate geometrical mistuning. This mistuning gave rise to the frequencies of certain modes being split into two different modes which otherwise existed in duplicate. Natural modes of this type of construction exhibited impurity, i.e. they contained many harmonics in place of just one as occurs in a tuned case. As a result, this phenomenon gives rise to a different system response for each split mode. Throughout the years that have passed, the Safe diagram has been used for any mistuned system: random, known or deliberate. Many co-workers and friends have asked me to write the history of the evolution and of the first application of this concept and its application in the more general case. This paper describes the application of the Safe diagram for the general case of tuned and mistuned systems.", "title": "" }, { "docid": "7f14c41cc6ca21e90517961cf12c3c9a", "text": "Probiotic microorganisms have been documented over the past two decades to play a role in cholesterol-lowering properties via various clinical trials. Several mechanisms have also been proposed and the ability of these microorganisms to deconjugate bile via production of bile salt hydrolase (BSH) has been widely associated with their cholesterol lowering potentials in prevention of hypercholesterolemia. Deconjugated bile salts are more hydrophobic than their conjugated counterparts, thus are less reabsorbed through the intestines resulting in higher excretion into the feces. Replacement of new bile salts from cholesterol as a precursor subsequently leads to decreased serum cholesterol levels. However, some controversies have risen attributed to the activities of deconjugated bile acids that repress the synthesis of bile acids from cholesterol. Deconjugated bile acids have higher binding affinity towards some orphan nuclear receptors namely the farnesoid X receptor (FXR), leading to a suppressed transcription of the enzyme cholesterol 7-alpha hydroxylase (7AH), which is responsible for bile acid synthesis from cholesterol. 
This notion was further corroborated by our current docking data, which indicated that deconjugated bile acids have higher propensities to bind with the FXR receptor as compared to conjugated bile acids. Bile acids-activated FXR also induces transcription of the IBABP gene, leading to enhanced recycling of bile acids from the intestine back to the liver, which subsequently reduces the need for new bile formation from cholesterol. Possible detrimental effects due to increased deconjugation of bile salts such as malabsorption of lipids, colon carcinogenesis, gallstones formation and altered gut microbial populations, which contribute to other varying gut diseases, were also included in this review. Our current findings and review substantiate the need to look beyond BSH deconjugation as a single factor/mechanism in strain selection for hypercholesterolemia, and/or as a sole mean to justify a cholesterol-lowering property of probiotic strains.", "title": "" }, { "docid": "bd2092eabd267771f3d346a5a55b9070", "text": "Community Question Answering (CQA) websites have become valuable repositories which host a massive volume of human knowledge. To maximize the utility of such knowledge, it is essential to evaluate the quality of an existing question or answer, especially soon after it is posted on the CQA website. In this paper, we study the problem of inferring the quality of questions and answers through a case study of a software CQA (Stack Overflow). Our key finding is that the quality of an answer is strongly positively correlated with that of its question. Armed with this observation, we propose a family of algorithms to jointly predict the quality of questions and answers, for both quantifying numerical quality scores and differentiating the high-quality questions/answers from those of low quality. We conduct extensive experimental evaluations to demonstrate the effectiveness and efficiency of our methods.", "title": "" }, { "docid": "7fed1248efb156c8b2585147e2791ed7", "text": "In [1], we proposed a graph-based formulation that links and clusters person hypotheses over time by solving a minimum cost subgraph multicut problem. In this paper, we modify and extend [1] in three ways: 1) We introduce a novel local pairwise feature based on local appearance matching that is robust to partial occlusion and camera motion. 2) We perform extensive experiments to compare different pairwise potentials and to analyze the robustness of the tracking formulation. 3) We consider a plain multicut problem and remove outlying clusters from its solution. This allows us to employ an efficient primal feasible optimization algorithm that is not applicable to the subgraph multicut problem of [1]. Unlike the branch-and-cut algorithm used there, this efficient algorithm used here is applicable to long videos and many detections. Together with the novel feature, it eliminates the need for the intermediate tracklet representation of [1]. We demonstrate the effectiveness of our overall approach on the MOT16 benchmark [2], achieving state-of-art performance.", "title": "" }, { "docid": "5356a208f0f6eb4659b2a09a106bab8d", "text": "Objective: Traditional Cognitive Training with paper-pencil tasks (PPCT) and Computer-Based Cognitive Training (C-BCT) both are effective for people with Mild Cognitive Impairment (MCI). The aim of this study is to evaluate the efficacy of a C-BCT program versus a PPCT one. 
Methods: One hundred and twenty four (n=124) people with amnesic & multiple domains MCI (aMCImd) diagnosis were randomly assigned in two groups, a PPCT group (n=65), and a C-BCT group (n=59). The groups were matched at baseline in age, gender, education, cognitive and functional performance. Both groups attended 48 weekly 1-hour sessions of attention and executive function training for 12 months. Neuropsychological assessment was performed at baseline and 12 months later. Results: At the follow up, the PPCT group was better than the C-BCT group in visual selective attention (p≤ 0.022). The C-BCT group showed improvement in working memory (p=0.042) and in speed of switching of attention (p=0.012), while the PPCT group showed improvement in general cognitive function (p=0.005), learning ability (p=0.000), delayed verbal recall (p=0.000), visual perception (p=0.013) and visual memory (p=0.000), verbal fluency (p=0.000), visual selective attention (p=0.021), speed of switching of attention (p=0.001), visual selective attention/multiple choices (p=0.010) and Activities of Daily Living (ADL) as well (p=0.001). Conclusion: Both C-BCT and PPCT are beneficial for people with aMCImd concerning cognitive functions. However, the administration of a traditional PPCT program seems to affect a greater range of cognitive abilities and transfer the primary cognitive benefit in real life.", "title": "" }, { "docid": "3e807b9119bc13c2ffbdc57e79c6523e", "text": "Social network has gained remarkable attention in the last decade. Accessing social network sites such as Twitter, Facebook, LinkedIn and Google+ through the internet and the web 2.0 technologies has become more affordable. People are becoming more interested in and relying on social network for information, news and opinion of other users on diverse subject matters. The heavy reliance on social network sites causes them to generate massive data characterised by three computational issues namely size, noise and dynamism. These issues often make social network data very complex to analyse manually, resulting in the pertinent use of computational means of analysing them. Data mining provides a wide range of techniques for detecting useful knowledge from massive datasets like trends, patterns and rules [44]. Data mining techniques are used for information retrieval, statistical modelling and machine learning. These techniques employ data pre-processing, data analysis, and data interpretation processes in the course of data analysis. This survey discusses different data mining techniques used in mining diverse aspects of the social network over decades going from the historical techniques to the up-to-date models, including our novel technique named TRCM. All the techniques covered in this survey are listed in Table 1 including the tools employed as well as names of their authors.", "title": "" }, { "docid": "55f118976784a7244859e0256c4660e3", "text": "The development of content based image retrieval (CBIR) systems used for image archiving continues to be one of the important research topics. Although some studies have been presented on general image archiving, proposed CBIR systems for archiving of medical images are not very efficient. In the presented study, the retrieval efficiency rate of spatial methods used for feature extraction is examined for medical image retrieval systems. 
The investigated algorithms in this study depend on gray level co-occurrence matrix (GLCM), gray level run length matrix (GLRLM), and Gabor wavelet accepted as spatial methods. In the experiments, the database is built including hundreds of medical images such as brain, lung, sinus, and bone. The results obtained in this study show that queries based on statistics obtained from GLCM are satisfied. However, it is observed that Gabor Wavelet has been the most effective and accurate method.", "title": "" }, { "docid": "aa5c5e0bf395df021f007252cd55cd4d", "text": "The Internet of Things promises several exciting opportunities and added value services in several industrial contexts. Such opportunities are enabled by the interconnectivity and cooperation between various things. However, these promises are still facing the interoperability challenge. Semantic technology and linked data are well positioned to tackle the heterogeneity problem. Several efforts contributed to the development of ontology editors and tools for storing and querying linked data. However, despite the potential and the promises, semantic technology remains in the hands of the few, a minority of experts. In this paper, we propose a model driven methodology and a software module (OLGA) that completes existing ontology development libraries and frameworks in order to accelerate the adoption of ontology-based IoT application development. We validated our approach using the ETSI SAREF ontology.", "title": "" }, { "docid": "ff27912cfef17e66266bfcd013a874ee", "text": "The purpose of this note is to describe a useful lesson we learned on authentication protocol design. In a recent article [9], we presented a simple authentication protocol to illustrate the concept of a trusted server. The protocol has a flaw, which was brought to our attention by Martín Abadi of DEC. In what follows, we first describe the protocol and its flaw, and how the flaw was introduced in the process of deriving the protocol from its correct full information version. We then introduce a principle, called the Principle of Full Information, and explain how its use could have prevented the protocol flaw. We believe the Principle of Full Information is a useful authentication protocol design principle, and advocate its use. Lastly, we present several heuristics for simplifying full information protocols and illustrate their application to a mutual authentication protocol.", "title": "" }, { "docid": "7ab232fbbda235c42e0dabb2b128ed59", "text": "Learning visual representations from web data has recently attracted attention for object recognition. Previous studies have mainly focused on overcoming label noise and data bias and have shown promising results by learning directly from web data. However, we argue that it might be better to transfer knowledge from existing human labeling resources to improve performance at nearly no additional cost. In this paper, we propose a new semi-supervised method for learning via web data. Our method has the unique design of exploiting strong supervision, i.e., in addition to standard image-level labels, our method also utilizes detailed annotations including object bounding boxes and part landmarks. By transferring as much knowledge as possible from existing strongly supervised datasets to weakly supervised web images, our method can benefit from sophisticated object recognition algorithms and overcome several typical problems found in webly-supervised learning. 
We consider the problem of fine-grained visual categorization, in which existing training resources are scarce, as our main research objective. Comprehensive experimentation and extensive analysis demonstrate encouraging performance of the proposed approach, which, at the same time, delivers a new pipeline for fine-grained visual categorization that is likely to be highly effective for real-world applications.", "title": "" }, { "docid": "e8bc7d3616dc355c03f34c912b69de3f", "text": "Manga layout is a core component in manga production, characterized by its unique styles. However, stylistic manga layouts are difficult for novices to produce as they require hands-on experience and domain knowledge. In this paper, we propose an approach to automatically generate a stylistic manga layout from a set of input artworks with user-specified semantics, thus allowing less-experienced users to create high-quality manga layouts with minimal efforts. We first introduce three parametric style models that encode the unique stylistic aspects of manga layouts, including layout structure, panel importance, and panel shape. Next, we propose a two-stage approach to generate a manga layout: 1) an initial layout is created that best fits the input artworks and layout structure model, according to a generative probabilistic framework; 2) the layout and artwork geometries are jointly refined using an efficient optimization procedure, resulting in a professional-looking manga layout. Through a user study, we demonstrate that our approach enables novice users to easily and quickly produce higher-quality layouts that exhibit realistic manga styles, when compared to a commercially-available manual layout tool.", "title": "" }, { "docid": "7adffc2dd1d6412b4bb01b38ced51c24", "text": "With the popularity of the Internet and mobile intelligent terminals, the number of mobile applications is exploding. Mobile intelligent terminals tend to be the mainstream way of people's work and daily life online in place of PC terminals. Mobile application systems inevitably bring some security problems while providing convenience for people, and become a main target of hackers. Therefore, it is urgent to strengthen the security detection of mobile applications. This paper divides mobile application security detection into client security detection and server security detection. We propose a combined static and dynamic security detection method to detect the client side. We provide a method to get network information of the server by capturing and analyzing mobile application traffic, and propose a fuzzy testing method based on the HTTP protocol to detect server-side security vulnerabilities. Finally, on the basis of this, an automated platform for security detection of mobile application systems is developed. Experiments show that the platform can detect the vulnerabilities of mobile application client and server effectively, and realize the automation of mobile application security detection. It can also reduce the cost of mobile security detection and enhance the security of mobile applications.", "title": "" }, { "docid": "1a7e2ca13d00b6476820ad82c2a68780", "text": "To understand the dynamics of mental health, it is essential to develop measures for the frequency and the patterning of mental processes in every-day-life situations. The Experience-Sampling Method (ESM) is an attempt to provide a valid instrument to describe variations in self-reports of mental processes. 
It can be used to obtain empirical data on the following types of variables: a) frequency and patterning of daily activity, social interaction, and changes in location; b) frequency, intensity, and patterning of psychological states, i.e., emotional, cognitive, and conative dimensions of experience; c) frequency and patterning of thoughts, including quality and intensity of thought disturbance. The article reviews practical and methodological issues of the ESM and presents evidence for its short- and long-term reliability when used as an instrument for assessing the variables outlined above. It also presents evidence for validity by showing correlation between ESM measures on the one hand and physiological measures, one-time psychological tests, and behavioral indices on the other. A number of studies with normal and clinical populations that have used the ESM are reviewed to demonstrate the range of issues to which the technique can be usefully applied.", "title": "" }, { "docid": "9112022c94addc435fc5059a2b2df5e6", "text": "This paper introduces a new type of summarization task, known as microblog summarization, which aims to synthesize content from multiple microblog posts on the same topic into a human-readable prose description of fixed length. Our approach leverages (1) a generative model which induces event structures from text and (2) a user behavior model which captures how users convey relevant content.", "title": "" } ]
scidocsrr
9eeac0fa8aacf08b2adf89d5eacb302c
Information Hiding Techniques: A Tutorial Review
[ { "docid": "efc1a6efe55805609ffc5c0fb6e3115b", "text": "A Note to All Readers This is not an original electronic copy of the master's thesis, but a reproduced version of the authentic hardcopy of the thesis. I lost the original electronic copy during transit from India to USA in December 1999. I could get hold of some of the older version of the files and figures. Some of the missing figures have been scanned from the photocopy version of the hardcopy of the thesis. The scanned figures have been earmarked with an asterisk. Acknowledgement I would like to profusely thank my guide Prof. K. R. Ramakrishnan for is timely advice and encouragement throughout my project work. I would also like to acknowledge Prof. M. Kankanhalli for reviewing my work from time to time. A special note of gratitude goes to Dr. S. H. Srinivas for the support he extended to this work. I would also like to thank all who helped me during my project work.", "title": "" } ]
[ { "docid": "a0c1f5a7e283e1deaff38edff2d8a3b2", "text": "BACKGROUND\nEarly detection of abused children could help decrease mortality and morbidity related to this major public health problem. Several authors have proposed tools to screen for child maltreatment. The aim of this systematic review was to examine the evidence on accuracy of tools proposed to identify abused children before their death and assess if any were adapted to screening.\n\n\nMETHODS\nWe searched in PUBMED, PsycINFO, SCOPUS, FRANCIS and PASCAL for studies estimating diagnostic accuracy of tools identifying neglect, or physical, psychological or sexual abuse of children, published in English or French from 1961 to April 2012. We extracted selected information about study design, patient populations, assessment methods, and the accuracy parameters. Study quality was assessed using QUADAS criteria.\n\n\nRESULTS\nA total of 2 280 articles were identified. Thirteen studies were selected, of which seven dealt with physical abuse, four with sexual abuse, one with emotional abuse, and one with any abuse and physical neglect. Study quality was low, even when not considering the lack of gold standard for detection of abused children. In 11 studies, instruments identified abused children only when they had clinical symptoms. Sensitivity of tests varied between 0.26 (95% confidence interval [0.17-0.36]) and 0.97 [0.84-1], and specificity between 0.51 [0.39-0.63] and 1 [0.95-1]. The sensitivity was greater than 90% only for three tests: the absence of scalp swelling to identify children victims of inflicted head injury; a decision tool to identify physically-abused children among those hospitalized in a Pediatric Intensive Care Unit; and a parental interview integrating twelve child symptoms to identify sexually-abused children. When the sensitivity was high, the specificity was always smaller than 90%.\n\n\nCONCLUSIONS\nIn 2012, there is low-quality evidence on the accuracy of instruments for identifying abused children. Identified tools were not adapted to screening because of low sensitivity and late identification of abused children when they have already serious consequences of maltreatment. Development of valid screening instruments is a pre-requisite before considering screening programs.", "title": "" }, { "docid": "c6b6b7c1955cafa70c4a0c2498591934", "text": "In all Fitzgerald’s fiction women characters are decorative figures of seemingly fragile beauty, though in fact they are often vain, egoistical, even destructive and ruthless and thus frequently the survivors. As prime consumers, they are never capable of idealism or intellectual or artistic interests, nor do they experience passion. His last novel, The Last Tycoon, shows some development; for the first time the narrator is a young woman bent on trying to find the truth about the ruthless social and economic complexity of 1920s Hollywood, but she has no adult role to play in its sexual, artistic or political activities. Women characters are marginalized into purely personal areas of experience.", "title": "" }, { "docid": "b3166dafafda819052f1d40ef04cc304", "text": "Convolutional neural networks (CNNs) have been widely deployed in the fields of computer vision and pattern recognition because of their high accuracy. However, large convolution operations are computing intensive and often require a powerful computing platform such as a graphics processing unit. This makes it difficult to apply CNNs to portable devices. 
The state-of-the-art CNNs, such as MobileNetV2 and Xception, adopt depthwise separable convolution to replace the standard convolution for embedded platforms, which significantly reduces operations and parameters with only limited loss in accuracy. This highly structured model is very suitable for field-programmable gate array (FPGA) implementation. In this brief, a scalable high performance depthwise separable convolution optimized CNN accelerator is proposed. The accelerator can be fit into an FPGA of different sizes, provided the balancing between hardware resources and processing speed. As an example, MobileNetV2 is implemented on Arria 10 SoC FPGA, and the results show this accelerator can classify each picture from ImageNet in 3.75 ms, which is about 266.6 frames per second. The FPGA design achieves 20x speedup if compared to CPU.", "title": "" }, { "docid": "5454fbb1a924f3360a338c11a88bea89", "text": "PURPOSE OF REVIEW\nThis review describes the most common motor neuron disease, ALS. It discusses the diagnosis and evaluation of ALS and the current understanding of its pathophysiology, including new genetic underpinnings of the disease. This article also covers other motor neuron diseases, reviews how to distinguish them from ALS, and discusses their pathophysiology.\n\n\nRECENT FINDINGS\nIn this article, the spectrum of cognitive involvement in ALS, new concepts about protein synthesis pathology in the etiology of ALS, and new genetic associations will be covered. This concept has changed over the past 3 to 4 years with the discovery of new genes and genetic processes that may trigger the disease. As of 2014, two-thirds of familial ALS and 10% of sporadic ALS can be explained by genetics. TAR DNA binding protein 43 kDa (TDP-43), for instance, has been shown to cause frontotemporal dementia as well as some cases of familial ALS, and is associated with frontotemporal dysfunction in ALS.\n\n\nSUMMARY\nThe anterior horn cells control all voluntary movement: motor activity, respiratory, speech, and swallowing functions are dependent upon signals from the anterior horn cells. Diseases that damage the anterior horn cells, therefore, have a profound impact. Symptoms of anterior horn cell loss (weakness, falling, choking) lead patients to seek medical attention. Neurologists are the most likely practitioners to recognize and diagnose damage or loss of anterior horn cells. ALS, the prototypical motor neuron disease, demonstrates the impact of this class of disorders. ALS and other motor neuron diseases can represent diagnostic challenges. Neurologists are often called upon to serve as a \"medical home\" for these patients: coordinating care, arranging for durable medical equipment, and leading discussions about end-of-life care with patients and caregivers. It is important for neurologists to be able to identify motor neuron diseases and to evaluate and treat patients affected by them.", "title": "" }, { "docid": "1bd9cedbbbd26d670dd718fe47c952e7", "text": "Recent advances in conversational systems have changed the search paradigm. Traditionally, a user poses a query to a search engine that returns an answer based on its index, possibly leveraging external knowledge bases and conditioning the response on earlier interactions in the search session. 
In a natural conversation, there is an additional source of information to take into account: utterances produced earlier in a conversation can also be referred to and a conversational IR system has to keep track of information conveyed by the user during the conversation, even if it is implicit. We argue that the process of building a representation of the conversation can be framed as a machine reading task, where an automated system is presented with a number of statements about which it should answer questions. The questions should be answered solely by referring to the statements provided, without consulting external knowledge. The time is right for the information retrieval community to embrace this task, both as a stand-alone task and integrated in a broader conversational search setting. In this paper, we focus on machine reading as a stand-alone task and present the Attentive Memory Network (AMN), an end-to-end trainable machine reading algorithm. Its key contribution is in efficiency, achieved by having a hierarchical input encoder, iterating over the input only once. Speed is an important requirement in the setting of conversational search, as gaps between conversational turns have a detrimental effect on naturalness. On 20 datasets commonly used for evaluating machine reading algorithms we show that the AMN achieves performance comparable to the state-of-the-art models, while using considerably fewer computations.", "title": "" }, { "docid": "9c37d9388908cd15c2e4d639de686371", "text": "In this paper, novel small-signal averaged models for dc-dc converters operating at variable switching frequency are derived. This is achieved by separately considering the on-time and the off-time of the switching period. The derivation is shown in detail for a synchronous buck converter and the model for a boost converter is also presented. The model for the buck converter is then used for the design of two digital feedback controllers, which exploit the additional insight into the converter dynamics. First, a digital multiloop PID controller is implemented, where the design is based on loop-shaping of the proposed frequency-domain transfer functions. And second, the design and the implementation of a digital LQG state-feedback controller, based on the proposed time-domain state-space model, is presented for the same converter topology. Experimental results are given for the digital multiloop PID controller integrated on an application-specific integrated circuit in a 0.13 μm CMOS technology, as well as for the state-feedback controller implemented on an FPGA. Tight output voltage regulation and an excellent dynamic performance are achieved, as the dynamics of the converter under variable frequency operation are considered during the design of both implementations.", "title": "" }, { "docid": "0f71e64aaf081b6624f442cb95b2220c", "text": "Objective\nElectronic health record (EHR)-based phenotyping infers whether a patient has a disease based on the information in his or her EHR. A human-annotated training set with gold-standard disease status labels is usually required to build an algorithm for phenotyping based on a set of predictive features. The time intensiveness of annotation and feature curation severely limits the ability to achieve high-throughput phenotyping. While previous studies have successfully automated feature curation, annotation remains a major bottleneck. 
In this paper, we present PheNorm, a phenotyping algorithm that does not require expert-labeled samples for training.\n\n\nMethods\nThe most predictive features, such as the number of International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes or mentions of the target phenotype, are normalized to resemble a normal mixture distribution with high area under the receiver operating curve (AUC) for prediction. The transformed features are then denoised and combined into a score for accurate disease classification.\n\n\nResults\nWe validated the accuracy of PheNorm with 4 phenotypes: coronary artery disease, rheumatoid arthritis, Crohn's disease, and ulcerative colitis. The AUCs of the PheNorm score reached 0.90, 0.94, 0.95, and 0.94 for the 4 phenotypes, respectively, which were comparable to the accuracy of supervised algorithms trained with sample sizes of 100-300, with no statistically significant difference.\n\n\nConclusion\nThe accuracy of the PheNorm algorithms is on par with algorithms trained with annotated samples. PheNorm fully automates the generation of accurate phenotyping algorithms and demonstrates the capacity for EHR-driven annotations to scale to the next level - phenotypic big data.", "title": "" }, { "docid": "4a86a0707e6ac99766f89e81cccc5847", "text": "Magnetic core loss is an emerging concern for integrated POL converters. As switching frequency increases, core loss is comparable to or even higher than winding loss. Accurate measurement of core loss is important for magnetic design and converter loss estimation. And exploring new high frequency magnetic materials need a reliable method to evaluate their losses. However, conventional method is limited to low frequency due to sensitivity to phase discrepancy. In this paper, a new method is proposed for high frequency (1MHz∼50MHz) core loss measurement. The new method reduces the phase induced error from over 100% to <5%. So with the proposed methods, the core loss can be accurately measured.", "title": "" }, { "docid": "b5d54f10aebd99d898dfb52d75e468e6", "text": "As the technology to secure information improves, hackers will employ less technical means to get access to unauthorized data. The use of Social Engineering as a non tech method of hacking has been increasingly used during the past few years. There are different types of social engineering methods reported but what is lacking is a unifying effort to understand these methods in the aggregate. This paper aims to classify these methods through taxonomy so that organizations can gain a better understanding of these attack methods and accordingly be vigilant against them.", "title": "" }, { "docid": "3a9bba31f77f4026490d7a0faf4aeaa4", "text": "We explore several different document representation models and two query expansion models for the task of recommending blogs to a user in response to a query. Blog relevance ranking differs from traditional document ranking in ad-hoc information retrieval in several ways: (1) the unit of output (the blog) is composed of a collection of documents (the blog posts) rather than a single document, (2) the query represents an ongoing – and typically multifaceted – interest in the topic rather than a passing ad-hoc information need and (3) due to the propensity of spam, splogs, and tangential comments, the blogosphere is particularly challenging to use as a source for high-quality query expansion terms. 
We address these differences at the document representation level, by comparing retrieval models that view either the blog or its constituent posts as the atomic units of retrieval, and at the query expansion level, by making novel use of the links and anchor text in Wikipedia to expand a user’s initial query. We develop two complementary models of blog retrieval that perform at comparable levels of precision and recall. We also show consistent and significant improvement across all models using our Wikipedia expansion strategy.", "title": "" }, { "docid": "26deedfae0fd167d35df79f28c75e09c", "text": "In content-based image retrieval, SIFT feature and the feature from deep convolutional neural network (CNN) have demonstrated promising performance. To fully explore both visual features in a unified framework for effective and efficient retrieval, we propose a collaborative index embedding method to implicitly integrate the index matrices of them. We formulate the index embedding as an optimization problem from the perspective of neighborhood sharing and solve it with an alternating index update scheme. After the iterative embedding, only the embedded CNN index is kept for on-line query, which demonstrates significant gain in retrieval accuracy, with very economical memory cost. Extensive experiments have been conducted on the public datasets with million-scale distractor images. The experimental results reveal that, compared with the recent state-of-the-art retrieval algorithms, our approach achieves competitive accuracy performance with less memory overhead and efficient query computation.", "title": "" }, { "docid": "710febdd18f40c9fc82f8a28039362cc", "text": "The paper deals with engineering an electric wheelchair from a common wheelchair and then developing a Brain Computer Interface (BCI) between the electric wheelchair and the human brain. A portable EEG headset and firmware signal processing together facilitate the movement of the wheelchair integrating mind activity and frequency of eye blinks of the patient sitting on the wheelchair with the help of Microcontroller Unit (MCU). The target population for the mind controlled wheelchair is the patients who are paralyzed below the neck and are unable to use conventional wheelchair interfaces. This project aims at creating a cost efficient solution, later intended to be distributed as an add-on conversion unit for a common manual wheelchair. A Neurosky mind wave headset is used to pick up EEG signals from the brain. This is a commercialized version of the Open-EEG Project. The signal obtained from EEG sensor is processed by the ARM microcontroller FRDM KL-25Z, a Freescale board. The microcontroller takes decision for determining the direction of motion of wheelchair based on floor detection and obstacle avoidance sensors mounted on wheelchair’s footplate. The MCU shows real time information on a color LCD interfaced to it. Joystick control of the wheelchair is also provided as an additional interface option that can be chosen from the menu system of the project.", "title": "" }, { "docid": "49f42fd1e0b684f24714bd9c1494fe4a", "text": "We propose a transition-based model for joint word segmentation, POS tagging and text normalization. Different from previous methods, the model can be trained on standard text corpora, overcoming the lack of annotated microblog corpora. To evaluate our model, we develop an annotated corpus based on microblogs. 
Experimental results show that our joint model can help improve the performance of word segmentation on microblogs, giving an error reduction in segmentation accuracy of 12.02%, compared to the traditional approach.", "title": "" }, { "docid": "9071d7349dccb07a5c3f93075e8d9458", "text": "AIM\nA discussion on how nurse leaders are using social media and developing digital leadership in online communities.\n\n\nBACKGROUND\nSocial media is relatively new and how it is used by nurse leaders and nurses in a digital space is underexplored.\n\n\nDESIGN\nDiscussion paper.\n\n\nDATA SOURCES\nSearches used CINAHL, the Royal College of Nursing webpages, Wordpress (for blogs) and Twitter from 2000-2015. Search terms used were Nursing leadership + Nursing social media.\n\n\nIMPLICATIONS FOR NURSING\nUnderstanding the development and value of nursing leadership in social media is important for nurses in formal and informal (online) leadership positions. Nurses in formal leadership roles in organizations such as the National Health Service are beginning to leverage social media. Social media has the potential to become a tool for modern nurse leadership, as it is a space where you can listen on a micro level to each individual. In addition to listening, leadership can be achieved on a much larger scale through the use of social media monitoring tools and exploration of data and crowd sourcing. Through the use of data and social media listening tools nursing leaders can seek understanding and insight into a variety of issues. Social media also places nurse leaders in a visible and accessible position as role models.\n\n\nCONCLUSION\nSocial media and formal nursing leadership do not have to be against each other, but they can work in harmony as both formal and online leadership possess skills that are transferable. If used wisely social media has the potential to become a tool for modern nurse leadership.", "title": "" }, { "docid": "5876bb91b0cbe851b8af677c93c5e708", "text": "This paper proposes an effective end-to-end face detection and recognition framework based on deep convolutional neural networks for home service robots. We combine the state-of-the-art region proposal based deep detection network with the deep face embedding network into an end-to-end system, so that the detection and recognition networks can share the same deep convolutional layers, enabling significant reduction of computation through sharing convolutional features. The detection network is robust to large occlusion, and scale, pose, and lighting variations. The recognition network does not require explicit face alignment, which enables an effective training strategy to generate a unified network. A practical robot system is also developed based on the proposed framework, where the system automatically asks for a minimum level of human supervision when needed, and no complicated region-level face annotation is required. Experiments are conducted over WIDER and LFW benchmarks, as well as a personalized dataset collected from an office setting, which demonstrate state-of-the-art performance of our system.", "title": "" }, { "docid": "0528bc602b9a48e30fbce70da345c0ee", "text": "The power system is a dynamic system and it is constantly being subjected to disturbances. It is important that these disturbances do not drive the system to unstable conditions. For this purpose, additional signals derived from speed deviation, excitation deviation and accelerating power are injected into voltage regulators. 
The device to provide these signals is referred to as a power system stabilizer. The use of power system stabilizers has become very common in the operation of large electric power systems. The conventional PSS, which uses lead-lag compensation, where the gain setting is designed for specific operating conditions, gives poor performance under different loading conditions. Therefore, it is very difficult to design a stabilizer that could present good performance in all operating points of electric power systems. In an attempt to cover a wide range of operating conditions, Fuzzy logic control has been suggested as a possible solution to overcome this problem, thereby using linguistic information and avoiding a complex system mathematical model, while giving good performance under different operating conditions.", "title": "" }, { "docid": "6d56e0db0ebdfe58152cb0faa73453c4", "text": "A chatbot is a computer application that interacts with users using natural language in a similar way to imitate a human travel agent. A successful implementation of a chatbot system can analyze user preferences and predict collective intelligence. In most cases, it can provide better user-centric recommendations. Hence, the chatbot is becoming an integral part of future consumer services. This paper is an implementation of an intelligent chatbot system in the travel domain on the Echo platform which would gather user preferences and model collective user knowledge base and recommend using the Restricted Boltzmann Machine (RBM) with Collaborative Filtering. With this chatbot based on DNN, we can improve human to machine interaction in the travel domain.", "title": "" }, { "docid": "81780f32d64eb7c5e3662268f48a67ec", "text": "Mobile ad hoc network (MANET) is a group of mobile nodes which communicate with each other without any supporting infrastructure. Routing in MANET is extremely challenging because of MANET's dynamic features, its limited bandwidth and power energy. Nature-inspired algorithms (swarm intelligence) such as ant colony optimization (ACO) algorithms have been shown to be a good technique for developing routing algorithms for MANETs. Swarm intelligence is a computational intelligence technique that involves collective behavior of autonomous agents that locally interact with each other in a distributed environment to solve a given problem in the hope of finding a global solution to the problem. In this paper, we propose a hybrid routing algorithm for MANETs based on ACO and zone routing framework of bordercasting. The algorithm, HOPNET, based on ants hopping from one zone to the next, consists of the local proactive route discovery within a node’s neighborhood and reactive communication between the neighborhoods. The algorithm has features extracted from ZRP and DSR protocols and is simulated on GlomoSim and is compared to AODV routing protocol. The algorithm is also compared to the well known hybrid routing algorithm, AntHocNet, which is not based on zone routing framework. Results indicate that HOPNET is highly scalable for large networks compared to AntHocNet. The results also indicate that the selection of the zone radius has considerable impact on the delivery packet ratio and HOPNET performs significantly better than AntHocNet for high and low mobility. The algorithm has been compared to random way point model and random drunken model and the results show the efficiency and inefficiency of bordercasting. 
Finally, HOPNET is compared to ZRP and the strength of nature-inspired algorithm", "title": "" }, { "docid": "404a662b55baea9402d449fae6192424", "text": "Emotion is expressed in multiple modalities, yet most research has considered at most one or two. This stems in part from the lack of large, diverse, well-annotated, multimodal databases with which to develop and test algorithms. We present a well-annotated, multimodal, multidimensional spontaneous emotion corpus of 140 participants. Emotion inductions were highly varied. Data were acquired from a variety of sensors of the face that included high-resolution 3D dynamic imaging, high-resolution 2D video, and thermal (infrared) sensing, and contact physiological sensors that included electrical conductivity of the skin, respiration, blood pressure, and heart rate. Facial expression was annotated for both the occurrence and intensity of facial action units from 2D video by experts in the Facial Action Coding System (FACS). The corpus further includes derived features from 3D, 2D, and IR (infrared) sensors and baseline results for facial expression and action unit detection. The entire corpus will be made available to the research community.", "title": "" }, { "docid": "456fd41267a82663fee901b111ff7d47", "text": "The tagging of Named Entities, the names of particular things or classes, is regarded as an important component technology for many NLP applications. The first Named Entity set had 7 types, organization, location, person, date, time, money and percent expressions. Later, in the IREX project artifact was added and ACE added two, GPE and facility, to pursue the generalization of the technology. However, 7 or 8 kinds of NE are not broad enough to cover general applications. We proposed about 150 categories of NE (Sekine et al. 2002) and now we have extended it again to 200 categories. Also we have developed dictionaries and an automatic tagger for NEs in Japanese.", "title": "" } ]
scidocsrr
7e380c297fa3bd050b8775eb5853f45a
Addressing vital sign alarm fatigue using personalized alarm thresholds
[ { "docid": "913b3e09f6b12744a8044d95a67d8dc7", "text": "Research has demonstrated that 72% to 99% of clinical alarms are false. The high number of false alarms has led to alarm fatigue. Alarm fatigue is sensory overload when clinicians are exposed to an excessive number of alarms, which can result in desensitization to alarms and missed alarms. Patient deaths have been attributed to alarm fatigue. Patient safety and regulatory agencies have focused on the issue of alarm fatigue, and it is a 2014 Joint Commission National Patient Safety Goal. Quality improvement projects have demonstrated that strategies such as daily electrocardiogram electrode changes, proper skin preparation, education, and customization of alarm parameters have been able to decrease the number of false alarms. These and other strategies need to be tested in rigorous clinical trials to determine whether they reduce alarm burden without compromising patient safety.", "title": "" } ]
[ { "docid": "f7d535f9a5eeae77defe41318d642403", "text": "On-line learning in domains where the target concept depends on some hidden context poses serious problems. A changing context can induce changes in the target concepts, producing what is known as concept drift. We describe a family of learning algorithms that flexibly react to concept drift and can take advantage of situations where contexts reappear. The general approach underlying all these algorithms consists of (1) keeping only a window of currently trusted examples and hypotheses; (2) storing concept descriptions and re-using them when a previous context re-appears; and (3) controlling both of these functions by a heuristic that constantly monitors the system's behavior. The paper reports on experiments that test the systems' performance under various conditions such as different levels of noise and different extent and rate of concept drift.", "title": "" }, { "docid": "596949afaabdbcc68cd8bda175400f30", "text": "We propose improved Deep Neural Network (DNN) training loss functions for more accurate single keyword spotting on resource-constrained embedded devices. The loss function modifications consist of a combination of multi-task training and weighted cross entropy. In the multi-task architecture, the keyword DNN acoustic model is trained with two tasks in parallel the main task of predicting the keyword-specific phone states, and an auxiliary task of predicting LVCSR senones. We show that multi-task learning leads to comparable accuracy over a previously proposed transfer learning approach where the keyword DNN training is initialized by an LVCSR DNN of the same input and hidden layer sizes. The combination of LVCSRinitialization and Multi-task training gives improved keyword detection accuracy compared to either technique alone. We also propose modifying the loss function to give a higher weight on input frames corresponding to keyword phone targets, with a motivation to balance the keyword and background training data. We show that weighted cross-entropy results in additional accuracy improvements. Finally, we show that the combination of 3 techniques LVCSR-initialization, multi-task training and weighted cross-entropy gives the best results, with significantly lower False Alarm Rate than the LVCSR-initialization technique alone, across a wide range of Miss Rates.", "title": "" }, { "docid": "78e8d8b0508e011f5dc0e63fa1f0a1ee", "text": "This paper proposes chordal surface transform for representation and discretization of thin section solids, such as automobile bodies, plastic injection mold components and sheet metal parts. A multiple-layered all-hex mesh with a high aspect ratio is a typical requirement for mold flow simulation of thin section objects. The chordal surface transform reduces the problem of 3D hex meshing to 2D quad meshing on the chordal surface. The chordal surface is generated by cutting a tet mesh of the input CAD model at its mid plane. Radius function and curvature of the chordal surface are used to provide sizing function for quad meshing. Two-way mapping between the chordal surface and the boundary is used to sweep the quad elements from the chordal surface onto the boundary, resulting in a layered all-hex mesh. The algorithm has been tested on industrial models, whose chordal surface is 2-manifold. The graphical results of the chordal surface and the multiple-layered all-hex mesh are presented along with the quality measures. 
The results show geometrically adaptive high aspect ratio all-hex mesh, whose average scaled Jacobean, is close to 1.0.", "title": "" }, { "docid": "ea048488791219be809072862a061444", "text": "Our object oriented programming approach have great ability to improve the programming behavior for modern system and software engineering but it does not give the proper interaction of real world .In real world , programming required powerful interlinking among properties and characteristics towards the various objects. Basically this approach of programming gives the better presentation of object with real world and provide the better relationship among the objects. I have explained the new concept of my neuro object oriented approach .This approach contains many new features like originty , new concept of inheritance , new concept of encapsulation , object relation with dimensions , originty relation with dimensions and time , category of NOOPA like high order thinking object and low order thinking object , differentiation model for achieving the various requirements from the user and a rotational model .", "title": "" }, { "docid": "20fbb79c467e70dccf28f438e3c99efb", "text": "Surface water is a source of drinking water in most rural communities in Nigeria. This study evaluated the total heterotrophic bacteria (THB) counts and some physico-chemical characteristics of Rivers surrounding Wilberforce Island, Nigeria.Samples were collected in July 2007 and analyzed using standard procedures. The result of the THB ranged from 6.389 – 6.434Log cfu/ml. The physico-chemical parameters results ranged from 6.525 – 7.105 (pH), 56.075 – 64.950μS/cm (Conductivity), 0.010 – 0.050‰ (Salinity), 103.752 – 117.252 NTU (Turbidity), 27.250 – 27.325 oC (Temperature), 10.200 – 14.225 mg/l (Dissolved oxygen), 28.180 – 32.550 mg/l (Total dissolved solid), 0.330 – 0.813 mg/l (Nitrate), 0.378 – 0.530 mg/l (Ammonium). Analysis of variance showed that there were significant variation (P<0.05) in the physicochemical properties except for Salinity and temperature between the two rivers. Also no significant different (P>0.05) exist in the THB density of both rivers; upstream (Agudama-Ekpetiama) and downstream (Akaibiri) of River Nun with regard to ammonium and nitrate. Significant positive correlation (P<0.01) exist between dissolved oxygen with ammonium, Conductivity with salinity and total dissolved solid, salinity with total dissolved solid, turbidity with nitrate, and pH with nitrate. The positive correlation (P<0.05) also exist between pH with turbidity. High turbidity and bacteria density in the water samples is an indication of pollution and contamination respectively. Hence, the consumption of these surface water without treatment could cause health related effects. Keyword: Drinking water sources, microorganisms, physico-chemistry, surface water, Wilberforce Island", "title": "" }, { "docid": "a310039e0fd3f732805a6088ad3d1777", "text": "Unsupervised learning of visual similarities is of paramount importance to computer vision, particularly due to lacking training data for fine-grained similarities. Deep learning of similarities is often based on relationships between pairs or triplets of samples. Many of these relations are unreliable and mutually contradicting, implying inconsistencies when trained without supervision information that relates different tuples or triplets to each other. 
To overcome this problem, we use local estimates of reliable (dis-)similarities to initially group samples into compact surrogate classes and use local partial orders of samples to classes to link classes to each other. Similarity learning is then formulated as a partial ordering task with soft correspondences of all samples to classes. Adopting a strategy of self-supervision, a CNN is trained to optimally represent samples in a mutually consistent manner while updating the classes. The similarity learning and grouping procedure are integrated in a single model and optimized jointly. The proposed unsupervised approach shows competitive performance on detailed pose estimation and object classification.", "title": "" }, { "docid": "d73b277bf829a3295dfa86b33ad19c4a", "text": "Biodiesel is a renewable and environmentally friendly liquid fuel. However, the feedstock, predominantly crop oil, is a limited and expensive food resource which prevents large scale application of biodiesel. Development of non-food feedstocks are therefore, needed to fully utilize biodiesel’s potential. In this study, the larvae of a high fat containing insect, black soldier fly (Hermetia illucens) (BSFL), was evaluated for biodiesel production. Specifically, the BSFL was grown on organic wastes for 10 days and used for crude fat extraction by petroleum ether. The extracted crude fat was then converted into biodiesel by acid-catalyzed (1% H2SO4) esterification and alkaline-catalyzed (0.8% NaOH) transesterification, resulting in 35.5 g, 57.8 g and 91.4 g of biodiesel being produced from 1000 BSFL growing on 1 kg of cattle manure, pig manure and chicken manure, respectively. The major ester components of the resulting biodiesel were lauric acid methyl ester (35.5%), oleinic acid methyl ester (23.6%) and palmitic acid methyl ester (14.8%). Fuel properties of the BSFL fat-based biodiesel, such as density (885 kg/m), viscosity (5.8 mm/s), ester content (97.2%), flash point (123 C), and cetane number (53) were comparable to those of rapeseed-oil-based biodiesel. These results demonstrated that the organic waste-grown BSFL could be a feasible non-food feedstock for biodiesel production. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "dfc26119288ee136d00c6306377b93f6", "text": "Part-of-speech tagging is a basic step in Natural Language Processing that is often essential. Labeling the word forms of a text with fine-grained word-class information adds new value to it and can be a prerequisite for downstream processes like a dependency parser. Corpus linguists and lexicographers also benefit greatly from the improved search options that are available with tagged data. The Albanian language has some properties that pose difficulties for the creation of a part-of-speech tagset. In this paper, we discuss those difficulties and present a proposal for a part-of-speech tagset that can adequately represent the underlying linguistic phenomena.", "title": "" }, { "docid": "62999806021ff2533ddf7f06117f7d1a", "text": "In response to the new challenges in the design and operation of communication networks, and taking inspiration from how living beings deal with complexity and scalability, in this paper we introduce an innovative system concept called COgnition-BAsed NETworkS (COBANETS). 
The proposed approach develops around the systematic application of advanced machine learning techniques and, in particular, unsupervised deep learning and probabilistic generative models for system-wide learning, modeling, optimization, and data representation. Moreover, in COBANETS, we propose to combine this learning architecture with the emerging network virtualization paradigms, which make it possible to actuate automatic optimization and reconfiguration strategies at the system level, thus fully unleashing the potential of the learning approach. Compared with the past and current research efforts in this area, the technical approach outlined in this paper is deeply interdisciplinary and more comprehensive, calling for the synergic combination of expertise of computer scientists, communications and networking engineers, and cognitive scientists, with the ultimate aim of breaking new ground through a profound rethinking of how the modern understanding of cognition can be used in the management and optimization of telecommunication networks.", "title": "" }, { "docid": "bd3cc8370fd8669768f62d465f2c5531", "text": "Cognitive radio technology has been proposed to improve spectrum efficiency by having the cognitive radios act as secondary users to opportunistically access under-utilized frequency bands. Spectrum sensing, as a key enabling functionality in cognitive radio networks, needs to reliably detect signals from licensed primary radios to avoid harmful interference. However, due to the effects of channel fading/shadowing, individual cognitive radios may not be able to reliably detect the existence of a primary radio. In this paper, we propose an optimal linear cooperation framework for spectrum sensing in order to accurately detect the weak primary signal. Within this framework, spectrum sensing is based on the linear combination of local statistics from individual cognitive radios. Our objective is to minimize the interference to the primary radio while meeting the requirement of opportunistic spectrum utilization. We formulate the sensing problem as a nonlinear optimization problem. By exploiting the inherent structures in the problem formulation, we develop efficient algorithms to solve for the optimal solutions. To further reduce the computational complexity and obtain solutions for more general cases, we finally propose a heuristic approach, where we instead optimize a modified deflection coefficient that characterizes the probability distribution function of the global test statistics at the fusion center. Simulation results illustrate significant cooperative gain achieved by the proposed strategies. The insights obtained in this paper are useful for the design of optimal spectrum sensing in cognitive radio networks.", "title": "" }, { "docid": "1e30732092d2bcdeff624364c27e4c9c", "text": "Beliefs that individuals hold about whether emotions are malleable or fixed, also referred to as emotion malleability beliefs, may play a crucial role in individuals' emotional experiences and their engagement in changing their emotions. The current review integrates affective science and clinical science perspectives to provide a comprehensive review of how emotion malleability beliefs relate to emotionality, emotion regulation, and specific clinical disorders and treatment. 
Specifically, we discuss how holding more malleable views of emotion could be associated with more active emotion regulation efforts, greater motivation to engage in active regulatory efforts, more effort expended regulating emotions, and lower levels of pathological distress. In addition, we explain how extending emotion malleability beliefs into the clinical domain can complement and extend current conceptualizations of major depressive disorder, social anxiety disorder, and generalized anxiety disorder. This may prove important given the increasingly central role emotion dysregulation has been given in conceptualization and intervention for these psychiatric conditions. Additionally, discussion focuses on how emotion beliefs could be more explicitly addressed in existing cognitive therapies. Promising future directions for research are identified throughout the review.", "title": "" }, { "docid": "0382ad43b6d31a347d9826194a7261ce", "text": "In this paper, we present a representation for three-dimensional geometric animation sequences. Different from standard key-frame techniques, this approach is based on the determination of principal animation components and decouples the animation from the underlying geometry. The new representation supports progressive animation compression with spatial, as well as temporal, level-of-detail and high compression ratios. The distinction of animation and geometry allows for mapping animations onto other objects.", "title": "" }, { "docid": "faf0e45405b3c31135a20d7bff6e7a5a", "text": "Law enforcement is in a perpetual race with criminals in the application of digital technologies, and requires the development of tools to systematically search digital devices for pertinent evidence. Another part of this race, and perhaps more crucial, is the development of a methodology in digital forensics that encompasses the forensic analysis of all genres of digital crime scene investigations. This paper explores the development of the digital forensics process, compares and contrasts four particular forensic methodologies, and finally proposes an abstract model of the digital forensic procedure. This model attempts to address some of the shortcomings of previous methodologies, and provides the following advantages: a consistent and standardized framework for digital forensic tool development; a mechanism for applying the framework to future digital technologies; a generalized methodology that judicial members can use to relate technology to non-technical observers; and, the potential for incorporating nondigital electronic technologies within the abstraction model of the digital forensic procedure. Introduction The digital age can be characterized as the application of computer technology as a tool that enhances traditional methodologies. 
The incorporation of computer systems as a tool into private, commercial, educational, governmental, and other facets of modern life has improved", "title": "" }, { "docid": "3ca2d95885303f1ab395bd31d32df0c2", "text": "Curiosity to predict personality, behavior and need for this is not as new as invent of social media. Personality prediction to better accuracy could be very useful for society. There are many papers and researches conducted on usefulness of the data for various purposes like in marketing, dating suggestions, organization development, personalized recommendations and health care to name a few. With the introduction and extreme popularity of Online Social Networking Sites like Facebook, Twitter and LinkedIn numerous researches were conducted based on public data available, online social networking applications and social behavior towards friends and followers to predict the personality. Structured mining of the social media content can provide us the ability to predict some personality traits. This survey aims at providing researchers with an overview of various strategies used for studies and research concentrating on predicting user personality and behavior using online social networking site content. There positives, limitations are well summarized as reported in the literature. Finally, a brief discussion including open issues for further research in the area of social networking site based personality prediction preceding conclusion.", "title": "" }, { "docid": "0ca477c017da24940bb5af79b2c8826a", "text": "Code comprehension is critical in software maintenance. Towards providing tools and approaches to support maintenance tasks, researchers have investigated various research lines related to how software code can be described in an abstract form. So far, studies on change pattern mining, code clone detection, or semantic patch inference have mainly adopted text-, token- and tree-based representations as the basis for computing similarity among code fragments. Although, in general, existing techniques form clusters of “similar” code, our experience in patch mining has revealed that clusters of patches formed by such techniques do not usually carry explainable semantics that can be associated to bug-fixing patterns. In this paper, we propose a novel, automated approach for mining semantically-relevant fix patterns based on an iterative, three-fold, clustering strategy. Our technique, FixMiner, leverages different tree representations for each round of clustering: the Abstract syntax tree, the edit actions tree, and the code context tree. We have evaluated FixMiner on thousands of software patches collected from open source projects. Preliminary results show that we are able to mine accurate patterns, efficiently exploiting change information in AST diff trees. Eventually, FixMiner yields patterns which can be associated to the semantics of the bugs that the associated patches address. We further leverage the mined patterns to implement an automated program repair pipeline with which we are able to correctly fix 25 bugs from the Defects4J benchmark. Beyond this quantitative performance, we show that the mined fix patterns are sufficiently relevant to produce patches with a high probability of correctness: 80% of FixMiner’s generated plausible patches are correct, while the closest related works, namely HDRepair and SimFix, achieve respectively 26% and 70% of correctness.", "title": "" },
{ "docid": "79eafa032a3f0cb367a008e5a7345dd5", "text": "Data Mining techniques are widely used in educational field to find new hidden patterns from student’s data. The hidden patterns that are discovered can be used to understand the problem arise in the educational field. This paper surveys the three elements needed to make prediction on Students’ Academic Performances which are parameters, methods and tools. This paper also proposes a framework for predicting the performance of first year bachelor students in computer science course. Naïve Bayes Classifier is used to extract patterns using the Data Mining Weka tool. The framework can be used as a basis for the system implementation and prediction of Students’ Academic Performance in Higher Learning Institutions.", "title": "" }, { "docid": "3e70a22831b064bff3ff784a932d068b", "text": "An ultrawideband (UWB) antenna that rejects extremely sharply the two narrow and closely-spaced U.S. WLAN 802.11a bands is presented. The antenna is designed on a single surface (it is uniplanar) and uses only linear sections for easy scaling and fine-tuning. Distributed-element and lumped-element equivalent circuit models of this dual band-reject UWB antenna are presented and used to support the explanation of the physical principles of operation of the dual band-rejection mechanism thoroughly. The circuits are evaluated by comparing with the response of the presented UWB antenna that has very high selectivity and achieves dual-frequency rejection of the WLAN 5.25 GHz and 5.775 GHz bands, while it receives signal from the intermediate band between 5.35-5.725 GHz. The rejection is achieved using double open-circuited stubs, which is uncommon and are chosen based on their narrowband performance. The antenna was fabricated on a single side of a thin, flexible, LCP substrate. The measured achieved rejection is the best reported for a dual-band reject antenna with so closely-spaced rejected bands. The measured group delay of the antenna validates its suitability for UWB links. Such antennas improve both UWB and WLAN communication links at practically zero cost.", "title": "" }, { "docid": "77ce917536f59d5489d0d6f7000c7023", "text": "In this supplementary document, we present additional results to complement the paper. First, we provide the detailed configurations and parameters of the generator and discriminator in the proposed Generative Adversarial Network. Second, we present the qualitative comparisons with the state-of-the-art CNN-based optical flow methods. The complete results and source code are publicly available on http://vllab.ucmerced.edu/wlai24/semiFlowGAN.", "title": "" }, { "docid": "cc4458a843a2a6ffa86b4efd1956ffca", "text": "There is a growing interest in the use of chronic deep brain stimulation (DBS) for the treatment of medically refractory movement disorders and other neurological and psychiatric conditions. Fundamental questions remain about the physiologic effects and safety of DBS. Previous basic research studies have focused on the direct polarization of neuronal membranes by electrical stimulation. The goal of this paper is to provide information on the thermal effects of DBS using finite element models to investigate the magnitude and spatial distribution of DBS induced temperature changes. 
The parameters investigated include: stimulation waveform, lead selection, brain tissue electrical and thermal conductivity, blood perfusion, metabolic heat generation during the stimulation. Our results show that clinical deep brain stimulation protocols will increase the temperature of surrounding tissue by up to 0.8°C depending on stimulation/tissue parameters", "title": "" }, { "docid": "5d9112213e6828d5668ac4a33d4582f9", "text": "This paper describes four patients whose chief symptoms were steatorrhoea and loss of weight. Despite the absence of a history of abdominal pain investigations showed that these patients had chronic pancreatitis, which responded to medical treatment. The pathological findings in two of these cases and in six which came to necropsy are reported.", "title": "" } ]
scidocsrr
56a4b805c07d673759188dfb1a473c49
A Review of Data Classification Using K-Nearest Neighbour Algorithm
[ { "docid": "c698f7d6b487cc7c87d7ff215d7f12b2", "text": "This paper reports a controlled study with statistical signi cance tests on ve text categorization methods: the Support Vector Machines (SVM), a k-Nearest Neighbor (kNN) classi er, a neural network (NNet) approach, the Linear Leastsquares Fit (LLSF) mapping and a Naive Bayes (NB) classier. We focus on the robustness of these methods in dealing with a skewed category distribution, and their performance as function of the training-set category frequency. Our results show that SVM, kNN and LLSF signi cantly outperform NNet and NB when the number of positive training instances per category are small (less than ten), and that all the methods perform comparably when the categories are su ciently common (over 300 instances).", "title": "" }, { "docid": "e494f926c9b2866d2c74032d200e4d0a", "text": "This chapter describes a new algorithm for training Support Vector Machines: Sequential Minimal Optimization, or SMO. Training a Support Vector Machine (SVM) requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possible QP problems. These small QP problems are solved analytically, which avoids using a time-consuming numerical QP optimization as an inner loop. The amount of memory required for SMO is linear in the training set size, which allows SMO to handle very large training sets. Because large matrix computation is avoided, SMO scales somewhere between linear and quadratic in the training set size for various test problems, while a standard projected conjugate gradient (PCG) chunking algorithm scales somewhere between linear and cubic in the training set size. SMO's computation time is dominated by SVM evaluation, hence SMO is fastest for linear SVMs and sparse data sets. For the MNIST database, SMO is as fast as PCG chunking; while for the UCI Adult database and linear SVMs, SMO can be more than 1000 times faster than the PCG chunking algorithm.", "title": "" }, { "docid": "3b5b3802d4863a6569071b346b65600d", "text": "In vector space model (VSM), text representation is the task of transforming the content of a textual document into a vector in the term space so that the document could be recognized and classified by a computer or a classifier. Different terms (i.e. words, phrases, or any other indexing units used to identify the contents of a text) have different importance in a text. The term weighting methods assign appropriate weights to the terms to improve the performance of text categorization. In this study, we investigate several widely-used unsupervised (traditional) and supervised term weighting methods on benchmark data collections in combination with SVM and kNN algorithms. In consideration of the distribution of relevant documents in the collection, we propose a new simple supervised term weighting method, i.e. tf.rf, to improve the terms' discriminating power for text categorization task. From the controlled experimental results, these supervised term weighting methods have mixed performance. Specifically, our proposed supervised term weighting method, tf.rf, has a consistently better performance than other term weighting methods while other supervised term weighting methods based on information theory or statistical metric perform the worst in all experiments. On the other hand, the popularly used tf.idf method has not shown a uniformly good performance in terms of different data sets.", "title": "" } ]
[ { "docid": "26f87882e45628dd75775dd26d8ac05f", "text": "Game theory is one of the key paradigms behind many scientific disciplines from biology to behavioral sciences to economics. In its evolutionary form and especially when the interacting agents are linked in a specific social network the underlying solution concepts and methods are very similar to those applied in non-equilibrium statistical physics. This review gives a tutorial-type overview of the field for physicists. The first four sections introduce the necessary background in classical and evolutionary game theory from the basic definitions to the most important results. The fifth section surveys the topological complications implied by non-mean-field-type social network structures in general. The next three sections discuss in detail the dynamic behavior of three prominent classes of models: the Prisoner’s Dilemma, the Rock–Scissors–Paper game, and Competing Associations. The major theme of the review is in what sense and how the graph structure of interactions can modify and enrich the picture of long term behavioral patterns emerging in evolutionary games. © 2007 Published by Elsevier B.V. PACS: 02.50.Le; 89.65.−s; 87.23.Kg; 05.65.+b; 87.23.Ge", "title": "" }, { "docid": "f8cabe441efdd4bbd50865d32a899bc6", "text": "In this paper, a novel balanced amplifier antenna has been proposed for X-band. The antenna module consists of a microstrip fed ground slot excited cylindrical dielectric resonator (CDR). The unbalanced antenna output is converted into balanced outputs by a 180 degree rat-race hybrid which feed a differential low noise amplifier (LNA) consisting of two cascade stages. A marchand balun type biasing arrangement is employed for the differential LNA to increase common mode rejection ratio (CMRR). The differential LNA provides peak differential gain of 30 dB, CMRR of 16.8 dB and 12.6% 3 dB FBW. The insertion gain of the hybrid fed LNA is about 21 dB, 3 dB FBW is 8.4% and overall noise figure of the system is about 4 dB. The CDRA has peak gain of 5.47 dBi. The proposed design is suitable for modern receiver front-end applications requiring balanced outputs.", "title": "" }, { "docid": "29c0952b1593ccd830cc0fb392091065", "text": "Effective IT governance will ensure alignment between IT and business goals. Organizations with ineffective IT governance will suffer due to poor performance of IT resources such as inaccurate information quality, inefficient operating costs, runaway IT project and even the demise of its IT department. This study seeks to examine empirically the individual IT governance mechanisms that influence the overall effectiveness of IT governance. Furthermore, this study examines the relationship of effective IT governance, the extent of IT outsourcing decisions within the organizations, and the level of IT Intensity in the organizations. We used structural equation modeling analysis to examine 110 responses from members of ISACA (Information Systems and Audit Control Association) Australia in which their organizations have outsourced their IT functions. 
Results suggest significant positive relationships between the overall level of effective IT governance and the following mechanisms: the involvement of senior management in IT, the existence of ethic or culture of compliance in IT, and corporate communication systems.", "title": "" }, { "docid": "68982ce5d5a61584f125856b10e0653f", "text": "The mature human brain is organized into a collection of specialized functional networks that flexibly interact to support various cognitive functions. Studies of development often attempt to identify the organizing principles that guide the maturation of these functional networks. In this report, we combine resting state functional connectivity MRI (rs-fcMRI), graph analysis, community detection, and spring-embedding visualization techniques to analyze four separate networks defined in earlier studies. As we have previously reported, we find, across development, a trend toward 'segregation' (a general decrease in correlation strength) between regions close in anatomical space and 'integration' (an increased correlation strength) between selected regions distant in space. The generalization of these earlier trends across multiple networks suggests that this is a general developmental principle for changes in functional connectivity that would extend to large-scale graph theoretic analyses of large-scale brain networks. Communities in children are predominantly arranged by anatomical proximity, while communities in adults predominantly reflect functional relationships, as defined from adult fMRI studies. In sum, over development, the organization of multiple functional networks shifts from a local anatomical emphasis in children to a more \"distributed\" architecture in young adults. We argue that this \"local to distributed\" developmental characterization has important implications for understanding the development of neural systems underlying cognition. Further, graph metrics (e.g., clustering coefficients and average path lengths) are similar in child and adult graphs, with both showing \"small-world\"-like properties, while community detection by modularity optimization reveals stable communities within the graphs that are clearly different between young children and young adults. These observations suggest that early school age children and adults both have relatively efficient systems that may solve similar information processing problems in divergent ways.", "title": "" }, { "docid": "0d9bf4c634b2ec665c54511568bd00cf", "text": "BACKGROUND\nEpidemiologic studies have shown a relationship between glycated hemoglobin levels and cardiovascular events in patients with type 2 diabetes. We investigated whether intensive therapy to target normal glycated hemoglobin levels would reduce cardiovascular events in patients with type 2 diabetes who had either established cardiovascular disease or additional cardiovascular risk factors.\n\n\nMETHODS\nIn this randomized study, 10,251 patients (mean age, 62.2 years) with a median glycated hemoglobin level of 8.1% were assigned to receive intensive therapy (targeting a glycated hemoglobin level below 6.0%) or standard therapy (targeting a level from 7.0 to 7.9%). Of these patients, 38% were women, and 35% had had a previous cardiovascular event. The primary outcome was a composite of nonfatal myocardial infarction, nonfatal stroke, or death from cardiovascular causes. 
The finding of higher mortality in the intensive-therapy group led to a discontinuation of intensive therapy after a mean of 3.5 years of follow-up.\n\n\nRESULTS\nAt 1 year, stable median glycated hemoglobin levels of 6.4% and 7.5% were achieved in the intensive-therapy group and the standard-therapy group, respectively. During follow-up, the primary outcome occurred in 352 patients in the intensive-therapy group, as compared with 371 in the standard-therapy group (hazard ratio, 0.90; 95% confidence interval [CI], 0.78 to 1.04; P=0.16). At the same time, 257 patients in the intensive-therapy group died, as compared with 203 patients in the standard-therapy group (hazard ratio, 1.22; 95% CI, 1.01 to 1.46; P=0.04). Hypoglycemia requiring assistance and weight gain of more than 10 kg were more frequent in the intensive-therapy group (P<0.001).\n\n\nCONCLUSIONS\nAs compared with standard therapy, the use of intensive therapy to target normal glycated hemoglobin levels for 3.5 years increased mortality and did not significantly reduce major cardiovascular events. These findings identify a previously unrecognized harm of intensive glucose lowering in high-risk patients with type 2 diabetes. (ClinicalTrials.gov number, NCT00000620.)", "title": "" }, { "docid": "714c06da1a728663afd8dbb1cd2d472d", "text": "This paper proposes hybrid semiMarkov conditional random fields (SCRFs) for neural sequence labeling in natural language processing. Based on conventional conditional random fields (CRFs), SCRFs have been designed for the tasks of assigning labels to segments by extracting features from and describing transitions between segments instead of words. In this paper, we improve the existing SCRF methods by employing word-level and segment-level information simultaneously. First, word-level labels are utilized to derive the segment scores in SCRFs. Second, a CRF output layer and an SCRF output layer are integrated into an unified neural network and trained jointly. Experimental results on CoNLL 2003 named entity recognition (NER) shared task show that our model achieves state-of-the-art performance when no external knowledge is used.", "title": "" }, { "docid": "4929aae1291a93873ab77961e9aa6e60", "text": "We describe a nonnegative variant of the ”Sparse PCA” problem. The goal is to create a low dimensional representation from a collection of points which on the one hand maximizes the variance of the projected points and on the other uses only parts of the original coordinates, and thereby creating a sparse representation. What distinguishes our problem from other Sparse PCA formulations is that the projection involves only nonnegative weights of the original coordinates — a desired quality in various fields, including economics, bioinformatics and computer vision. Adding nonnegativity contributes to sparseness, where it enforces a partitioning of the original coordinates among the new axes. We describe a simple yet efficient iterative coordinate-descent type of scheme which converges to a local optimum of our optimization criteria, giving good results on large real world datasets.", "title": "" }, { "docid": "8d40b29088a331578e502abb2148ea8c", "text": "Governments are increasingly realizing the importance of utilizing Information and Communication Technologies (ICT) as a tool to better address user’s/citizen’s needs. As citizen’s expectations grow, governments need to deliver services of high quality level to motivate more users to utilize these available e-services. 
In spite of this, governments still fall short in their service quality level offered to citizens/users. Thus understanding and measuring service quality factors become crucial as the number of services offered is increasing while not realizing what citizens/users really look for when they utilize these services. The study presents an extensive literature review on approaches used to evaluate e-government services throughout a phase of time. The study also suggested those quality/factors indicators government’s need to invest in of high priority in order to meet current and future citizen’s expectations of service quality.", "title": "" }, { "docid": "1802e14988d1c5c1469859616b6441a2", "text": "Twitter is a microblogging platform in which users can post status messages, called “tweets,” to their friends. It has provided an enormous dataset of the so-called sentiments, whose classification can take place through supervised learning. To build supervised learning models, classification algorithms require a set of representative labeled data. However, labeled data are usually difficult and expensive to obtain, which motivates the interest in semi-supervised learning. This type of learning uses unlabeled data to complement the information provided by the labeled data in the training process; therefore, it is particularly useful in applications including tweet sentiment analysis, where a huge quantity of unlabeled data is accessible. Semi-supervised learning for tweet sentiment analysis, although appealing, is relatively new. We provide a comprehensive survey of semi-supervised approaches applied to tweet classification. Such approaches consist of graph-based, wrapper-based, and topic-based methods. A comparative study of algorithms based on self-training, co-training, topic modeling, and distant supervision highlights their biases and sheds light on aspects that the practitioner should consider in real-world applications.", "title": "" }, { "docid": "dfb16d97d293776e255397f1dc49bbbf", "text": "Self-service automatic teller machines (ATMs) have dramatically altered the ways in which customers interact with banks. ATMs provide the convenience of completing some banking transactions remotely and at any time. AT&T Global Information Solutions (GIS) is the world's leading provider of ATMs. These machines support such familiar services as cash withdrawals and balance inquiries. Further technological development has extended the utility and convenience of ATMs produced by GIS by facilitating check cashing and depositing, as well as direct bill payment, using an on-line system. These enhanced services, discussed in this paper, are made possible primarily through sophisticated optical character recognition (OCR) technology. Developed by an AT&T team that included GIS, AT&T Bell Laboratories Quality, Engineering, Software, and Technologies (QUEST), and AT&T Bell Laboratories Research, OCR technology was crucial to the development of these advanced ATMs.", "title": "" }, { "docid": "2fdf4618c0519bfdee5c83bef9012e0f", "text": "In most Western countries females have higher rates of suicidal ideation and behavior than males, yet mortality from suicide is typically lower for females than for males. This article explores the gender paradox of suicidal behavior, examines its validity, and critically examines some of the explanations, concluding that the gender paradox of suicidal behavior is a real phenomenon and not a mere artifact of data collection. 
At the same time, the gender paradox in suicide is a more culture-bound phenomenon than has been traditionally assumed; cultural expectations about gender and suicidal behavior strongly determine its existence. Evidence from the United States and Canada suggests that the gender gap may be more prominent in communities where different suicidal behaviors are expected of females and males. These divergent expectations may affect the scenarios chosen by females and males, once suicide becomes a possibility, as well as the interpretations of those who are charged with determining whether a particular behavior is suicidal (e.g., coroners). The realization that cultural influences play an important role in the gender paradox of suicidal behaviors holds important implications for research and for public policy.", "title": "" }, { "docid": "24b62b4d3ecee597cffef75e0864bdd8", "text": "Botnets can cause significant security threat and huge loss to organizations, and are difficult to discover their existence. Therefore they have become one of the most severe threats on the Internet. The core component of botnets is their command and control channel. Botnets often use IRC (Internet Relay Chat) as a communication channel through which the botmaster can control the bots to launch attacks or propagate more infections. In this paper, anomaly score based botnet detection is proposed to identify the botnet activities by using the similarity measurement and the periodic characteristics of botnets. To improve the detection rate, the proposed system employs two-level correlation relating the set of hosts with same anomaly behaviors. The proposed method can differentiate the malicious network traffic generated by infected hosts (bots) from that by normal IRC clients, even in a network with only a very small number of bots. The experiment results show that, regardless the size of the botnet in a network, the proposed approach efficiently detects abnormal IRC traffic and identifies botnet activities. © 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "2bb21a94c803c74ad6c286c7a04b8c5b", "text": "Recently, social media, such as Twitter, has been successfully used as a proxy to gauge the impacts of disasters in real time. However, most previous analyses of social media during disaster response focus on the magnitude and location of social media discussion. In this work, we explore the impact that disasters have on the underlying sentiment of social media streams. During disasters, people may assume negative sentiments discussing lives lost and property damage, other people may assume encouraging responses to inspire and spread hope. Our goal is to explore the underlying trends in positive and negative sentiment with respect to disasters and geographically related sentiment. In this paper, we propose a novel visual analytics framework for sentiment visualization of geo-located Twitter data. The proposed framework consists of two components, sentiment modeling and geographic visualization. In particular, we provide an entropy-based metric to model sentiment contained in social media data. The extracted sentiment is further integrated into a visualization framework to explore the uncertainty of public opinion. 
We explored Ebola Twitter dataset to show how visual analytics techniques and sentiment modeling can reveal interesting patterns in disaster scenarios.", "title": "" }, { "docid": "c1220e87212f8a9f005cc8a62eda58f8", "text": "This paper argues that double-object verbs decompose into two heads, an external-argumentselecting CAUSE predicate (vCAUSE) and a prepositional element, PHAVE. Two primary types of argument are presented. First, a consideration of the well-known Oerhle’s generalization effects in English motivate such a decomposition, in combination with a consideration of idioms in ditransitive structures. These facts mitigate strongly against a Transform approach to the dative alternation, like that of Larson 1988, and point towards an Alternative Projection approach, similar in many respects to that of Pesetsky 1995. Second, the PHAVE prepositional element is identified with the prepositional component of verbal have, treated in the literature by Benveniste 1966; Freeze 1992; Kayne 1993; Guéron 1995. Languages without PHAVE do not allow possessors to c-command possessees, and show no evidence of a double-object construction, in which Goals c-command Themes. On the current account, these two facts receive the same explanation: PHAVE does not form part of the inventory of morphosyntactic primitives of these languages.", "title": "" }, { "docid": "b155a3b9aedc681acd5b6d26448a1379", "text": "This paper describes a virtual reality based programming by demonstration system for grasp recognition in manipulation tasks and robot pregrasp planning. The system classifies the human hand postures taking advantage of virtual grasping and information about the contact points and normals computed in the virtual reality environment. A pregrasp planning algorithm mimicking the human hand motion is also proposed. Reconstruction of human hand trajectories, approaching the objects in the environment, is based on NURBS curves and a data smoothing algorithm. Some experiments involving grasp classification and pregrasp planning, while avoiding obstacles in the workspace, show the viability and effectiveness of the approach", "title": "" }, { "docid": "2eb344b6701139be184624307a617c1b", "text": "This work combines the central ideas from two different areas, crowd simulation and social network analysis, to tackle some existing problems in both areas from a new angle. We present a novel spatio-temporal social crowd simulation framework, Social Flocks, to revisit three essential research problems, (a) generation of social networks, (b) community detection in social networks, (c) modeling collective social behaviors in crowd simulation. Our framework produces social networks that satisfy the properties of high clustering coefficient, low average path length, and power-law degree distribution. It can also be exploited as a novel dynamic model for community detection. Finally our framework can be used to produce real-life collective social behaviors over crowds, including community-guided flocking, leader following, and spatio-social information propagation. Social Flocks can serve as visualization of simulated crowds for domain experts to explore the dynamic effects of the spatial, temporal, and social factors on social networks. In addition, it provides an experimental platform of collective social behaviors for social gaming and movie animations. 
Social Flocks demo is at http://mslab.csie.ntu.edu.tw/socialflocks/ .", "title": "" }, { "docid": "03826954a304a4d6bdb2c1f55bbe8001", "text": "This paper gives an overview of the channel access methods of three wireless technologies that are likely to be used in the environment of vehicle networks: IEEE 802.15.4, IEEE 802.11 and Bluetooth. Researching the coexistence of IEEE 802.15.4 with IEEE 802.11 and Bluetooth, results of experiments conducted in a radio frequency anechoic chamber are presented. The power densities of the technologies on a single IEEE 802.15.4 channel are compared. It is shown that the pure existence of an IEEE 802.11 access point leads to collisions due to different timing scales. Furthermore, the packet drop rate caused by Bluetooth is analyzed and an estimation formula for it is given.", "title": "" }, { "docid": "291ece850c1c6afcda49ac2e8a74319e", "text": "The aim of this paper is to explore how well the task of text vs. nontext distinction can be solved in online handwritten documents using only offline information. Two systems are introduced. The first system generates a document segmentation first. For this purpose, four methods originally developed for machine printed documents are compared: x-y cut, morphological closing, Voronoi segmentation, and whitespace analysis. A state-of-the art classifier then distinguishes between text and non-text zones. The second system follows a bottom-up approach that classifies connected components. Experiments are performed on a new dataset of online handwritten documents containing different content types in arbitrary arrangements. The best system assigns 94.3% of the pixels to the correct class.", "title": "" }, { "docid": "e5b95c0b6f9843ccf81f652c92768f66", "text": "Many visual applications have benefited from the outburst of web images, yet the imprecise and incomplete tags arbitrarily provided by users, as the thorn of the rose, may hamper the performance of retrieval or indexing systems relying on such data. In this paper, we propose a novel locality sensitive low-rank model for image tag completion, which approximates the global nonlinear model with a collection of local linear models. To effectively infuse the idea of locality sensitivity, a simple and effective pre-processing module is designed to learn suitable representation for data partition, and a global consensus regularizer is introduced to mitigate the risk of overfitting. Meanwhile, low-rank matrix factorization is employed as local models, where the local geometry structures are preserved for the low-dimensional representation of both tags and samples. Extensive empirical evaluations conducted on three datasets demonstrate the effectiveness and efficiency of the proposed method, where our method outperforms pervious ones by a large margin.", "title": "" }, { "docid": "3412cf0349cee1c21c433477696641b4", "text": "Three experiments examined the impact of excessive violence in sport video games on aggression-related variables. Participants played either a nonviolent simulation-based sports video game (baseball or football) or a matched excessively violent sports video game. Participants then completed measures assessing aggressive cognitions (Experiment 1), aggressive affect and attitudes towards violence in sports (Experiment 2), or aggressive behavior (Experiment 3). Playing an excessively violent sports video game increased aggressive affect, aggressive cognition, aggressive behavior, and attitudes towards violence in sports. 
Because all games were competitive, these findings indicate that violent content uniquely leads to increases in several aggression-related variables, as predicted by the General Aggression Model and related social–cognitive models. 2009 Elsevier Inc. All rights reserved. In 2002, ESPN aired an investigative piece examining the impact of excessively violent sports video games on youth’s attitudes towards sports (ESPN, 2002). At the time, Midway Games produced several sports games (e.g., NFL Blitz, MLB Slugfest, and NHL Hitz) containing excessive and unrealistic violence, presumably to appeal to non-sport fan video game players. These games were officially licensed by the National Football League, Major League Baseball, and the National Hockey League, which permitted Midway to implement team logos, players’ names, and players’ likenesses into the games. Within these games, players control real-life athletes and can perform excessively violent behaviors on the electronic field. The ESPN program questioned why the athletic leagues would allow their license to be used in this manner and what effect these violent sports games had on young players. Then in December 2004, the NFL granted exclusive license rights to EA Sports (ESPN.com, 2005). In response, Midway Games began publishing a more violent, grittier football game based on a fictitious league. The new football video game, which is rated appropriate only for people seventeen and older, features fictitious players engaging in excessive violent behaviors on and off the field, drug use, sex, and gambling (IGN.com, 2005). Violence in video games has been a major social issue, not limited to violence in sports video games. Over 85% of the games on the market contain some violence (Children Now, 2001). Approximately half of video games include serious violent actions toward other game characters (Children Now, 2001; Dietz, 1998; Dill, Gentile, Richter, & Dill, 2005). Indeed, Congressman Joe Baca of California recently introduced Federal legislation to require that violent video games contain a warning label about their link to aggression (Baca, 2009). Since 1999, the amount of daily video game usage by youth has nearly doubled (Roberts, Foehr, & Rideout, 2005). Almost 60% of American youth from ages 8 to 18 report playing video games on “any given day” and 30% report playing for more than an average of an hour a day (Roberts et al., 2005). Video game usage is high in youth regardless of sex, race, parental education, or household income (Roberts et al., 2005). Competition-only versus violent-content hypotheses Recent meta-analyses (e.g., Anderson et al., 2004, submitted for publication) have shown that violent video game exposure increases physiological arousal, aggressive affect, aggressive cognition, and aggressive behavior. Other studies link violent video game play to physiological desensitization to violence (e.g., Bartholow, Bushman, & Sestir, 2006; Carnagey, Anderson, & Bushman, 2007). Particularly interesting is the recent finding that violent video game play can increase aggression in both short and long term contexts. Besides the empirical evidence, there are strong theoretical reasons from the cognitive, social, and personality domains to expect violent video game effects on aggression-related variables. 
However, currently there are two competing hypotheses as to how violent video games increases aggression: the violent-content hypothesis and the competition-only hypothesis. General Aggression Model and the violent-content hypothesis The General Aggression Model (GAM) is an integration of several prior models of aggression (e.g., social learning theory, cognitive neoassociation) and has been detailed in several publications (Anderson & Bushman, 2002; Anderson & Carnagey, 2004; Anderson, Gentile, & Buckley, 2007; Anderson & Huesmann, 2003). GAM describes a cyclical pattern of interaction between the person and the environment. Input variables, such as provocation and aggressive personality, can affect decision processes and behavior by influencing one’s present internal state in at least one of three primary ways: by influencing current cognitions, affective state, and physiological arousal. That is, a specific input variable may directly influence only one, or two, or all three aspects of a person’s internal state. For example, uncomfortably hot temperature appears to increase aggression primarily by its direct impact on affective state (Anderson, Anderson, Dorr, DeNeve, & Flanagan, 2000). Of course, because affect, arousal, and cognition tend to influence each other, even input variables that primarily influence one aspect of internal state also tend to indirectly influence the other aspects. Although GAM is a general model and not specifically a model of media violence effects, it can easily be applied to media effects. Theoretically, violent media exposure might affect all three components of present internal state. Research has shown that playing violent video games can temporarily increase aggressive thoughts (e.g., Kirsh, 1998), affect (e.g., Ballard & Weist, 1996), and arousal (e.g., Calvert & Tan, 1994). Of course, nonviolent games also can increase arousal, and for this reason much prior work has focused on testing whether violent content can increase aggressive behavior even when physiological arousal is controlled. This usually is accomplished by selecting nonviolent games that are equally arousing (e.g., Anderson et al., 2004). Despite’s GAM’s primary focus on the current social episode, it is not restricted to short-term effects. With repeated exposure to certain types of stimuli (e.g., media violence, certain parenting practices), particular knowledge structures (e.g., aggressive scripts, attitudes towards violence) become chronically accessible. Over time, the individual employs these knowledge structures and occasionally receives environmental reinforcement for their usage. With time and repeated use, these knowledge structures gain strength and connections to other stimuli and knowledge structures, and therefore are more likely to be used in later situations. This accounts for the finding that repeatedly exposing children to media violence increases later aggression, even into adulthood (Anderson, Sakamoto, Gentile, Ihori, & Shibuya, 2008; Huesmann & Miller, 1994; Huesmann, Moise-Titus, Podolski, & Eron, 2003; Möller & Krahé, 2009; Wallenius & Punamaki, 2008). Such longterm effects result from the development, automatization, and reinforcement of aggression-related knowledge structures. In essence, the creation and automatization of these aggression-related knowledge structures and concomitant emotional desensitization changes the individual’s personality. 
For example, long-term consumers of violent media can become more aggressive in outlook, perceptual biases, attitudes, beliefs, and behavior than they were before the repeated exposure, or would have become without such exposure (e.g., Funk, Baldacci, Pasold, & Baumgardner, 2004; Gentile, Lynch, Linder, & Walsh, 2004; Krahé & Möller, 2004; Uhlmann & Swanson, 2004). In sum, GAM predicts that one way violent video games increase aggression is by the violent content increasing at least one of the aggression-related aspects of a person’s current internal state (short-term context), and over time increasing the chronic accessibility of aggression-related knowledge structures. This is the violent-content hypothesis. The competition hypothesis The competition hypothesis maintains that competitive situations stimulate aggressiveness. According to this hypothesis, many previous short-term (experimental) video game studies have found links between violent games and aggression not because of the violent content, but because violent video games typically involve competition, whereas nonviolent video games frequently are noncompetitive. The competitive aspect of video games might increase aggression by increasing arousal or by increasing aggressive thoughts or affect. Previous research has demonstrated that increases in physiological arousal can cause increases in aggression under some circumstances (Berkowitz, 1993). Competitive aspects of violent video games could also increase aggressive cognitions via links between aggressive and competition concepts (Anderson & Morrow, 1995; Deutsch, 1949, 1993). Thus, at a general level such competition effects are entirely consistent with GAM and with the violentcontent hypothesis. However, a strong version of the competition hypothesis states that violent content has no impact beyond its effects on competition and its sequela. This strong version, which we call the competition-only hypothesis, has not been adequately tested. Testing the competition-only hypothesis There has been little research conducted to examine the violent-content hypothesis versus the competition-only hypothesis (see Carnagey & Anderson, 2005 for one such example). To test these hypotheses against each other, one must randomly assign participants to play either violent or nonviolent video games, all of which are competitive. The use of sports video games meets this requirement and has other benefits. E", "title": "" } ]
scidocsrr
45496e802019324e75a7495fe0651307
The Berlin brain-computer interface: EEG-based communication without subject training
[ { "docid": "5d247482bb06e837bf04c04582f4bfa2", "text": "This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis, as examples for successful kernel-based learning methods. We first give a short background about Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel based learning in supervised and unsupervised scenarios including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis.", "title": "" } ]
[ { "docid": "06abf2a7c6d0c25cfe54422268300e58", "text": "The purpose of the present study is to provide useful data that could be applied to various types of periodontal plastic surgery by detailing the topography of the greater palatine artery (GPA), looking in particular at its depth from the palatal masticatory mucosa (PMM) and conducting a morphometric analysis of the palatal vault. Forty-three hemisectioned hard palates from embalmed Korean adult cadavers were used in this study. The morphometry of the palatal vault was analyzed, and then the specimens were decalcified and sectioned. Six parameters were measured using an image-analysis system after performing a standard calibration. In one specimen, the PMM was separated from the hard palate and subjected to a partial Sihler's staining technique, allowing the branching pattern of the GPA to be observed in a new method. The distances between the GPA and the gingival margin, and between the GPA and the cementoenamel junction were greatest at the maxillary second premolar. The shortest vertical distance between the GPA and the PMM decreased gradually as it proceeded anteriorly. The GPA was located deeper in the high-vault group than in the low-vault group. The premolar region should be recommended as the optimal donor site for tissue grafting, and in particular the second premolar region. The maximum size and thickness of tissue that can be harvested from the region were 9.3 mm and 4.0 mm, respectively.", "title": "" }, { "docid": "dfdf2581010777e51ff3e29c5b9aee7f", "text": "This paper proposes a parallel architecture with resistive crosspoint array. The design of its two essential operations, read and write, is inspired by the biophysical behavior of a neural system, such as integrate-and-fire and local synapse weight update. The proposed hardware consists of an array with resistive random access memory (RRAM) and CMOS peripheral circuits, which perform matrix-vector multiplication and dictionary update in a fully parallel fashion, at the speed that is independent of the matrix dimension. The read and write circuits are implemented in 65 nm CMOS technology and verified together with an array of RRAM device model built from experimental data. The overall system exploits array-level parallelism and is demonstrated for accelerated dictionary learning tasks. As compared to software implementation running on a 8-core CPU, the proposed hardware achieves more than 3000 × speedup, enabling high-speed feature extraction on a single chip.", "title": "" }, { "docid": "d9789c6dc7febc25732617f0d57a43a1", "text": "When a binary or ordinal regression model incorrectly assumes that error variances are the same for all cases, the standard errors are wrong and (unlike OLS regression) the parameter estimates are biased. Heterogeneous choice (also known as location-scale or heteroskedastic ordered) models explicitly specify the determinants of heteroskedasticity in an attempt to correct for it. Such models are also useful when the variance itself is of substantive interest. This paper illustrates how the author’s Stata program oglm (Ordinal Generalized Linear Models) can be used to estimate heterogeneous choice and related models. It shows that two other models that have appeared in the literature (Allison’s model for group comparisons and Hauser and Andrew’s logistic response model with proportionality constraints) are special cases of a heterogeneous choice model and alternative parameterizations of it. 
The paper further argues that heterogeneous choice models may sometimes be an attractive alternative to other ordinal regression models, such as the generalized ordered logit model estimated by gologit2. Finally, the paper offers guidelines on how to interpret, test and modify heterogeneous choice models.", "title": "" }, { "docid": "6c106d560d8894d941851386d96afe2b", "text": "Cooperative vehicular networks require the exchange of positioning and basic status information between neighboring nodes to support higher layer protocols and applications, including active safety applications. The information exchange is based on the periodic transmission/reception of 1-hop broadcast messages on the so called control channel. The dynamic adaptation of the transmission parameters of such messages will be key for the reliable and efficient operation of the system. On one hand, congestion control protocols need to be applied to control the channel load, typically through the adaptation of the transmission parameters based on certain channel load metrics. On the other hand, awareness control protocols are also required to adequately support cooperative vehicular applications. Such protocols typically adapt the transmission parameters of periodic broadcast messages to ensure each vehicle's capacity to detect, and possibly communicate, with the relevant vehicles and infrastructure nodes present in its local neighborhood. To date, congestion and awareness control protocols have been normally designed and evaluated separately, although both will be required for the reliable and efficient operation of the system. To this aim, this paper proposes and evaluates INTERN, a new control protocol that integrates two congestion and awareness control processes. The simulation results obtained demonstrate that INTERN is able to satisfy the application's requirements of all vehicles, while effectively controlling the channel load.", "title": "" }, { "docid": "645395d46f653358d942742711d50c0b", "text": "Feature descriptors play a crucial role in a wide range of geometry analysis and processing applications, including shape correspondence, retrieval, and segmentation. In this paper, we propose ShapeNet, a generalization of the popular convolutional neural networks (CNN) paradigm to non-Euclidean manifolds. Our construction is based on a local geodesic system of polar coordinates to extract “patches”, which are then passed through a cascade of filters and linear and non-linear operators. The coefficients of the filters and linear combination weights are optimization variables that are learned to minimize a task-specific cost function. We use ShapeNet to learn invariant shape feature descriptors that significantly outperform recent state-of-the-art methods, and show that previous approaches such as heat and wave kernel signatures, optimal spectral descriptors, and intrinsic shape contexts can be obtained as particular configurations of ShapeNet. CR Categories: I.2.6 [Artificial Intelligence]: Learning— Connectionism and neural nets", "title": "" }, { "docid": "24ac33300d3ea99441068c20761e8305", "text": "Purpose – The purpose of this research is to examine the critical success factors of mobile web site adoption. Design/methodology/approach – Based on the valid responses collected from a questionnaire survey, the structural equation modelling technique was employed to examine the research model. 
Findings – The results indicate that system quality is the main factor affecting perceived ease of use, whereas information quality is the main factor affecting perceived usefulness. Service quality has significant effects on trust and perceived ease of use. Perceived usefulness, perceived ease of use and trust determine user satisfaction. Practical implications – Mobile service providers need to improve the system quality, information quality and service quality of mobile web sites to enhance user satisfaction. Originality/value – Previous research has mainly focused on e-commerce web site success and seldom examined the factors affecting mobile web site success. This research fills the gap. The research draws on information systems success theory, the technology acceptance model and trust theory as the theoretical bases.", "title": "" }, { "docid": "b92d89fec6f0e1cfd869290b015a7be5", "text": "Vertex-centric graph processing is employed by many popular algorithms (e.g., PageRank) due to its simplicity and efficient use of asynchronous parallelism. The high compute power provided by SIMT architecture presents an opportunity for accelerating these algorithms using GPUs. Prior works of graph processing on a GPU employ Compressed Sparse Row (CSR) form for its space-efficiency; however, CSR suffers from irregular memory accesses and GPU underutilization that limit its performance. In this paper, we present CuSha, a CUDA-based graph processing framework that overcomes the above obstacle via use of two novel graph representations: G-Shards and Concatenated Windows (CW). G-Shards uses a concept recently introduced for non-GPU systems that organizes a graph into autonomous sets of ordered edges called shards. CuSha's mapping of GPU hardware resources on to shards allows fully coalesced memory accesses. CW is a novel representation that enhances the use of shards to achieve higher GPU utilization for processing sparse graphs. Finally, CuSha fully utilizes the GPU power by processing multiple shards in parallel on GPU's streaming multiprocessors. For ease of programming, CuSha allows the user to define the vertex-centric computation and plug it into its framework for parallel processing of large graphs. Our experiments show that CuSha provides significant speedups over the state-of-the-art CSR-based virtual warp-centric method for processing graphs on GPUs.", "title": "" }, { "docid": "8fe823702191b4a56defaceee7d19db6", "text": "We propose a method of stacking multiple long short-term memory (LSTM) layers for modeling sentences. In contrast to the conventional stacked LSTMs where only hidden states are fed as input to the next layer, our architecture accepts both hidden and memory cell states of the preceding layer and fuses information from the left and the lower context using the soft gating mechanism of LSTMs. Thus the proposed stacked LSTM architecture modulates the amount of information to be delivered not only in horizontal recurrence but also in vertical connections, from which useful features extracted from lower layers are effectively conveyed to upper layers. 
We dub this architecture Cell-aware Stacked LSTM (CAS-LSTM) and show from experiments that our models achieve state-of-the-art results on benchmark datasets for natural language inference, paraphrase detection, and sentiment classification.", "title": "" }, { "docid": "77ec15fd35f9bceee4537afc63c82079", "text": "Grapheme-to-phoneme conversion plays an important role in text-to-speech applications and other fields of computational linguistics. Although Korean uses a phonemic writing system, it must have a grapheme-to-phoneme conversion for speech synthesis because Korean writing system does not always reflect its actual pronunciations. This paper describes a grapheme-to-phoneme conversion method based on sound patterns to convert Korean text strings into phonemic representations. In the experiment with Korean news broadcasting evaluation set of 20 sentences, the accuracy of our system achieve as high as 98.70% on conversion. The performance of our rule-based system shows that the rule-based sound patterns are effective on Korean grapheme-to-phoneme conversion.", "title": "" }, { "docid": "617db9b325e211b45571db6fb8dc6c87", "text": "This paper gives a review of acoustic and ultrasonic optical fiber sensors (OFSs). The review covers optical fiber sensing methods for detecting dynamic strain signals, including general sound and acoustic signals, high-frequency signals, i.e., ultrasonic/ultrasound, and other signals such as acoustic emissions, and impact induced dynamic strain. Several optical fiber sensing methods are included, in an attempted to summarize the majority of optical fiber sensing methods used to date. The OFS include single fiber sensors and optical fiber devices, fiber-optic interferometers, and fiber Bragg gratings (FBGs). The single fiber and fiber device sensors include optical fiber couplers, microbend sensors, refraction-based sensors, and other extrinsic intensity sensors. The optical fiber interferometers include Michelson, Mach-Zehnder, Fabry-Perot, Sagnac interferometers, as well as polarization and model interference. The specific applications addressed in this review include optical fiber hydrophones, biomedical sensors, and sensors for nondestructive evaluation and structural health monitoring. Future directions are outlined and proposed for acousto-ultrasonic OFS.", "title": "" }, { "docid": "368e72277a5937cb8ee94cea3fa11758", "text": "Monoclinic Gd2O3:Eu(3+) nanoparticles (NPs) possess favorable magnetic and optical properties for biomedical application. However, how to obtain small enough NPs still remains a challenge. Here we combined the standard solid-state reaction with the laser ablation in liquids (LAL) technique to fabricate sub-10 nm monoclinic Gd2O3:Eu(3+) NPs and explained their formation mechanism. The obtained Gd2O3:Eu(3+) NPs exhibit bright red fluorescence emission and can be successfully used as fluorescence probe for cells imaging. In vitro and in vivo magnetic resonance imaging (MRI) studies show that the product can also serve as MRI good contrast agent. Then, we systematically investigated the nanotoxicity including cell viability, apoptosis in vitro, as well as the immunotoxicity and pharmacokinetics assays in vivo. 
This investigation provides a platform for the fabrication of ultrafine monoclinic Gd2O3:Eu(3+) NPs and evaluation of their efficiency and safety in preclinical application.", "title": "" }, { "docid": "3dc3e680c68aefb6968fbe120d203cdf", "text": "A procedure for reflection and discourse on the behavior of bots in the context of law, deception, and societal norms.", "title": "" }, { "docid": "49e5f9e36efb6b295868a307c1486c60", "text": "This paper reviews ultrasound segmentation methods, in a broad sense, focusing on techniques developed for medical B-mode ultrasound images. First, we present a review of articles by clinical application to highlight the approaches that have been investigated and degree of validation that has been done in different clinical domains. Then, we present a classification of methodology in terms of use of prior information. We conclude by selecting ten papers which have presented original ideas that have demonstrated particular clinical usefulness or potential specific to the ultrasound segmentation problem", "title": "" }, { "docid": "e7d5dd2926238db52cf406f20947f90e", "text": "The development of the capital markets is changing the relevance and empirical validity of the efficient market hypothesis. The dynamism of capital markets determines the need for efficiency research. The authors analyse the development and the current status of the efficient market hypothesis with an emphasis on the Baltic stock market. Investors often fail to earn an excess profit, but yet stock market anomalies are observed and market prices often deviate from their intrinsic value. The article presents an analysis of the concept of efficient market. Also, the market efficiency evolution is reviewed and its current status is analysed. This paper presents also an examination of stock market efficiency in the Baltic countries. Finally, the research methods are reviewed and the methodology of testing the weak-form efficiency in a developing market is suggested.", "title": "" }, { "docid": "059583d1d8a6f99bae3736d900008caa", "text": "Ultraviolet disinfection is a frequent option for eliminating viable organisms in ballast water to fulfill international and national regulations. The objective of this work is to evaluate the reduction of microalgae able to reproduce after UV irradiation, based on their growth features. A monoculture of microalgae Tisochrysis lutea was irradiated with different ultraviolet doses (UV-C 254 nm) by a flow-through reactor. A replicate of each treated sample was held in the dark for 5 days simulating a treatment during the ballasting; another replicate was incubated directly under the light, corresponding to the treatment application during de-ballasting. Periodic measurements of cell density were taken in order to obtain the corresponding growth curves. Irradiated samples depicted a regrowth following a logistic curve in concordance with the applied UV dose. By modeling these curves, it is possible to obtain the initial concentration of organisms able to reproduce for each applied UV dose, thus obtaining the dose-survival profiles, needed to determine the disinfection kinetics. These dose-survival profiles enable detection of a synergic effect between the ultraviolet irradiation and a subsequent dark period; in this sense, the UV dose applied during the ballasting operation and subsequent dark storage exerts a strong influence on microalgae survival. 
The proposed methodology, based on growth modeling, established a framework for comparing the UV disinfection by different devices and technologies on target organisms. This procedure may also assist the understanding of the evolution of treated organisms in more complex assemblages such as those that exist in natural ballast water.", "title": "" }, { "docid": "2c95ebadb6544904b791cdbbbd70dc1c", "text": "This report describes a small heartbeat monitoring system using capacitively coupled ECG sensors. Capacitively coupled sensors using an insulated electrode have been proposed to obtain ECG signals without pasting electrodes directly onto the skin. Although the sensors have better usability than conventional ECG sensors, it is difficult to remove noise contamination. Power-line noise can be a severe noise source that increases when only a single electrode is used. However, a multiple electrode system degrades usability. To address this problem, we propose a noise cancellation technique using an adaptive noise feedback approach, which can improve the availability of the capacitive ECG sensor using a single electrode. An instrumental amplifier is used in the proposed method for the first stage amplifier instead of voltage follower circuits. A microcontroller predicts the noise waveform from an ADC output. To avoid saturation caused by power-line noise, the predicted noise waveform is fed back to an amplifier input through a DAC. We implemented the prototype sensor system to evaluate the noise reduction performance. Measurement results using a prototype board show that the proposed method can suppress 28-dB power-line noise.", "title": "" }, { "docid": "1dd4bed5dd52b18f39c0e96c0a14c153", "text": "Understanding the generalization of deep learning has raised lots of concerns recently, where the learning algorithms play an important role in generalization performance, such as stochastic gradient descent (SGD). Along this line, we particularly study the anisotropic noise introduced by SGD, and investigate its importance for the generalization in deep neural networks. Through a thorough empirical analysis, it is shown that the anisotropic diffusion of SGD tends to follow the curvature information of the loss landscape, and thus is beneficial for escaping from sharp and poor minima effectively, towards more stable and flat minima. We verify our understanding through comparing this anisotropic diffusion with full gradient descent plus isotropic diffusion (i.e. Langevin dynamics) and other types of positiondependent noise.", "title": "" }, { "docid": "6f242ee8418eebdd9fdce50ca1e7cfa2", "text": "HAL is a multidisciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L'archive ouverte pluridisciplinaire HAL, est destinée au dépôt età la diffusion de documents scientifiques de niveau recherche, publiés ou non, ´ emanant desétablissements d'enseignement et de recherche français oú etrangers, des laboratoires publics ou privés. Summary. This paper describes the construction and functionality of an Autonomous Fruit Picking Machine (AFPM) for robotic apple harvesting. The key element for the success of the AFPM is the integrated approach which combines state of the art industrial components with the newly designed flexible gripper. The gripper consist of a silicone funnel with a camera mounted inside. 
The proposed concepts guarantee adequate control of the autonomous fruit harvesting operation globally and of the fruit picking cycle particularly. Extensive experiments in the field validate the functionality of the AFPM.", "title": "" }, { "docid": "aa4b36c95058177167c58d4e192c8c1d", "text": "Face detection is a prominent research domain in the field of digital image processing. Out of various algorithms developed so far, Viola–Jones face detection has been highly successful. However, because of its complex nature, there is need to do more exploration in its various phases including training as well as actual face detection to find the scope of further improvement in terms of efficiency as well as accuracy under various constraints so as to detect and process the faces in real time. Its training phase for the screening of large amount of Haar features and generation of cascade classifiers is quite tedious and computationally intensive task. Any modification for improvement in its features or cascade classifiers requires re-training of all the features through example images, which are very large in number. Therefore, there is need to enhance the computational efficiency of training process of Viola–Jones face detection algorithm so that further enhancement in this framework is made easy. There are three main contributions in this research work. Firstly, we have achieved a considerable speedup by parallelizing the training as well as detection of rectangular Haar features based upon Viola–Jones framework on GPU. Secondly, the analysis of features selected through AdaBoost has been done, which can give intuitiveness in developing more innovative and efficient techniques for selecting competitive classifiers for the task of face detection, which can further be generalized for any type of object detection. Thirdly, implementation of parallelization techniques of modified version of Viola–Jones face detection algorithm in combination with skin color filtering to reduce the search space has been done. We have been able to achieve considerable reduction in the search space and time cost by using the skin color filtering in conjunction with the Viola–Jones algorithm. Time cost reduction of the order of 54.31% at the image resolution of 640*480 of GPU time versus CPU time has been achieved by the proposed parallelized algorithm.", "title": "" }, { "docid": "45ec93ccf4b2f6a6b579a4537ca73e9c", "text": "Concurrent collections provide thread-safe, highly-scalable operations, and are widely used in practice. However, programmers can misuse these concurrent collections when composing two operations where a check on the collection (such as non-emptiness) precedes an action (such as removing an entry). Unless the whole composition is atomic, the program contains an atomicity violation bug. In this paper we present the first empirical study of CHECK-THEN-ACT idioms of Java concurrent collections in a large corpus of open-source applications. We catalog nine commonly misused CHECK-THEN-ACT idioms and show the correct usage. We quantitatively and qualitatively analyze 28 widely-used open source Java projects that use Java concurrency collections - comprising 6.4M lines of code. We classify the commonly used idioms, the ones that are the most error-prone, and the evolution of the programs with respect to misused idioms. We implemented a tool, CTADetector, to detect and correct misused CHECK-THEN-ACT idioms. Using CTADetector we found 282 buggy instances. 
We reported 155 to the developers, who examined 90 of them. The developers confirmed 60 as new bugs and accepted our patch. This shows that CHECK-THEN-ACT idioms are commonly misused in practice, and correcting them is important.", "title": "" } ]
scidocsrr
91d3aaa0c760b2f9d43f6f7e15235d23
Can a mind have two time lines? Exploring space-time mapping in Mandarin and English speakers.
[ { "docid": "d159042f8f88d86ffe8e8e186953ba86", "text": "How do we construct abstract ideas like justice, mathematics, or time-travel? In this paper we investigate whether mental representations that result from physical experience underlie people's more abstract mental representations, using the domains of space and time as a testbed. People often talk about time using spatial language (e.g., a long vacation, a short concert). Do people also think about time using spatial representations, even when they are not using language? Results of six psychophysical experiments revealed that people are unable to ignore irrelevant spatial information when making judgments about duration, but not the converse. This pattern, which is predicted by the asymmetry between space and time in linguistic metaphors, was demonstrated here in tasks that do not involve any linguistic stimuli or responses. These findings provide evidence that the metaphorical relationship between space and time observed in language also exists in our more basic representations of distance and duration. Results suggest that our mental representations of things we can never see or touch may be built, in part, out of representations of physical experiences in perception and motor action.", "title": "" }, { "docid": "5b55b1c913aa9ec461c6c51c3d00b11b", "text": "Grounded cognition rejects traditional views that cognition is computation on amodal symbols in a modular system, independent of the brain's modal systems for perception, action, and introspection. Instead, grounded cognition proposes that modal simulations, bodily states, and situated action underlie cognition. Accumulating behavioral and neural evidence supporting this view is reviewed from research on perception, memory, knowledge, language, thought, social cognition, and development. Theories of grounded cognition are also reviewed, as are origins of the area and common misperceptions of it. Theoretical, empirical, and methodological issues are raised whose future treatment is likely to affect the growth and impact of grounded cognition.", "title": "" } ]
[ { "docid": "79c7bf1036877ca867da7595e8cef6e2", "text": "A two-process theory of human information processing is proposed and applied to detection, search, and attention phenomena. Automatic processing is activation of a learned sequence of elements in long-term memory that is initiated by appropriate inputs and then proceeds automatically—without subject control, without stressing the capacity limitations of the system, and without necessarily demanding attention. Controlled processing is a temporary activation of a sequence of elements that can be set up quickly and easily but requires attention, is capacity-limited (usually serial in nature), and is controlled by the subject. A series of studies using both reaction time and accuracy measures is presented, which traces these concepts in the form of automatic detection and controlled, search through the areas of detection, search, and attention. Results in these areas are shown to arise from common mechanisms. Automatic detection is shown to develop following consistent mapping of stimuli to responses over trials. Controlled search is utilized in varied-mapping paradigms, and in our studies, it takes the form of serial, terminating search. The approach resolves a number of apparent conflicts in the literature.", "title": "" }, { "docid": "e591165d8e141970b8263007b076dee1", "text": "Treating a human mind like a machine is an essential component of dehumanization, whereas attributing a humanlike mind to a machine is an essential component of anthropomorphism. Here we tested how a cue closely connected to a person's actual mental experience-a humanlike voice-affects the likelihood of mistaking a person for a machine, or a machine for a person. We predicted that paralinguistic cues in speech are particularly likely to convey the presence of a humanlike mind, such that removing voice from communication (leaving only text) would increase the likelihood of mistaking the text's creator for a machine. Conversely, adding voice to a computer-generated script (resulting in speech) would increase the likelihood of mistaking the text's creator for a human. Four experiments confirmed these hypotheses, demonstrating that people are more likely to infer a human (vs. computer) creator when they hear a voice expressing thoughts than when they read the same thoughts in text. Adding human visual cues to text (i.e., seeing a person perform a script in a subtitled video clip), did not increase the likelihood of inferring a human creator compared with only reading text, suggesting that defining features of personhood may be conveyed more clearly in speech (Experiments 1 and 2). Removing the naturalistic paralinguistic cues that convey humanlike capacity for thinking and feeling, such as varied pace and intonation, eliminates the humanizing effect of speech (Experiment 4). We discuss implications for dehumanizing others through text-based media, and for anthropomorphizing machines through speech-based media. (PsycINFO Database Record", "title": "" }, { "docid": "eee51fc5cd3bee512b01193fa396e19a", "text": "Croston’s method is a widely used to predict inventory demand when it is inter­ mittent. However, it is an ad hoc method with no properly formulated underlying stochastic model. In this paper, we explore possible models underlying Croston’s method and three related methods, and we show that any underlying model will be inconsistent with the prop­ erties of intermittent demand data. 
However, we find that the point forecasts and prediction intervals based on such underlying models may still be useful. [JEL: C53, C22, C51]", "title": "" }, { "docid": "bbcd26c47892476092a779869be7040c", "text": "This article reviews the thyroid system, mainly from a mammalian standpoint. However, the thyroid system is highly conserved among vertebrate species, so the general information on thyroid hormone production and feedback through the hypothalamic-pituitary-thyroid (HPT) axis should be considered for all vertebrates, while species-specific differences are highlighted in the individual articles. This background article begins by outlining the HPT axis with its components and functions. For example, it describes the thyroid gland, its structure and development, how thyroid hormones are synthesized and regulated, the role of iodine in thyroid hormone synthesis, and finally how the thyroid hormones are released from the thyroid gland. It then progresses to detail areas within the thyroid system where disruption could occur or is already known to occur. It describes how thyroid hormone is transported in the serum and into the tissues on a cellular level, and how thyroid hormone is metabolized. There is an in-depth description of the alpha and beta thyroid hormone receptors and their functions, including how they are regulated, and what has been learned from the receptor knockout mouse models. The nongenomic actions of thyroid hormone are also described, such as in glucose uptake, mitochondrial effects, and its role in actin polymerization and vesicular recycling. The article discusses the concept of compensation within the HPT axis and how this fits into the paradigms that exist in thyroid toxicology/endocrinology. There is a section on thyroid hormone and its role in mammalian development: specifically, how it affects brain development when there is disruption to the maternal, the fetal, the newborn (congenital), or the infant thyroid system. Thyroid function during pregnancy is critical to normal development of the fetus, and several spontaneous mutant mouse lines are described that provide research tools to understand the mechanisms of thyroid hormone during mammalian brain development. Overall this article provides a basic understanding of the thyroid system and its components. The complexity of the thyroid system is clearly demonstrated, as are new areas of research on thyroid hormone physiology and thyroid hormone action developing within the field of thyroid endocrinology. This review provides the background necessary to review the current assays and endpoints described in the following articles for rodents, fishes, amphibians, and birds.", "title": "" }, { "docid": "16c6e41746c451d66b43c5736f622cda", "text": "In this study, we report a multimodal energy harvesting device that combines electromagnetic and piezoelectric energy harvesting mechanism. The device consists of piezoelectric crystals bonded to a cantilever beam. The tip of the cantilever beam has an attached permanent magnet which, oscillates within a stationary coil fixed to the top of the package. The permanent magnet serves two purpose (i) acts as a tip mass for the cantilever beam and lowers the resonance frequency, and (ii) acts as a core which oscillates between the inductive coils resulting in electric current generation through Faraday’s effect. Thus, this design combines the energy harvesting from two different mechanisms, piezoelectric and electromagnetic, on the same platform. 
The prototype system was optimized using the finite element software, ANSYS, to find the resonance frequency and stress distribution. The power generated from the fabricated prototype was found to be 0.25W using the electromagnetic mechanism and 0.25mW using the piezoelectric mechanism at 35 g acceleration and 20Hz frequency.", "title": "" }, { "docid": "18738a644f88af299d9e94157f804812", "text": "Twitter is among the fastest-growing microblogging and online social networking services. Messages posted on Twitter (tweets) have been reporting everything from daily life stories to the latest local and global news and events. Monitoring and analyzing this rich and continuous user-generated content can yield unprecedentedly valuable information, enabling users and organizations to acquire actionable knowledge. This article provides a survey of techniques for event detection from Twitter streams. These techniques aim at finding real-world occurrences that unfold over space and time. In contrast to conventional media, event detection from Twitter streams poses new challenges. Twitter streams contain large amounts of meaningless messages and polluted content, which negatively affect the detection performance. In addition, traditional text mining techniques are not suitable, because of the short length of tweets, the large number of spelling and grammatical errors, and the frequent use of informal and mixed language. Event detection techniques presented in literature address these issues by adapting techniques from various fields to the uniqueness of Twitter. This article classifies these techniques according to the event type, detection task, and detection method and discusses commonly used features. Finally, it highlights the need for public benchmarks to evaluate the performance of different detection approaches and various features.", "title": "" }, { "docid": "bd963a55c28304493118028fe5f47bab", "text": "Tables are a common structuring element in many documents, s uch as PDF files. To reuse such tables, appropriate methods need to b e develop, which capture the structure and the content information. We have d e loped several heuristics which together recognize and decompose tables i n PDF files and store the extracted data in a structured data format (XML) for easi er reuse. Additionally, we implemented a prototype, which gives the user the ab ility of making adjustments on the extracted data. Our work shows that purel y heuristic-based approaches can achieve good results, especially for lucid t ables.", "title": "" }, { "docid": "cb4966a838bbefccbb1b74e5f541ce76", "text": "Theories of human behavior are an important but largely untapped resource for software engineering research. They facilitate understanding of human developers’ needs and activities, and thus can serve as a valuable resource to researchers designing software engineering tools. Furthermore, theories abstract beyond specific methods and tools to fundamental principles that can be applied to new situations. Toward filling this gap, we investigate the applicability and utility of Information Foraging Theory (IFT) for understanding information-intensive software engineering tasks, drawing upon literature in three areas: debugging, refactoring, and reuse. In particular, we focus on software engineering tools that aim to support information-intensive activities, that is, activities in which developers spend time seeking information. 
Regarding applicability, we consider whether and how the mathematical equations within IFT can be used to explain why certain existing tools have proven empirically successful at helping software engineers. Regarding utility, we applied an IFT perspective to identify recurring design patterns in these successful tools, and consider what opportunities for future research are revealed by our IFT perspective.", "title": "" }, { "docid": "a92772d3d3b6bf34ddf750f8d111f511", "text": "More than 20 years ago, researchers proposed that individual differences in performance in such domains as music, sports, and games largely reflect individual differences in amount of deliberate practice, which was defined as engagement in structured activities created specifically to improve performance in a domain. This view is a frequent topic of popular-science writing-but is it supported by empirical evidence? To answer this question, we conducted a meta-analysis covering all major domains in which deliberate practice has been investigated. We found that deliberate practice explained 26% of the variance in performance for games, 21% for music, 18% for sports, 4% for education, and less than 1% for professions. We conclude that deliberate practice is important, but not as important as has been argued.", "title": "" }, { "docid": "424239765383edd8079d90f63b3fde1d", "text": "The availability of huge amounts of medical data leads to the need for powerful data analysis tools to extract useful knowledge. Researchers have long been concerned with applying statistical and data mining tools to improve data analysis on large data sets. Disease diagnosis is one of the applications where data mining tools are proving successful results. Heart disease is the leading cause of death all over the world in the past ten years. Several researchers are using statistical and data mining tools to help health care professionals in the diagnosis of heart disease. Using single data mining technique in the diagnosis of heart disease has been comprehensively investigated showing acceptable levels of accuracy. Recently, researchers have been investigating the effect of hybridizing more than one technique showing enhanced results in the diagnosis of heart disease. However, using data mining techniques to identify a suitable treatment for heart disease patients has received less attention. This paper identifies gaps in the research on heart disease diagnosis and treatment and proposes a model to systematically close those gaps to discover if applying data mining techniques to heart disease treatment data can provide as reliable performance as that achieved in diagnosing heart disease.", "title": "" }, { "docid": "5fefeace0e6b5db92fa26e5201429c4b", "text": "For a real-time visualization of one of the Dutch harbors we needed a realistic looking water surface. The old shader showed the same waves everywhere, but inside a harbor waves have many different directions and sizes. To solve this problem we needed a shader capable of visualizing flow. We developed a new algorithm called Tiled Directional Flow which has several advantages over other implementations.", "title": "" }, { "docid": "33df4246544a1847b09018cc65ffc995", "text": "In this paper, we propose a method for computing partial functional correspondence between non-rigid shapes. We use perturbation analysis to show how removal of shape parts changes the Laplace-Beltrami eigenfunctions, and exploit it as a prior on the spectral representation of the correspondence. 
Corresponding parts are optimization variables in our problem and are used to weight the functional correspondence; we are looking for the largest and most regular (in the Mumford-Shah sense) parts that minimize correspondence distortion. We show that our approach can cope with very challenging correspondence settings.", "title": "" }, { "docid": "255de21131ccf74c3269cc5e7c21820b", "text": "This paper discusses the effect of driving current on frequency response of the two types of light emitting diodes (LEDs), namely, phosphor-based LED and single color LED. The experiments show that the influence of the change of driving current on frequency response of phosphor-based LED is not obvious compared with the single color LED(blue, red and green). The experiments also find that the bandwidth of the white LED was expanded from 1MHz to 32MHz by the pre-equalization strategy and 26Mbit/s transmission speed was taken under Bit Error Ratio of 7.55×10-6 within 3m by non-return-to-zero on-off-keying modulation. Especially, the frequency response intensity of the phosphor-based LED is little influenced by the fluctuation of the driving current, which meets the requirements that the indoor light source needs to be adjusted in real-time by driving current. As the bandwidth of the single color LED is changed by the driving current obviously, the LED modulation bandwidth should be calculated according to the minimum driving current while we consider the requirement of the VLC transmission speed.", "title": "" }, { "docid": "aed7f6b54aeaf11ec6596d1f04b9db48", "text": "Discourse modes play an important role in writing composition and evaluation. This paper presents a study on the manual and automatic identification of narration, exposition, description, argumentandemotion expressingsentences in narrative essays. We annotate a corpus to study the characteristics of discourse modes and describe a neural sequence labeling model for identification. Evaluation results show that discourse modes can be identified automatically with an average F1-score of 0.7. We further demonstrate that discourse modes can be used as features that improve automatic essay scoring (AES). The impacts of discourse modes for AES are also discussed.", "title": "" }, { "docid": "c30d53cd8c350615f20d5baef55de6d0", "text": "The Internet of Things (IoT) is everywhere around us. Smart communicating objects offer the digitalization of lives. Thus, IoT opens new opportunities in criminal investigations such as a protagonist or a witness to the event. Any investigation process involves four phases: firstly the identification of an incident and its evidence, secondly device collection and preservation, thirdly data examination and extraction and then finally data analysis and formalization.\n In recent years, the scientific community sought to develop a common digital framework and methodology adapted to IoT-based infrastructure. However, the difficulty of IoT lies in the heterogeneous nature of the device, lack of standards and the complex architecture. Although digital forensics are considered and adopted in IoT investigations, this work only focuses on collection. Indeed the identification phase is relatively unexplored. It addresses challenges of finding the best evidence and locating hidden devices. So, the traditional method of digital forensics does not fully fit the IoT environment.\n In this paperwork, we investigate the mobility in the context of IoT at the crime scene. 
This paper discusses the data identification and the classification methodology from IoT to looking for the best evidences. We propose tools and techniques to identify and locate IoT devices. We develop the recent concept of \"digital footprint\" in the crime area based on frequencies and interactions mapping between devices. We propose technical and data criteria to efficiently select IoT devices. Finally, the paper introduces a generalist classification table as well as the limits of such an approach.", "title": "" }, { "docid": "f87a4ddb602d9218a0175a9e804c87c6", "text": "We present a novel online audio-score alignment approach for multi-instrument polyphonic music. This approach uses a 2-dimensional state vector to model the underlying score position and tempo of each time frame of the audio performance. The process model is defined by dynamic equations to transition between states. Two representations of the observed audio frame are proposed, resulting in two observation models: a multi-pitch-based and a chroma-based. Particle filtering is used to infer the hidden states from observations. Experiments on 150 music pieces with polyphony from one to four show the proposed approach outperforms an existing offline global string alignment-based score alignment approach. Results also show that the multi-pitch-based observation model works better than the chroma-based one.", "title": "" }, { "docid": "1d1caa539215e7051c25a9f28da48651", "text": "Physiological changes occur in pregnancy to nurture the developing foetus and prepare the mother for labour and delivery. Some of these changes influence normal biochemical values while others may mimic symptoms of medical disease. It is important to differentiate between normal physiological changes and disease pathology. This review highlights the important changes that take place during normal pregnancy.", "title": "" }, { "docid": "cb8fa49be63150e1b85f98a44df691a5", "text": "SQL tuning---the attempt to improve a poorly-performing execution plan produced by the database query optimizer---is a critical aspect of database performance tuning. Ironically, as commercial databases strive to improve on the manageability front, SQL tuning is becoming more of a black art. It requires a high level of expertise in areas like (i) query optimization, run-time execution of query plan operators, configuration parameter settings, and other database internals; (ii) identification of missing indexes and other access structures; (iii) statistics maintained about the data; and (iv) characteristics of the underlying storage system. Since database systems, their workloads, and the data that they manage are not getting any simpler, database users and administrators often rely on trial and error for SQL tuning.\n In this paper, we take the position that the trial-and-error (or, experiment-driven) process of SQL tuning can be automated by the database system in an efficient manner; freeing the user or administrator from this burden in most cases. A number of current approaches to SQL tuning indeed take an experiment-driven approach. We are prototyping a tool, called zTuned, that automates experiment-driven SQL tuning. This paper describes the design choices in zTuned to address three nontrivial issues: (i) how is the SQL tuning logic integrated with the regular query optimizer, (ii) how to plan the experiments to conduct so that a satisfactory (new) plan can be found quickly, and (iii) how to conduct experiments with minimal impact on the user-facing production workload. 
We conclude with a preliminary empirical evaluation and outline promising new directions in automated SQL tuning.", "title": "" }, { "docid": "2f1acb3378e5281efac7db5b3371b131", "text": "Model-based reinforcement learning (RL) is considered to be a promising approach to reduce the sample complexity that hinders model-free RL. However, the theoretical understanding of such methods has been rather limited. This paper introduces a novel algorithmic framework for designing and analyzing model-based RL algorithms with theoretical guarantees. We design a meta-algorithm with a theoretical guarantee of monotone improvement to a local maximum of the expected reward. The meta-algorithm iteratively builds a lower bound of the expected reward based on the estimated dynamical model and sample trajectories, and then maximizes the lower bound jointly over the policy and the model. The framework extends the optimism-in-face-of-uncertainty principle to non-linear dynamical models in a way that requires no explicit uncertainty quantification. Instantiating our framework with simplification gives a variant of model-based RL algorithms Stochastic Lower Bounds Optimization (SLBO). Experiments demonstrate that SLBO achieves stateof-the-art performance when only one million or fewer samples are permitted on a range of continuous control benchmark tasks.1", "title": "" }, { "docid": "6de71e8106d991d2c3d2b845a9e0a67e", "text": "XML repositories are now a widespread means for storing and exchanging information on the Web. As these repositories become increasingly used in dynamic applications such as e-commerce, there is a rapidly growing need for a mechanism to incorporate reactive functionality in an XML setting. Event-condition-action (ECA) rules are a technology from active databases and are a natural method for supporting suchfunctionality. ECA rules can be used for activities such as automatically enforcing document constraints, maintaining repository statistics, and facilitating publish/subscribe applications. An important question associated with the use of a ECA rules is how to statically predict their run-time behaviour. In this paper, we define a language for ECA rules on XML repositories. We then investigate methods for analysing the behaviour of a set of ECA rules, a task which has added complexity in this XML setting compared with conventional active databases.", "title": "" } ]
scidocsrr
c93eb746fd3537a1ea9f7f5374b87d00
Cytoscape Web: an interactive web-based network browser
[ { "docid": "6f77e74cd8667b270fae0ccc673b49a5", "text": "GeneMANIA (http://www.genemania.org) is a flexible, user-friendly web interface for generating hypotheses about gene function, analyzing gene lists and prioritizing genes for functional assays. Given a query list, GeneMANIA extends the list with functionally similar genes that it identifies using available genomics and proteomics data. GeneMANIA also reports weights that indicate the predictive value of each selected data set for the query. Six organisms are currently supported (Arabidopsis thaliana, Caenorhabditis elegans, Drosophila melanogaster, Mus musculus, Homo sapiens and Saccharomyces cerevisiae) and hundreds of data sets have been collected from GEO, BioGRID, Pathway Commons and I2D, as well as organism-specific functional genomics data sets. Users can select arbitrary subsets of the data sets associated with an organism to perform their analyses and can upload their own data sets to analyze. The GeneMANIA algorithm performs as well or better than other gene function prediction methods on yeast and mouse benchmarks. The high accuracy of the GeneMANIA prediction algorithm, an intuitive user interface and large database make GeneMANIA a useful tool for any biologist.", "title": "" } ]
[ { "docid": "4d0b163e7c4c308696fa5fd4d93af894", "text": "Modeling agent behavior is central to understanding the emergence of complex phenomena in multiagent systems. Prior work in agent modeling has largely been task-specific and driven by handengineering domain-specific prior knowledge. We propose a general learning framework for modeling agent behavior in any multiagent system using only a handful of interaction data. Our framework casts agent modeling as a representation learning problem. Consequently, we construct a novel objective inspired by imitation learning and agent identification and design an algorithm for unsupervised learning of representations of agent policies. We demonstrate empirically the utility of the proposed framework in (i) a challenging highdimensional competitive environment for continuous control and (ii) a cooperative environment for communication, on supervised predictive tasks, unsupervised clustering, and policy optimization using deep reinforcement learning.", "title": "" }, { "docid": "b4b2c5f66c948cbd4c5fbff7f9062f12", "text": "China is taking major steps to improve Beijing’s air quality for the 2008 Olympic Games. However, concentrations of fine particulate matter and ozone in Beijing often exceed healthful levels in the summertime. Based on the US EPA’s Models-3/CMAQ model simulation over the Beijing region, we estimate that about 34% of PM2.5 on average and 35–60% of ozone during high ozone episodes at the Olympic Stadium site can be attributed to sources outside Beijing. Neighboring Hebei and Shandong Provinces and the Tianjin Municipality all exert significant influence on Beijing’s air quality. During sustained wind flow from the south, Hebei Province can contribute 50–70% of Beijing’s PM2.5 concentrations and 20–30% of ozone. Controlling only local sources in Beijing will not be sufficient to attain the air quality goal set for the Beijing Olympics. There is an urgent need for regional air quality management studies and new emission control strategies to ensure that the air quality goals for 2008 are met. r 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "35527aff5ef7f67a19166c0e7e81f77f", "text": "BACKGROUND\nAtherosclerotic plaque stability is related to histological composition. However, current diagnostic tools do not allow adequate in vivo identification and characterization of plaques. Spectral analysis of backscattered intravascular ultrasound (IVUS) data has potential for real-time in vivo plaque classification.\n\n\nMETHODS AND RESULTS\nEighty-eight plaques from 51 left anterior descending coronary arteries were imaged ex vivo at physiological pressure with the use of 30-MHz IVUS transducers. After IVUS imaging, the arteries were pressure-fixed and corresponding histology was collected in matched images. Regions of interest, selected from histology, were 101 fibrous, 56 fibrolipidic, 50 calcified, and 70 calcified-necrotic regions. Classification schemes for model building were computed for autoregressive and classic Fourier spectra by using 75% of the data. The remaining data were used for validation. Autoregressive classification schemes performed better than those from classic Fourier spectra with accuracies of 90.4% for fibrous, 92.8% for fibrolipidic, 90.9% for calcified, and 89.5% for calcified-necrotic regions in the training data set and 79.7%, 81.2%, 92.8%, and 85.5% in the test data, respectively. 
Tissue maps were reconstructed with the use of accurate predictions of plaque composition from the autoregressive classification scheme.\n\n\nCONCLUSIONS\nCoronary plaque composition can be predicted through the use of IVUS radiofrequency data analysis. Autoregressive classification schemes performed better than classic Fourier methods. These techniques allow real-time analysis of IVUS data, enabling in vivo plaque characterization.", "title": "" }, { "docid": "14682892d663cb1d351f54f3534c44b2", "text": "Feel lonely? What about reading books? Book is one of the greatest friends to accompany while in your lonely time. When you have no friends and activities somewhere and sometimes, reading book can be a great choice. This is not only for spending the time, it will increase the knowledge. Of course the b=benefits to take will relate to what kind of book that you are reading. And now, we will concern you to try reading data quality concepts methodologies and techniques as one of the reading material to finish quickly.", "title": "" }, { "docid": "86dd65bddeb01d4395b81cef0bc4f00e", "text": "Many people may see the development of software and hardware like different disciplines. However, there are great similarities between them that have been shown due to the appearance of extensions for general purpose programming languages for its use as hardware description languages. In this contribution, the approach proposed by the MyHDL package to use Python as an HDL is analyzed by making a comparative study. This study is based on the independent application of Verilog and Python based flows to the development of a real peripheral. The use of MyHDL has revealed to be a powerful and promising tool, not only because of the surprising results, but also because it opens new horizons towards the development of new techniques for modeling and verification, using the full power of one of the most versatile programming languages nowadays.", "title": "" }, { "docid": "2b972c01c0cac24cbbf15f8f2a3d4fa7", "text": "We present techniques for gathering data that expose errors of automatic predictive models. In certain common settings, traditional methods for evaluating predictive models tend to miss rare-but-important errors—most importantly, rare cases for which the model is confident of its prediction (but wrong). In this paper we present a system that, in a game-like setting, asks humans to identify cases that will cause the predictivemodel-based system to fail. Such techniques are valuable in discovering problematic cases that do not reveal themselves during the normal operation of the system, and may include cases that are rare but catastrophic. We describe the design of the system, including design iterations that did not quite work. In particular, the system incentivizes humans to provide examples that are difficult for the model to handle, by providing a reward proportional to the magnitude of the predictive model’s error. The humans are asked to “Beat the Machine” and find cases where the automatic model (“the Machine”) is wrong. Experiments show that the humans using Beat the Machine identify more errors than traditional techniques for discovering errors in from predictive models, and indeed, they identify many more errors where the machine is confident it is correct. Further, the cases the humans identify seem to be not simply outliers, but coherent areas missed completely by the model. 
Beat the machine identifies the “unknown unknowns.”", "title": "" }, { "docid": "8310851d5115ec570953a8c4a1757332", "text": "We present a global optimization approach for mapping color images onto geometric reconstructions. Range and color videos produced by consumer-grade RGB-D cameras suffer from noise and optical distortions, which impede accurate mapping of the acquired color data to the reconstructed geometry. Our approach addresses these sources of error by optimizing camera poses in tandem with non-rigid correction functions for all images. All parameters are optimized jointly to maximize the photometric consistency of the reconstructed mapping. We show that this optimization can be performed efficiently by an alternating optimization algorithm that interleaves analytical updates of the color map with decoupled parameter updates for all images. Experimental results demonstrate that our approach substantially improves color mapping fidelity.", "title": "" }, { "docid": "1ec52bc459957064fba3bb0feecf264d", "text": "Non-orthogonal transmission, although not entirely new to the wireless industry, is gaining more attention due to its promised throughput gain and unique capability to support a large number of simultaneous transmissions within limited resources. In this article, several key techniques for non-orthogonal transmission are discussed. The downlink technique is featured by MUST, which is being specified in 3GPP for mobile broadband services. In the uplink, grantfree schemes such as multi-user shared access and sparse code multiple access, are promising in supporting massive machine-type communication services. The multi-antenna aspect is also addressed in the context of MUST, showing that MIMO technology and non-orthogonal transmission can be used jointly to provide combined gain.", "title": "" }, { "docid": "4cb7a6a3dee9f5398e779f353d2f542c", "text": "Data mining approach was used in this paper to predict labor market needs, by implementing Naïve Bayes Classifiers, Decision Trees, and Decision Rules techniques. Naïve Bayes technique implemented by creating tables of training; the sets of these tables were generated by using four factors that affect continuity in their jobs. The training tables used to predict the classification of other (unclassified) instances, and tabulate the results of conditional and prior probabilities to test unknown instance for classification. The information obtained can classify unknown instances for employment in the labor market. In Decision Tree technique, a model was constructed from a dataset in the form of a tree, created by a process known as splitting on the value of attributes. The Decision Rules, which was constructed from Decision Trees of rules gave the best results, therefore we recommended using this method in predicting labor market. © 2013 The Authors. Published by Elsevier B.V. Selection and/or peer-review under responsibility of the organizers of the 2013 International Conference on Computational Science", "title": "" }, { "docid": "83688690678b474cd9efe0accfdb93f9", "text": "Feature selection, as a preprocessing step to machine learning, is effective in reducing dimensionality, removing irrelevant data, increasing learning accuracy, and improving result comprehensibility. However, the recent increase of dimensionality of data poses a severe challenge to many existing feature selection methods with respect to efficiency and effectiveness. 
In this work, we introduce a novel concept, predominant correlation, and propose a fast filter method which can identify relevant features as well as redundancy among relevant features without pairwise correlation analysis. The efficiency and effectiveness of our method is demonstrated through extensive comparisons with other methods using real-world data of high dimensionality.", "title": "" }, { "docid": "865c1ee7044cbb23d858706aa1af1a63", "text": "Fault analysis in solar photovoltaic (PV) arrays is a fundamental task to protect PV modules from damages and to eliminate the risks of safety hazards. This paper examines two types of unique faults found in photovoltaic (PV) array installations that have not been studied in the literature. One is a fault that occurs under low irradiance conditions. In some circumstances, fault current protection devices are unable to detect certain types of faults so that the fault may remain hidden in the PV system, even after irradiance increases. The other type of fault occurs when a string of PV modules is reversely connected, caused by inappropriate installation. This fault type brings new challenges for overcurrent protection devices because of the high rating voltage requirement. In both cases, these unique PV faults may subsequently lead to unexpected safety hazards, reduced system efficiency and reduced reliability.", "title": "" }, { "docid": "b4e5153f7592394e8743bc0fdee40dcc", "text": "This paper is focussed on the modelling and control of a hydraulically-driven biologically-inspired robotic leg. The study is part of a larger project aiming at the development of an autonomous quadruped robot (hyQ) for outdoor operations. The leg has two hydraulically-actuated degrees of freedom (DOF), the hip and knee joints. The actuation system is composed of proportional valves and asymmetric cylinders. After a brief description of the prototype leg, the paper shows the development of a comprehensive model of the leg where critical parameters have been experimentally identified. Subsequently the leg control design is presented. The core of this work is the experimental assessment of the pros and cons of single-input single-output (SISO) vs. multiple-input multiple-output (MIMO) and linear vs. nonlinear control algorithms in this application (the leg is a coupled multivariable system driven by nonlinear actuators). The control schemes developed are a conventional PID (linear SISO), a Linear Quadratic Regulator (LQR) controller (linear MIMO) and a Feedback Linearisation (FL) controller (nonlinear MIMO). LQR performs well at low frequency but its behaviour worsens at higher frequencies. FL produces the fastest response in simulation, but when implemented is sensitive to parameters uncertainty and needs to be properly modified to achieve equally good performance also in the practical implementation.", "title": "" }, { "docid": "7d25c646a8ce7aa862fba7088b8ea915", "text": "Neuro-dynamic programming (NDP for short) is a relatively new class of dynamic programming methods for control and sequential decision making under uncertainty. These methods have the potential of dealing with problems that for a long time were thought to be intractable due to either a large state space or the lack of an accurate model. They combine ideas from the fields of neural networks, artificial intelligence, cognitive science, simulation, and approximation theory. 
We will delineate the major conceptual issues, survey a number of recent developments, describe some computational experience, and address a number of open questions. We consider systems where decisions are made in stages. The outcome of each decision is not fully predictable but can be anticipated to some extent before the next decision is made. Each decision results in some immediate cost but also affects the context in which future decisions are to be made and therefore affects the cost incurred in future stages. Dynamic programming (DP for short) provides a mathematical formalization of the tradeoff between immediate and future costs. Generally, in DP formulations there is a discrete-time dynamic system whose state evolves according to given transition probabilities that depend on a decision/control u. In particular, if we are in state i and we choose decision u, we move to state j with given probability pij(u). Simultaneously with this transition, we incur a cost g(i, u, j). In comparing, however, the available decisions u, it is not enough to look at the magnitude of the cost g(i, u, j); we must also take into account how desirable the next state j is. We thus need a way to rank or rate states j. This is done by using the optimal cost (over all remaining stages) starting from state j, which is denoted by J∗(j). These costs can be shown to", "title": "" }, { "docid": "bca883795052e1c14553600f40a0046b", "text": "The SEIR model with nonlinear incidence rates in epidemiology is studied. Global stability of the endemic equilibrium is proved using a general criterion for the orbital stability of periodic orbits associated with higher-dimensional nonlinear autonomous systems as well as the theory of competitive systems of differential equations.", "title": "" }, { "docid": "f64c4946a26f401822539bdd020f4ac5", "text": "This paper reviews the concept of presence in immersive virtual environments, the sense of being there signalled by people acting and responding realistically to virtual situations and events. We argue that presence is a unique phenomenon that must be distinguished from the degree of engagement, involvement in the portrayed environment. We argue that there are three necessary conditions for presence: the (a) consistent low latency sensorimotor loop between sensory data and proprioception; (b) statistical plausibility: images must be statistically plausible in relation to the probability distribution of images over natural scenes. A constraint on this plausibility is the level of immersion; (c) behaviour-response correlations: Presence may be enhanced and maintained over time by appropriate correlations between the state and behaviour of participants and responses within the environment, correlations that show appropriate responses to the activity of the participants. We conclude with a discussion of methods for assessing whether presence occurs, and in particular recommend the approach of comparison with ground truth and give some examples of this.", "title": "" }, { "docid": "10c7b7a19197c8562ebee4ae66c1f5e8", "text": "Generative Adversarial Networks (GANs) have recently achieved impressive results for many real-world applications, and many GAN variants have emerged with improvements in sample quality and training stability. However, visualization and understanding of GANs is largely missing. How does a GAN represent our visual world internally? What causes the artifacts in GAN results? How do architectural choices affect GAN learning? 
Answering such questions could enable us to develop new insights and better models. In this work, we present an analytic framework to visualize and understand GANs at the unit-, object-, and scene-level. We first identify a group of interpretable units that are closely related to object concepts with a segmentation-based network dissection method. Then, we quantify the causal effect of interpretable units by measuring the ability of interventions to control objects in the output. Finally, we examine the contextual relationship between these units and their surrounding by inserting the discovered object concepts into new images. We show several practical applications enabled by our framework, from comparing internal representations across different layers, models, and datasets, to improving GANs by locating and removing artifact-causing units, to interactively manipulating objects in the scene. We provide open source interpretation tools to help peer researchers and practitioners better understand their GAN models∗.", "title": "" }, { "docid": "d3c3e9877695a8abb2783e685f254eef", "text": "Software systems are constantly evolving, with new versions and patches being released on a continuous basis. Unfortunately, software updates present a high risk, with many releases introducing new bugs and security vulnerabilities. \n We tackle this problem using a simple but effective multi-version based approach. Whenever a new update becomes available, instead of upgrading the software to the new version, we run the new version in parallel with the old one; by carefully coordinating their executions and selecting the behaviour of the more reliable version when they diverge, we create a more secure and dependable multi-version application. \n We implemented this technique in Mx, a system targeting Linux applications running on multi-core processors, and show that it can be applied successfully to several real applications such as Coreutils, a set of user-level UNIX applications; Lighttpd, a popular web server used by several high-traffic websites such as Wikipedia and YouTube; and Redis, an advanced key-value data structure server used by many well-known services such as GitHub and Flickr.", "title": "" }, { "docid": "c5d06fe50c16278943fe1df7ad8be888", "text": "Current main memory organizations in embedded and mobile application systems are DRAM dominated. The ever-increasing gap between today's processor and memory speeds makes the DRAM subsystem design a major aspect of computer system design. However, the limitations to DRAM scaling and other challenges like refresh provide undesired trade-offs between performance, energy and area to be made by architecture designers. Several emerging NVM options are being explored to at least partly remedy this but today it is very hard to assess the viability of these proposals because the simulations are not fully based on realistic assumptions on the NVM memory technologies and on the system architecture level. In this paper, we propose to use realistic, calibrated STT-MRAM models and a well calibrated cross-layer simulation and exploration framework, named SEAT, to better consider technologies aspects and architecture constraints. We will focus on general purpose/mobile SoC multi-core architectures. We will highlight results for a number of relevant benchmarks, representatives of numerous applications based on actual system architecture. 
The most energy efficient STT-MRAM based main memory proposal provides an average energy consumption reduction of 27% at the cost of 2x the area and the least energy efficient STT-MRAM based main memory proposal provides an average energy consumption reduction of 8% at the around the same area or lesser when compared to DRAM.", "title": "" }, { "docid": "eba545eb04c950ecd9462558c9d3da85", "text": "The ability to recognize facial expressions automatically enables novel applications in human-computer interaction and other areas. Consequently, there has been active research in this field, with several recent works utilizing Convolutional Neural Networks (CNNs) for feature extraction and inference. These works differ significantly in terms of CNN architectures and other factors. Based on the reported results alone, the performance impact of these factors is unclear. In this paper, we review the state of the art in image-based facial expression recognition using CNNs and highlight algorithmic differences and their performance impact. On this basis, we identify existing bottlenecks and consequently directions for advancing this research field. Furthermore, we demonstrate that overcoming one of these bottlenecks – the comparatively basic architectures of the CNNs utilized in this field – leads to a substantial performance increase. By forming an ensemble of modern deep CNNs, we obtain a FER2013 test accuracy of 75.2%, outperforming previous works without requiring auxiliary training data or face registration.", "title": "" }, { "docid": "4d36b2d77713a762040fd4ebc68e0d54", "text": "Diversification and fragmentation of scientific exploration brings an increasing need for integration, for example through interdisciplinary research. The field of nanoscience and nanotechnology appears to exhibit strong interdisciplinary characteristics. Our objective was to explore the structure of the field and ascertain how different research areas within this field reflect interdisciplinarity through citation patterns. The complex relations between the citing and cited articles were examined through schematic visualization. Examination of WOS categories assigned to journals shows the scatter of nano studies across a wide range of research topics. We identified four distinctive groups of categories each showing some detectable shared characteristics. Three alternative measures of similarity were employed to delineate these groups. These distinct groups enabled us to assess interdisciplinarity within the groups and relationships between the groups. Some measurable levels of interdisciplinarity exist in all groups. However, one of the groups indicated that certain categories of both citing as well as cited articles aggregate mostly in the framework of physics, chemistry, and materials. This may suggest that the nanosciences show characteristics of a distinct discipline. The similarity in citing articles is most evident inside the respective groups, though, some subgroups within larger groups are also related to each other through the similarity of cited articles.", "title": "" } ]
scidocsrr
be17afe8340361b4ca29d69c8c94b22d
Emotion Intensities in Tweets
[ { "docid": "c3525081c0f4eec01069dd4bd5ef12ab", "text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.", "title": "" } ]
[ { "docid": "0b197f6bcce309812e0300536a266788", "text": "Cross-Site Scripting (XSS) vulnerability is one of the most widespread security problems for web applications, which has been haunting the web application developers for years. Various approaches to defend against attacks (that use XSS vulnerabilities) are available today but no single approach solves all the loopholes. After investigating this area, we have been motivated to propose an efficient approach to prevent persistent XSS attack by applying pattern filtering method. In this work, along with necessary background, we present case studies to show the effectiveness of our approach.", "title": "" }, { "docid": "dc361648080cda9716abb8123e04189d", "text": "To achieve 1000-fold capacity increase in 5G wireless communications, ultradense network (UDN) is believed to be one of the key enabling technologies. Most of the previous research activities on UDNs were based very much on human-to-human communications. However, to provide ubiquitous Internet of Things services, machine-to-machine (M2M) communications will play a critical role in 5G systems. As the number of machine-oriented connections increases, it is expected that supporting M2M communications is an essential requirement in all future UDNs. In this paper, we aim to bridge the gaps between M2M communications and UDNs, which were commonly considered as two separate issues in the literature. The paper begins with a brief introduction on M2M communications and UDNs, and then will discuss the issues on the roles of M2M communications in future UDNs. We will identify different ways to implement M2M communications in the UDNs from the perspectives of layered architecture, including physical, media access control, network, and application layers. Other two important issues, i.e., security and network virtualization, will also be addressed. Before the end of this paper, we will give a summary on identified research topics for future studies.", "title": "" }, { "docid": "b7a4b6f6f3028923853649077c18dfa5", "text": "The increasing ageing population around the world and the increased risk of falling among this demographic, challenges society and technology to find better ways to mitigate the occurrence of such costly and detrimental events as falls. The most common activity associated with falls is bed transfers; therefore, the most significant high risk activity. Several technological solutions exist for bed exiting detection using a variety of sensors which are attached to the body, bed or floor. However, lack of real life performance studies, technical limitations and acceptability are still key issues. In this research, we present and evaluate a novel method for mitigating the high falls risk associated with bed exits based on using an inexpensive, privacy preserving and passive sensor enabled RFID device. Our approach is based on a classification system built upon conditional random fields that requires no preprocessing of sensorial and RF metrics data extracted from an RFID platform. We evaluated our classification algorithm and the wearability of our sensor using elderly volunteers (66-86 y.o.). The results demonstrate the validity of our approach and the performance is an improvement on previous bed exit classification studies. 
The participants of the study also overwhelmingly agreed that the sensor was indeed wearable and presented no problems.", "title": "" }, { "docid": "391c34e983c99af1cc0a06f6f1d4a6bf", "text": "Network protocol reverse engineering of botnet command and control (C&C) is a challenging task, which requires various manual steps and a significant amount of domain knowledge. Furthermore, most of today's C&C protocols are encrypted, which prevents any analysis on the traffic without first discovering the encryption algorithm and key. To address these challenges, we present an end-to-end system for automatically discovering the encryption algorithm and keys, generating a protocol specification for the C&C traffic, and crafting effective network signatures. In order to infer the encryption algorithm and key, we enhance state-of-the-art techniques to extract this information using lightweight binary analysis. In order to generate protocol specifications we infer field types purely by analyzing network traffic. We evaluate our approach on three prominent malware families: Sality, ZeroAccess and Ramnit. Our results are encouraging: the approach decrypts all three protocols, detects 97% of fields whose semantics are supported, and infers specifications that correctly align with real protocol specifications.", "title": "" }, { "docid": "228a85528befd7c21a014889d8519505", "text": "This double-blind study evaluated change in cognitive performance and functional capacity in lurasidone and quetiapine XR-treated schizophrenia patients over a 6-week, placebo-controlled study, followed by a 6-month, double-blind extension. Cognitive performance and functional capacity were assessed with the CogState computerized cognitive battery and the UPSA-B. Analyses were conducted for all subjects, as well as the subsample whose test scores met prespecified validity criteria. No statistically significant differences were found for change in the composite neurocognitive score for lurasidone (80 mg/day and 160 mg/day) groups, quetiapine XR and placebo in the full sample at week 6. For the evaluable sample (N = 267), lurasidone 160 mg was superior to both placebo and quetiapine on the neurocognitive composite, while lurasidone 80 mg, quetiapine XR, and placebo did not differ. UPSA-B scores were superior to placebo at 6 weeks for all treatments. In the double-blind extension study, analysis of the full sample showed significantly better cognitive performance in the lurasidone (40-160 mg) group compared to the quetiapine XR (200-800 mg) group at both 3 and 6 months. Cognitive and UPSA-B total scores were significantly correlated at baseline and for change over time. This is the first study to date where the investigational treatment was superior to placebo on both cognitive assessments and a functional coprimary measure at 6 weeks, as well as demonstrated superiority to an active comparator on cognitive assessments at 6 weeks and at 6 months of extension study treatment. These findings require replication, but are not due to practice effects, because of the placebo and active controls.", "title": "" }, { "docid": "676540e4b0ce65a71e86bf346f639f22", "text": "Methylation is a prevalent posttranscriptional modification of RNAs. However, whether mammalian microRNAs are methylated is unknown. Here, we show that the tRNA methyltransferase NSun2 methylates primary (pri-miR-125b), precursor (pre-miR-125b), and mature microRNA 125b (miR-125b) in vitro and in vivo. 
Methylation by NSun2 inhibits the processing of pri-miR-125b2 into pre-miR-125b2, decreases the cleavage of pre-miR-125b2 into miR-125, and attenuates the recruitment of RISC by miR-125, thereby repressing the function of miR-125b in silencing gene expression. Our results highlight the impact of miR-125b function via methylation by NSun2.", "title": "" }, { "docid": "26ad95d4ecbea507c22e429efbfeb1d1", "text": "A considerable portion of web images capture events that occur in our personal lives or social activities. In this paper, we aim to develop an effective method for recognizing events from such images. Despite the sheer amount of study on event recognition, most existing methods rely on videos and are not directly applicable to this task. Generally, events are complex phenomena that involve interactions among people and objects, and therefore analysis of event photos requires techniques that can go beyond recognizing individual objects and carry out joint reasoning based on evidences of multiple aspects. Inspired by the recent success of deep learning, we formulate a multi-layer framework to tackle this problem, which takes into account both visual appearance and the interactions among humans and objects, and combines them via semantic fusion. An important issue arising here is that humans and objects discovered by detectors are in the form of bounding boxes, and there is no straightforward way to represent their interactions and incorporate them with a deep network. We address this using a novel strategy that projects the detected instances onto multi-scale spatial maps. On a large dataset with 60, 000 images, the proposed method achieved substantial improvement over the state-of-the-art, raising the accuracy of event recognition by over 10%.", "title": "" }, { "docid": "c0d7b92c1b88a2c234eac67c5677dc4d", "text": "To appear in G Tesauro D S Touretzky and T K Leen eds Advances in Neural Information Processing Systems MIT Press Cambridge MA A straightforward approach to the curse of dimensionality in re inforcement learning and dynamic programming is to replace the lookup table with a generalizing function approximator such as a neu ral net Although this has been successful in the domain of backgam mon there is no guarantee of convergence In this paper we show that the combination of dynamic programming and function approx imation is not robust and in even very benign cases may produce an entirely wrong policy We then introduce Grow Support a new algorithm which is safe from divergence yet can still reap the bene ts of successful generalization", "title": "" }, { "docid": "5bf4a17592eca1881a93cd4930f4187d", "text": "The problem of automatically generating a computer program from some specification has been studied since the early days of AI. Recently, two competing approaches for automatic program learning have received significant attention: (1) neural program synthesis, where a neural network is conditioned on input/output (I/O) examples and learns to generate a program, and (2) neural program induction, where a neural network generates new outputs directly using a latent program representation. Here, for the first time, we directly compare both approaches on a large-scale, real-world learning task and we additionally contrast to rule-based program synthesis, which uses hand-crafted semantics to guide the program generation. 
Our neural models use a modified attention RNN to allow encoding of variable-sized sets of I/O pairs, which achieve 92% accuracy on a real-world test set, compared to the 34% accuracy of the previous best neural synthesis approach. The synthesis model also outperforms a comparable induction model on this task, but we more importantly demonstrate that the strength of each approach is highly dependent on the evaluation metric and end-user application. Finally, we show that we can train our neural models to remain very robust to the type of noise expected in real-world data (e.g., typos), while a highly-engineered rule-based system fails entirely.", "title": "" }, { "docid": "023d547ffb283a377635ad12be9cac99", "text": "Pretend play has recently been of great interest to researchers studying children's understanding of the mind. One reason for this interest is that pretense seems to require many of the same skills as mental state understanding, and these skills seem to emerge precociously in pretense. Pretend play might be a zone of proximal development, an activity in which children operate at a cognitive level higher than they operate at in nonpretense situations. Alternatively, pretend play might be fool's gold, in that it might appear to be more sophisticated than it really is. This paper first discusses what pretend play is. It then investigates whether pretend play is an area of advanced understanding with reference to 3 skills that are implicated in both pretend play and a theory of mind: the ability to represent one object as two things at once, the ability to see one object as representing another, and the ability to represent mental representations.", "title": "" }, { "docid": "e1f5deb5b571e8b9ece9ae7850d686dd", "text": "A collection of Curbside Consultation published in AFP is available at http://www.aafp.org/afp/ curbside. Case Scenario A 16-year-old girl and her parents presented to my office for her wellness evaluation. The patient has generalized anxiety disorder with comorbid major depression, for which she has been prescribed a serotonergic antidepressant. She is a high school student, lives with her parents, and is currently preparing college applications. She has occasional headaches and disturbed sleep. She takes daily three-mile walks and plays on her school tennis team, both of which help relieve her anxiety symptoms. The therapist she sees once a week has suggested enrollment in a therapeutic foster dog walking program to help further relieve her anxiety symptoms. Would animal-assisted therapy be helpful as a part of anxiety and depression management in this patient? Would such treatment be a helpful approach for other teenaged patients in my practice with similar diagnoses?", "title": "" }, { "docid": "4b0b59d137fad3c6a07cb15ac916de3c", "text": "We describe a novel method for blind, single-image spectral super-resolution. While conventional superresolution aims to increase the spatial resolution of an input image, our goal is to spectrally enhance the input, i.e., generate an image with the same spatial resolution, but a greatly increased number of narrow (hyper-spectral) wavelength bands. Just like the spatial statistics of natural images has rich structure, which one can exploit as prior to predict high-frequency content from a low resolution image, the same is also true in the spectral domain: the materials and lighting conditions of the observed world induce structure in the spectrum of wavelengths observed at a given pixel. 
Surprisingly, very little work exists that attempts to use this diagnosis and achieve blind spectral super-resolution from single images. We start from the conjecture that, just like in the spatial domain, we can learn the statistics of natural image spectra, and with its help generate finely resolved hyper-spectral images from RGB input. Technically, we follow the current best practice and implement a convolutional neural network (CNN), which is trained to carry out the end-to-end mapping from an entire RGB image to the corresponding hyperspectral image of equal size. We demonstrate spectral super-resolution both for conventional RGB images and for multi-spectral satellite data, outperforming the state-of-the-art.", "title": "" }, { "docid": "77b78ec70f390289424cade3850fc098", "text": "As the primary barrier between an organism and its environment, epithelial cells are well-positioned to regulate tolerance while preserving immunity against pathogens. Class II major histocompatibility complex molecules (MHC class II) are highly expressed on the surface of epithelial cells (ECs) in both the lung and intestine, although the functional consequences of this expression are not fully understood. Here, we summarize current information regarding the interactions that regulate the expression of EC MHC class II in health and disease. We then evaluate the potential role of EC as non-professional antigen presenting cells. Finally, we explore future areas of study and the potential contribution of epithelial surfaces to gut-lung crosstalk.", "title": "" }, { "docid": "903dc946b338c178634fcf9f14e1b1eb", "text": "Detecting system anomalies is an important problem in many fields such as security, fault management, and industrial optimization. Recently, invariant network has shown to be powerful in characterizing complex system behaviours. In the invariant network, a node represents a system component and an edge indicates a stable, significant interaction between two components. Structures and evolutions of the invariance network, in particular the vanishing correlations, can shed important light on locating causal anomalies and performing diagnosis. However, existing approaches to detect causal anomalies with the invariant network often use the percentage of vanishing correlations to rank possible casual components, which have several limitations: (1) fault propagation in the network is ignored, (2) the root casual anomalies may not always be the nodes with a high percentage of vanishing correlations, (3) temporal patterns of vanishing correlations are not exploited for robust detection, and (4) prior knowledge on anomalous nodes are not exploited for (semi-)supervised detection. To address these limitations, in this article we propose a network diffusion based framework to identify significant causal anomalies and rank them. Our approach can effectively model fault propagation over the entire invariant network and can perform joint inference on both the structural and the time-evolving broken invariance patterns. As a result, it can locate high-confidence anomalies that are truly responsible for the vanishing correlations and can compensate for unstructured measurement noise in the system. Moreover, when the prior knowledge on the anomalous status of some nodes are available at certain time points, our approach is able to leverage them to further enhance the anomaly inference accuracy. 
When the prior knowledge is noisy, our approach also automatically learns reliable information and reduces impacts from noises. By performing extensive experiments on synthetic datasets, bank information system datasets, and coal plant cyber-physical system datasets, we demonstrate the effectiveness of our approach.", "title": "" }, { "docid": "074d4a552c82511d942a58b93d51c38a", "text": "This is a survey of neural network applications in the real-world scenario. It provides a taxonomy of artificial neural networks (ANNs) and furnish the reader with knowledge of current and emerging trends in ANN applications research and area of focus for researchers. Additionally, the study presents ANN application challenges, contributions, compare performances and critiques methods. The study covers many applications of ANN techniques in various disciplines which include computing, science, engineering, medicine, environmental, agriculture, mining, technology, climate, business, arts, and nanotechnology, etc. The study assesses ANN contributions, compare performances and critiques methods. The study found that neural-network models such as feedforward and feedback propagation artificial neural networks are performing better in its application to human problems. Therefore, we proposed feedforward and feedback propagation ANN models for research focus based on data analysis factors like accuracy, processing speed, latency, fault tolerance, volume, scalability, convergence, and performance. Moreover, we recommend that instead of applying a single method, future research can focus on combining ANN models into one network-wide application.", "title": "" }, { "docid": "69fb72937745829046379800649b4f6f", "text": "For a plane wave incident on either a Luneburg lens or a modified Luneburg lens, the magnitude and phase of the transmitted electric field are calculated as a function of the scattering angle in the context of ray theory. It is found that the ray trajectory and the scattered intensity are not uniformly convergent in the vicinity of edge ray incidence on a Luneburg lens, which corresponds to the semiclassical phenomenon of orbiting. In addition, it is found that rays transmitted through a large-focal-length modified Luneburg lens participate in a far-zone rainbow, the details of which are exactly analytically soluble in ray theory. Using these results, the Airy theory of the modified Luneburg lens is derived and compared with the Airy theory of the rainbows of a homogeneous sphere.", "title": "" }, { "docid": "888bb64b35edc7c4a44012b3d32e70e8", "text": "We present the Sketchy database, the first large-scale collection of sketch-photo pairs. We ask crowd workers to sketch particular photographic objects sampled from 125 categories and acquire 75,471 sketches of 12,500 objects. The Sketchy database gives us fine-grained associations between particular photos and sketches, and we use this to train cross-domain convolutional networks which embed sketches and photographs in a common feature space. We use our database as a benchmark for fine-grained retrieval and show that our learned representation significantly outperforms both hand-crafted features as well as deep features trained for sketch or photo classification. Beyond image retrieval, we believe the Sketchy database opens up new opportunities for sketch and image understanding and synthesis.", "title": "" }, { "docid": "956cba3ab1f500fbb2d3d7a0723a0f86", "text": "Decision guidance models are a means for design space exploration and documentation. 
In this paper, we present decision guidance models for microservice monitoring. The selection of a monitoring system is an essential part of each microservice architecture due to the high level of dynamic structure and behavior of such a system. We present decision guidance models for generation of monitoring data, data management, processing monitoring data, and for disseminating and presenting monitoring information to stakeholders. The presented models have been derived from literature, our previous work on monitoring for distributed systems and microservice-based systems, and by analyzing existing monitoring systems. The developed models have been used for discussing monitoring requirements for a microservice-based system with a company in the process automation domain. They are part of a larger effort for developing decision guidance models for microservice architecture in general.", "title": "" }, { "docid": "c7d1fe6e9fa7acc54da8a8ab6030e48f", "text": "An ongoing challenge in electrical engineering is the design of antennas whose size is small compared to the broadcast wavelength λ. One difficulty is that the radiation resistance of a small antenna is small compared to that of the typical transmission lines that feed the antenna, so that much of the power in the feed line is reflected off the antenna rather than radiated unless a matching network is used at the antenna terminals (with a large inductance for a small dipole antenna and a large capacitance for a small loop antenna). The radiation resistance of an antenna that emits dipole radiation is proportional to the square of the peak (electric or magnetic) dipole moment of the antenna. This dipole moment is roughly the product of the peak charge times the length of the antenna in the case of a linear (electric) antenna, and is the product of the peak current times the area of the antenna in the case of a loop (magnetic) antenna. Hence, it is hard to increase the radiation resistance of small linear or loop antennas by altering their shapes. One suggestion for a small antenna is the so-called “crossed-field” antenna [2]. Its proponents are not very explicit as to the design of this antenna, so this problem is based on a conjecture as to its motivation. It is well known that in the far zone of a dipole antenna the electric and magnetic fields have equal magnitudes (in Gaussian units), and their directions are at right angles to each other and to the direction of propagation of the radiation. Furthermore, the far zone electric and magnetic fields are in phase. The argument is, I believe, that it is desirable if these conditions could also be met in the near zone of the antenna. The proponents appear to argue that in the near zone the magnetic field B is in phase with the current in a simple, small antenna, while the electric field E is in phase with the charge, but the charge and current have a 90◦ phase difference. Hence, they imply, the electric and magnetic fields are 90◦ out of phase in the near zone, so that the radiation (which is proportional to E× B) is weak. The concept of the “crossed-field” antenna seems to be based on the use of two small antennas driven 90◦ out of phase. The expectation is that the electric field of one of the A center-fed linear dipole antenna of total length l λ has radiation resistance Rlinear = (l/λ) 197 Ω, while a circular loop antenna of diameter d λ has Rloop = (d/λ) 1948 Ω. For example, if l = d = 0.1λ then Rlinear = 2 Ω and Rloop = 0.2 Ω. 
That there is little advantage to so-called small fractal antennas is explored in [1]. A variant based on combining a small electric dipole antenna with a small magnetic dipole (loop) antenna has been proposed by [3].", "title": "" }, { "docid": "721b6d09f51b268a30d8cf93b19ca7f4", "text": "Permanent-magnet (PM) motors with both magnets and armature windings on the stator (stator PM motors) have attracted considerable attention due to their simple structure, robust configuration, high power density, easy heat dissipation, and suitability for high-speed operations. However, current PM motors in industrial, residential, and automotive applications are still dominated by interior permanent-magnet motors (IPM) because the claimed advantages of stator PM motors have not been fully investigated and validated. Hence, this paper will perform a comparative study between a stator-PM motor, namely, a flux switching PM motor (FSPM), and an IPM which has been used in the 2004 Prius hybrid electric vehicle (HEV). For a fair comparison, the two motors are designed at the same phase current, current density, and dimensions including the stator outer diameter and stack length. First, the Prius-IPM is investigated by means of finite-element method (FEM). The FEM results are then verified by experimental results to confirm the validity of the methods used in this study. Second, the FSPM design is optimized and investigated based on the same method used for the Prius-IPM. Third, the electromagnetic performance and the material mass of the two motors are compared. It is concluded that FSPM has more sinusoidal back-EMF hence is more suitable for BLAC control. It also offers the advantage of smaller torque ripple and better mechanical integrity for safer and smoother operations. But the FSPM has disadvantages such as low magnet utilization ratio and high cost. It may not be able to compete with IPM in automotive and other applications where cost constraints are tight.", "title": "" } ]
scidocsrr
f37802285fe1c5aa36f12e3d75f9a9ce
Active sample selection in scalar fields exhibiting non-stationary noise with parametric heteroscedastic Gaussian process regression
[ { "docid": "444e84c8c46c066b0a78ad4a743a9c78", "text": "This paper presents a novel Gaussian process (GP) approach to regression with input-dependent noise rates. We follow Goldberg et al.'s approach and model the noise variance using a second GP in addition to the GP governing the noise-free output value. In contrast to Goldberg et al., however, we do not use a Markov chain Monte Carlo method to approximate the posterior noise variance but a most likely noise approach. The resulting model is easy to implement and can directly be used in combination with various existing extensions of the standard GPs such as sparse approximations. Extensive experiments on both synthetic and real-world data, including a challenging perception problem in robotics, show the effectiveness of most likely heteroscedastic GP regression.", "title": "" }, { "docid": "528d0d198bb092ece6f824d4e1912bcd", "text": "Monitoring marine ecosystems is challenging due to the dynamic and unpredictable nature of environmental phenomena. In this work we survey a series of techniques used in information gathering that can be used to increase experts' understanding of marine ecosystems through dynamic monitoring. To achieve this, an underwater glider simulator is constructed, and four different path planning algorithms are investigated: Boustrophendon paths, a gradient based approach, a Level-Sets method, and Sequential Bayesian Optimization. Each planner attempts to maximize the time the glider spends in an area where ocean variables are above a threshold value of interest. To emulate marine ecosystem sensor data, ocean temperatures are used. The planners are simulated 50 times each at random starting times and locations. After validation through simulation, we show that informed decision making improves performance, but more accurate prediction of ocean conditions would be necessary to benefit from long horizon lookahead planning.", "title": "" } ]
[ { "docid": "3cfa80815c0e4835e4e081348717459a", "text": "β-defensins are small cationic peptides, with potent immunoregulatory and antimicrobial activity which are produced constitutively and inducibly by eukaryotic cells. This study profiles the expression of a cluster of 19 novel defensin genes which spans 320 kb on chromosome 13 in Bos taurus. It also assesses the genetic variation in these genes between two divergently selected cattle breeds. Using quantitative real-time PCR (qRT-PCR), all 19 genes in this cluster were shown to be expressed in the male genital tract and 9 in the female genital tract, in a region-specific manner. These genes were sequenced in Norwegian Red (NR) and Holstein-Friesian (HF) cattle for population genetic analysis. Of the 17 novel single nucleotide polymorphisms (SNPs) identified, 7 were non-synonymous, 6 synonymous and 4 outside the protein coding region. Significant frequency differences in SNPs in bovine β-defensins (BBD) 115, 117, 121, and 122 were detected between the two breeds, which was also reflected at the haplotype level (P < 0.05). There was clear segregation of the haplotypes into two blocks on chromosome 13 in both breeds, presumably due to historical recombination. This study documents genetic variation in this β-defensin gene cluster between Norwegian Red and Holstein-Friesian cattle which may result from divergent selection for production and fertility traits in these two breeds. Regional expression in the epididymis and fallopian tube suggests a potential reproductive-immunobiology role for these genes in cattle.", "title": "" }, { "docid": "f81cd7e1cfbfc15992fba9368c1df30b", "text": "The most challenging issue of conventional Time Amplifiers (TAs) is their limited Dynamic Range (DR). This paper presents a mathematical analysis to clarify principle of operation of conventional 2× TA's. The mathematical derivations release strength reduction of the current sources of the TA is the simplest way to increase DR. Besides, a new technique is presented to expand the Dynamic Range (DR) of conventional 2× TAs. Proposed technique employs current subtraction in place of changing strength of current sources using conventional gain compensation methods, which results in more stable gain over a wider DR. The TA is simulated using Spectre-rf in TSMC 0.18um COMS technology. DR of the 2× TA is expanded to 300ps only with 9% gain error while it consumes only 28uW from a 1.2V supply voltage.", "title": "" }, { "docid": "969a8e447fb70d22a7cbabe7fc47a9c9", "text": "A novel multi-level AC six-phase motor drive is developed in this paper. The scheme is based on three conventional 2-level three-phase voltage source inverters (VSIs) supplying the open-end windings of a dual three-phase motor (six-phase induction machine). The proposed inverter is capable of supply the machine with multi-level voltage waveforms. The developed system is compared with the conventional solution and it is demonstrated that the drive system permits to reduce the harmonic distortion of the machine currents, to reduce the total semiconductor losses and to decrease the power processed by converter switches. The system model and the Pulse-Width Modulation (PWM) strategy are presented. 
The experimental verification was obtained by using IGBTs with dedicated drives and a digital signal processor (DSP) with plug-in boards and sensors.", "title": "" }, { "docid": "d9a9339672121fb6c3baeb51f11bfcd8", "text": "The VISION (video indexing for searching over networks) digital video library system has been developed in our laboratory as a testbed for evaluating automatic and comprehensive mechanisms for video archive creation and content-based search, ®ltering and retrieval of video over local and wide area networks. In order to provide access to video footage within seconds of broadcast, we have developed a new pipelined digital video processing architecture which is capable of digitizing, processing, indexing and compressing video in real time on an inexpensive general purpose computer. These videos were automatically partitioned into short scenes using video, audio and closed-caption information. The resulting scenes are indexed based on their captions and stored in a multimedia database. A clientserver-based graphical user interface was developed to enable users to remotely search this archive and view selected video segments over networks of di€erent bandwidths. Additionally, VISION classi®es the incoming videos with respect to a taxonomy of categories and will selectively send users videos which match their individual pro®les. # 1999 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "7e4a222322346abc281d72534902d707", "text": "Humic substances (HS) have been widely recognized as a plant growth promoter mainly by changes on root architecture and growth dynamics, which result in increased root size, branching and/or greater density of root hair with larger surface area. Stimulation of the H+-ATPase activity in cell membrane suggests that modifications brought about by HS are not only restricted to root structure, but are also extended to the major biochemical pathways since the driving force for most nutrient uptake is the electrochemical gradient across the plasma membrane. Changes on root exudation profile, as well as primary and secondary metabolism were also observed, though strongly dependent on environment conditions, type of plant and its ontogeny. Proteomics and genomic approaches with diverse plant species subjected to HS treatment had often shown controversial patterns of protein and gene expression. This is a clear indication that HS effects of plants are complex and involve non-linear, cross-interrelated and dynamic processes that need be treated with an interdisciplinary view. Being the humic associations recalcitrant to microbiological attack, their use as vehicle to introduce beneficial selected microorganisms to crops has been proposed. This represents a perspective for a sort of new biofertilizer designed for a sustainable agriculture, whereby plants treated with HS become more susceptible to interact with bioinoculants, while HS may concomitantly modify the structure/activity of the microbial community in the rhizosphere compartment. An enhanced knowledge of the effects on plants physiology and biochemistry and interaction with rhizosphere and endophytic microbes should lead to achieve increased crop productivity through a better use of HS inputs in Agriculture.", "title": "" }, { "docid": "cefabe1b4193483d258739674b53f773", "text": "This paper describes design and development of omnidirectional magnetic climbing robots with high maneuverability for inspection of ferromagnetic 3D human made structures. 
The main focus of this article is design, analysis and implementation of magnetic omnidirectional wheels for climbing robots. We discuss the effect of the associated problems of such wheels, e.g. vibration, on climbing robots. This paper also describes the evolution of magnetic omnidirectional wheels throughout the design and development of several solutions, resulting in lighter and smaller wheels which have less vibration and adapt better to smaller radius structures. These wheels are installed on a chassis which adapts passively to flat and curved structures, enabling the robot to climb and navigate on such structures.", "title": "" }, { "docid": "1ebdcfe9c477e6a29bfce1ddeea960aa", "text": "Bitcoin—a cryptocurrency built on blockchain technology—was the first currency not controlled by a single entity.1 Initially known to a few nerds and criminals,2 bitcoin is now involved in hundreds of thousands of transactions daily. Bitcoin has achieved values of more than US$15,000 per coin (at the end of 2017), and this rising value has attracted attention. For some, bitcoin is digital fool’s gold. For others, its underlying blockchain technology heralds the dawn of a new digital era. Both views could be right. The fortunes of cryptocurrencies don’t define blockchain. Indeed, the biggest effects of blockchain might lie beyond bitcoin, cryptocurrencies, or even the economy. Of course, the technical questions about blockchain have not all been answered. We still struggle to overcome the high levels of processing intensity and energy use. These questions will no doubt be confronted over time. If the technology fails, the future of blockchain will be different. In this article, I’ll assume technical challenges will be solved, and although I’ll cover some technical issues, these aren’t the main focus of this paper. In a 2015 article, “The Trust Machine,” it was argued that the biggest effects of blockchain are on trust.1 The article referred to public trust in economic institutions, that is, that such organizations and intermediaries will act as expected. When they don’t, trust deteriorates. Trust in economic institutions hasn’t recovered from the recession of 2008.3 Technology can exacerbate distrust: online trades with distant counterparties can make it hard to settle disputes face to face. Trusted intermediaries can be hard to find, and that’s where blockchain can play a part. Permanent record-keeping that can be sequentially updated but not erased creates visible footprints of all activities conducted on the chain. This reduces the uncertainty of alternative facts or truths, thus creating the “trust machine” The Economist describes. As trust changes, so too does governance.4 Vitalik Buterin of the Ethereum blockchain platform calls blockchain “a magic computer” to which anyone can upload self-executing programs.5 All states of every Beyond Bitcoin: The Rise of Blockchain World", "title": "" }, { "docid": "061c8e8e9d6a360c36158193afee5276", "text": "Distribution transformers are one of the most important equipment in power network. Because of, the large number of transformers distributed over a wide area in power electric systems, the data acquisition and condition monitoring is a important issue. This paper presents design and implementation of a mobile embedded system and a novel software to monitor and diagnose condition of transformers, by record key operation indictors of a distribution transformer like load currents, transformer oil, ambient temperatures and voltage of three phases. 
The proposed on-line monitoring system integrates a Global Service Mobile (GSM) Modem, with stand alone single chip microcontroller and sensor packages. Data of operation condition of transformer receives in form of SMS (Short Message Service) and will be save in computer server. Using the suggested online monitoring system will help utility operators to keep transformers in service for longer of time.", "title": "" }, { "docid": "3e2df9d6ed3cad12fcfda19d62a0b42e", "text": "We present a conceptually simple, flexible, and general framework for few-shot learning, where a classifier must learn to recognise new classes given only few examples from each. Our method, called the Relation Network (RN), is trained end-to-end from scratch. During meta-learning, it learns to learn a deep distance metric to compare a small number of images within episodes, each of which is designed to simulate the few-shot setting. Once trained, a RN is able to classify images of new classes by computing relation scores between query images and the few examples of each new class without further updating the network. Besides providing improved performance on few-shot learning, our framework is easily extended to zero-shot learning. Extensive experiments on five benchmarks demonstrate that our simple approach provides a unified and effective approach for both of these two tasks.", "title": "" }, { "docid": "f0da127d64aa6e9c87d4af704f049d07", "text": "The introduction of the blue-noise spectra-high-frequency white noise with minimal energy at low frequencies-has had a profound impact on digital halftoning for binary display devices, such as inkjet printers, because it represents an optimal distribution of black and white pixels producing the illusion of a given shade of gray. The blue-noise model, however, does not directly translate to printing with multiple ink intensities. New multilevel printing and display technologies require the development of corresponding quantization algorithms for continuous tone images, namely multitoning. In order to define an optimal distribution of multitone pixels, this paper develops the theory and design of multitone, blue-noise dithering. Here, arbitrary multitone dot patterns are modeled as a layered superposition of stack-constrained binary patterns. Multitone blue-noise exhibits minimum energy at low frequencies and a staircase-like, ascending, spectral pattern at higher frequencies. The optimum spectral profile is described by a set of principal frequencies and amplitudes whose calculation requires the definition of a spectral coherence structure governing the interaction between patterns of dots of different intensities. Efficient algorithms for the generation of multitone, blue-noise dither patterns are also introduced.", "title": "" }, { "docid": "79b91aae9a2911e48026f857e88149f4", "text": "Fine-grained visual recognition is challenging because it highly relies on the modeling of various semantic parts and fine-grained feature learning. Bilinear pooling based models have been shown to be effective at fine-grained recognition, while most previous approaches neglect the fact that inter-layer part feature interaction and fine-grained feature learning are mutually correlated and can reinforce each other. In this paper, we present a novel model to address these issues. First, a crosslayer bilinear pooling approach is proposed to capture the inter-layer part feature relations, which results in superior performance compared with other bilinear pooling based approaches. 
Second, we propose a novel hierarchical bilinear pooling framework to integrate multiple cross-layer bilinear features to enhance their representation capability. Our formulation is intuitive, efficient and achieves state-of-the-art results on the widely used fine-grained recognition datasets.", "title": "" }, { "docid": "8756ef13409ae696ffaf034c873fdaf6", "text": "This paper addresses a data-driven prognostics method for the estimation of the Remaining Useful Life (RUL) and the associated confidence value of bearings. The proposed method is based on the utilization of the Wavelet Packet Decomposition (WPD) technique, and the Mixture of Gaussians Hidden Markov Models (MoG-HMM). The method relies on two phases: an off-line phase, and an on-line phase. During the first phase, the raw data provided by the sensors are first processed to extract features in the form of WPD coefficients. The extracted features are then fed to dedicated learning algorithms to estimate the parameters of a corresponding MoG-HMM, which best fits the degradation phenomenon. The generated model is exploited during the second phase to continuously assess the current health state of the physical component, and to estimate its RUL value with the associated confidence. The developed method is tested on benchmark data taken from the “NASA prognostics data repository” related to several experiments of failures on bearings done under different operating conditions. Furthermore, the method is compared to traditional time-feature prognostics and simulation results are given at the end of the paper. The results of the developed prognostics method, particularly the estimation of the RUL, can help improving the availability, reliability, and security while reducing the maintenance costs. Indeed, the RUL and associated confidence value are relevant information which can be used to take appropriate maintenance and exploitation decisions. In practice, this information may help the maintainers to prepare the necessary material and human resources before the occurrence of a failure. Thus, the traditional maintenance policies involving corrective and preventive maintenance can be replaced by condition based maintenance.", "title": "" }, { "docid": "fb5a38c1dbbc7416f9b15ee19be9cc06", "text": "This study uses a body motion interactive game developed in Scratch 2.0 to enhance the body strength of children with disabilities. Scratch 2.0, using an augmented-reality function on a program platform, creates real world and virtual reality displays at the same time. This study uses a webcam integration that tracks movements and allows participants to interact physically with the project, to enhance the motivation of children with developmental disabilities to perform physical activities. This study follows a single-case research using an ABAB structure, in which A is the baseline and B is the intervention. The experimental period was 2 months. The experimental results demonstrated that the scores for 3 children with developmental disabilities increased considerably during the intervention phrases. The developmental applications of these results are also discussed.", "title": "" }, { "docid": "c313f49d5dd8b553b0638696b6d4482a", "text": "Artificial Bee Colony Algorithm (ABC) is nature-inspired metaheuristic, which imitates the foraging behavior of bees. ABC as a stochastic technique is easy to implement, has fewer control parameters, and could easily be modify and hybridized with other metaheuristic algorithms. 
Due to its successful implementation, several researchers in the optimization and artificial intelligence domains have adopted it to be the main focus of their research work. Since 2005, several related works have appeared to enhance the performance of the standard ABC in the literature, to meet up with challenges of recent research problems being encountered. Interestingly, ABC has been tailored successfully, to solve a wide variety of discrete and continuous optimization problems. Some other works have modified and hybridized ABC to other algorithms, to further enhance the structure of its framework. In this review paper, we provide a thorough and extensive overview of most research work focusing on the application of ABC, with the expectation that it would serve as a reference material to both old and new, incoming researchers to the field, to support their understanding of current trends and assist their future research prospects and directions. The advantages, applications and drawbacks of the newly developed ABC hybrids are highlighted, critically analyzed and discussed accordingly.", "title": "" }, { "docid": "0659c4f6cd4a6d8ab35dd7dba6c0974e", "text": "Purpose – The purpose of this paper is to examine an integrated model of factors affecting attitudes toward online shopping in Jordan. The paper introduces an integrated model of the roles of perceived website reputation, relative advantage, perceived website image, and trust that affect attitudes toward online shopping. Design/methodology/approach – A structured and self-administered online survey was employed targeting online shoppers of a reputable online retailer in Jordan; MarkaVIP. A sample of 273 of online shoppers was involved in the online survey. A series of exploratory and confirmatory factor analyses were used to assess the research constructs, unidimensionality, validity, and composite reliability (CR). Structural path model analysis was also used to test the proposed research model and hypotheses. Findings – The empirical findings of this study indicate that perceived website reputation, relative advantage, perceived website image, and trust have directly and indirectly affected consumers’ attitudes toward online shopping. Online consumers’ shopping attitudes are mainly affected by perceived relative advantage and trust. Trust is a product of relative advantage and that the later is a function of perceived website reputation. Relative advantage and perceived website reputation are key predictors of perceived website image. Perceived website image was found to be a direct predictor of trust. Also, the authors found that 26 percent of variation in online shopping attitudes was directly caused by relative advantage, trust, and perceived website image. Research limitations/implications – The research examined online consumers’ attitudes toward one website only therefore the generalizability of the research finding is limited to the local Jordanian website; MarkaVIP. Future research is encouraged to conduct comparative studies between local websites and international ones, e.g., Amazon and e-bay in order to shed lights on consumers’ attitudes toward both websites. The findings are limited to online shoppers in Jordan. A fruitful area of research is to conduct a comparative analysis between online and offline attitudes toward online shopping behavior. Also, replications of the current study’s model in different countries would most likely strengthen and validate its findings. 
The design of the study is quantitative using an online survey to measure online consumers’ attitudes through a cross-sectional design. Future research is encouraged to use qualitative research design and methodology to provide a deeper understanding of consumers’ attitudes and behaviors toward online and offline shopping in Jordan and elsewhere. Practical implications – The paper supports the importance of perceived website reputation, relative advantage, trust, and perceived web image as keys drivers of attitudes toward online shopping. It further underlines the importance of relative advantage and trust as major contributors to building positive attitudes toward online shopping. In developing countries (e.g. Jordan) where individuals are generally described as risk averse, the level of trust is critical in determining the attitude of individuals toward online shopping. Moreover and given the modest economic situation in Jordan, relative advantage is another significant factor affecting consumers’ attitudes toward online shopping. Indeed, if online shopping would not add a significant value and benefits to consumers, they would have negative attitude toward this technology. This is at the heart of marketing theory and relationship marketing practice. Further, relative advantage is a key predictor of both perceived Business Process Management", "title": "" }, { "docid": "96be7a58f4aec960e2ad2273dea26adb", "text": "Because time series are a ubiquitous and increasingly prevalent type of data, there has been much research effort devoted to time series data mining recently. As with all data mining problems, the key to effective and scalable algorithms is choosing the right representation of the data. Many high level representations of time series have been proposed for data mining. In this work, we introduce a new technique based on a bit level approximation of the data. The representation has several important advantages over existing techniques. One unique advantage is that it allows raw data to be directly compared to the reduced representation, while still guaranteeing lower bounds to Euclidean distance. This fact can be exploited to produce faster exact algorithms for similarly search. In addition, we demonstrate that our new representation allows time series clustering to scale to much larger datasets.", "title": "" }, { "docid": "0acf9ef6e025805a76279d1c6c6c55e7", "text": "Android mobile devices are enjoying a lion's market share in smartphones and mobile devices. This also attracts malware writers to target the Android platform. Recently, we have discovered a new Android malware distribution channel: releasing malicious firmwares with pre-installed malware to the wild. This poses significant risk since users of mobile devices cannot change the content of the malicious firmwares. Furthermore, pre-installed applications have \" more permissions\" (i.e., silent installation) than other legitimate mobile apps, so they can download more malware or access users' confidential information. To understand and address this new form of malware distribution channel, we design and implement \"DroidRay\": a security evaluation system for customized Android firmwares. DroidRay uses both static and dynamic analyses to evaluate the firmware security on both the application and system levels. To understand the impact of this new malware distribution channel, we analyze 250 Android firmwares and 24,009 pre-installed applications. 
We reveal how the malicious firmware and pre-installed malware are injected, and discovered 1,947 (8.1%) pre-installed applications have signature vulnerability and 19 (7.6%) firmwares contain pre-installed malware. In addition, 142 (56.8%) firmwares have the default signature vulnerability, five (2.0%) firmwares contain malicious hosts file, at most 40 (16.0%) firmwares have the native level privilege escalation vulnerability and at least 249 (99.6%) firmwares have the Java level privilege escalation vulnerability. Lastly, we investigate a real-world case of a pre-installed zero-day malware known as CEPlugnew, which involves 348,018 infected Android smartphones, and we show its degree and geographical penetration. This shows the significance of this new malware distribution channel, and DroidRay is an effective tool to combat this new form of malware spreading.", "title": "" }, { "docid": "cab97e23b7aa291709ecf18e29f580cf", "text": "Recent findings show that coding genes are not the only targets that miRNAs interact with. In fact, there is a pool of different RNAs competing with each other to attract miRNAs for interactions, thus acting as competing endogenous RNAs (ceRNAs). The ceRNAs indirectly regulate each other via the titration mechanism, i.e. the increasing concentration of a ceRNA will decrease the number of miRNAs that are available for interacting with other targets. The cross-talks between ceRNAs, i.e. their interactions mediated by miRNAs, have been identified as the drivers in many disease conditions, including cancers. In recent years, some computational methods have emerged for identifying ceRNA-ceRNA interactions. However, there remain great challenges and opportunities for developing computational methods to provide new insights into ceRNA regulatory mechanisms.In this paper, we review the publically available databases of ceRNA-ceRNA interactions and the computational methods for identifying ceRNA-ceRNA interactions (also known as miRNA sponge interactions). We also conduct a comparison study of the methods with a breast cancer dataset. Our aim is to provide a current snapshot of the advances of the computational methods in identifying miRNA sponge interactions and to discuss the remaining challenges.", "title": "" }, { "docid": "748926afd2efcae529a58fbfa3996884", "text": "The purpose of this research was to investigate preservice teachers’ perceptions about using m-phones and laptops in education as mobile learning tools. A total of 1087 preservice teachers participated in the study. The results indicated that preservice teachers perceived laptops potentially stronger than m-phones as m-learning tools. In terms of limitations the situation was balanced for laptops and m-phones. Generally, the attitudes towards using laptops in education were not exceedingly positive but significantly more positive than m-phones. It was also found that such variables as program/department, grade, gender and possessing a laptop are neutral in causing a practically significant difference in preservice teachers’ views. The results imply an urgent need to grow awareness among participating student teachers towards the concept of m-learning, especially m-learning through m-phones. Introduction The world is becoming a mobigital virtual space where people can learn and teach digitally anywhere and anytime. 
Today, when timely access to information is vital, mobile devices such as cellular phones, smartphones, mp3 and mp4 players, iPods, digital cameras, data-travelers, personal digital assistance devices (PDAs), netbooks, laptops, tablets, iPads, e-readers such as the Kindle, Nook, etc have spread very rapidly and become common (El-Hussein & Cronje, 2010; Franklin, 2011; Kalinic, Arsovski, Stefanovic, Arsovski & Rankovic, 2011). Mobile devices are especially very popular among young population (Kalinic et al, 2011), particularly among university students (Cheon, Lee, Crooks & Song, 2012; Park, Nam & Cha, 2012). Thus, the idea of learning through mobile devices has gradually become a trend in the field of digital learning (Jeng, Wu, Huang, Tan & Yang, 2010). This is because learning with mobile devices promises “new opportunities and could improve the learning process” (Kalinic et al, 2011, p. 1345) and learning with mobile devices can help achieving educational goals if used through appropriate learning strategies (Jeng et al, 2010). As a matter of fact, from a technological point of view, mobile devices are getting more capable of performing all of the functions necessary in learning design (El-Hussein & Cronje, 2010). This and similar ideas have brought about the concept of mobile learning or m-learning. British Journal of Educational Technology Vol 45 No 4 2014 606–618 doi:10.1111/bjet.12064 © 2013 British Educational Research Association Although mobile learning applications are at their early days, there inevitably emerges a natural pressure by students on educators to integrate m-learning (Franklin, 2011) and so a great deal of attention has been drawn in these applications in the USA, Europe and Asia (Wang & Shen, 2012). Several universities including University of Glasgow, University of Sussex and University of Regensburg have been trying to explore and include the concept of m-learning in their learning systems (Kalinic et al, 2011). Yet, the success of m-learning integration requires some degree of awareness and positive attitudes by students towards m-learning. In this respect, in-service or preservice teachers’ perceptions about m-learning become more of an issue, since their attitudes are decisive in successful integration of m-learning (Cheon et al, 2012). Then it becomes critical whether the teachers, in-service or preservice, have favorable perceptions and attitudinal representations regarding m-learning. Theoretical framework M-learning M-learning has a recent history. When developed as the next phase of e-learning in early 2000s (Peng, Su, Chou & Tsai, 2009), its potential for education could not be envisaged (Attewell, 2005). However, recent developments in mobile and wireless technologies facilitated the departure from traditional learning models with time and space constraints, replacing them with Practitioner Notes What is already known about this topic • Mobile devices are very popular among young population, especially among university students. • Though it has a recent history, m-learning (ie, learning through mobile devices) has gradually become a trend. • M-learning brings new opportunities and can improve the learning process. Previous research on m-learning mostly presents positive outcomes in general besides some drawbacks. • The success of integrating m-learning in teaching practice requires some degree of awareness and positive attitudes by students towards m-learning. 
What this paper adds • Since teachers’ attitudes are decisive in successful integration of m-learning in teaching, the present paper attempts to understand whether preservice teachers have favorable perceptions and attitudes regarding m-learning. • Unlike much of the previous research on m-learning that handle perceptions about m-learning in a general sense, the present paper takes a more specific approach to distinguish and compare the perceptions about two most common m-learning tools: m-phones and laptops. • It also attempts to find out the variables that cause differences in preservice teachers’ perceptions about using these m-learning devices. Implications for practice and/or policy • Results imply an urgent need to grow awareness and further positive attitudes among participating student teachers towards m-learning, especially through m-phones. • Some action should be taken by the faculty and administration to pedagogically inform and raise awareness about m-learning among preservice teachers. Preservice teachers’ perceptions of M-learning tools 607 © 2013 British Educational Research Association models embedded into our everyday environment, and the paradigm of mobile learning emerged (Vavoula & Karagiannidis, 2005). Today it spreads rapidly and promises to be one of the efficient ways of education (El-Hussein & Cronje, 2010). Partly because it is a new concept, there is no common definition of m-learning in the literature yet (Peng et al, 2009). A good deal of literature defines m-learning as a derivation or extension of e-learning, which is performed using mobile devices such as PDA, mobile phones, laptops, etc (Jeng et al, 2010; Kalinic et al, 2011; Motiwalla, 2007; Riad & El-Ghareeb, 2008). Other definitions highlight certain characteristics of m-learning including portability through mobile devices, wireless Internet connection and ubiquity. For example, a common definition of m-learning in scholarly literature is “the use of portable devices with Internet connection capability in education contexts” (Kinash, Brand & Mathew, 2012, p. 639). In a similar vein, Park et al (2012, p. 592) defines m-learning as “any educational provision where the sole or dominant technologies are handheld or palmtop devices.” On the other hand, m-learning is likely to be simply defined stressing its property of ubiquity, referring to its ability to happen whenever and wherever needed (Peng et al, 2009). For example, Franklin (2011, p. 261) defines mobile learning as “learning that happens anywhere, anytime.” Though it is rather a new research topic and the effectiveness of m-learning in terms of learning achievements has not been fully investigated (Park et al, 2012), there is already an agreement that m-learning brings new opportunities and can improve the learning process (Kalinic et al, 2011). Moreover, the literature review by Wu et al (2012) notes that 86% of the 164 mobile learning studies present positive outcomes in general. Several perspectives of m-learning are attributed in the literature in association with these positive outcomes. The most outstanding among them is the feature of mobility. M-learning makes sense as an educational activity because the technology and its users are mobile (El-Hussein & Cronje, 2010). Hence, learning outside the classroom walls is possible (Nordin, Embi & Yunus, 2010; Şad, 2008; Saran, Seferoğlu & Çağıltay, 2009), enabling students to become an active participant, rather than a passive receiver of knowledge (Looi et al, 2010). 
This unique feature of m-learning brings about not only the possibility of learning anywhere without limits of classroom or library but also anytime (Çavuş & İbrahim, 2009; Hwang & Chang, 2011; Jeng et al, 2010; Kalinic et al, 2011; Motiwalla, 2007; Sha, Looi, Chen & Zhang, 2012; Sølvberg & Rismark, 2012). This especially offers learners a certain amount of “freedom and independence” (El-Hussein & Cronje, 2010, p. 19), as well as motivation and ability to “self-regulate their own learning” (Sha et al, 2012, p. 366). This idea of learning coincides with the principles of and meet the requirements of other popular paradigms in education including lifelong learning (Nordin et al, 2010), student-centeredness (Sha et al, 2012) and constructivism (Motiwalla, 2007). Beside the favorable properties referred in the m-learning literature, some drawbacks of m-learning are frequently criticized. The most pronounced one is the small screen sizes of the m-learning tools that makes learning activity difficult (El-Hussein & Cronje, 2010; Kalinic et al, 2011; Riad & El-Ghareeb, 2008; Suki & Suki, 2011). Another problem is the weight and limited battery lives of m-tools, particularly the laptops (Riad & El-Ghareeb, 2008). Lack of understanding or expertise with the technology also hinders nontechnical students’ active use of m-learning (Corbeil & Valdes-Corbeil, 2007; Franklin, 2011). Using mobile devices in classroom can cause distractions and interruptions (Cheon et al, 2012; Fried, 2008; Suki & Suki, 2011). Another concern seems to be about the challenged role of the teacher as the most learning activities take place outside the classroom (Sølvberg & Rismark, 2012). M-learning in higher education Mobile learning is becoming an increasingly promising way of delivering instruction in higher education (El-Hussein & Cronje, 2010). This is justified by the current statistics about the 608 British Journal of Educational Technology Vol 45 No 4 2014 © 2013 British Education", "title": "" }, { "docid": "ce0004549d9eec7f47a0a60e11179bba", "text": "We present in this paper a statistical framework that generates accurate and fluent product description from product attributes. Specifically, after extracting templates and learning writing knowledge from attribute-description parallel data, we use the learned knowledge to decide what to say and how to say for product description generation. To evaluate accuracy and fluency for the generated descriptions, in addition to BLEU and Recall, we propose to measure what to say (in terms of attribute coverage) and to measure how to say (by attribute-specified generation) separately. Experimental results show that our framework is effective.", "title": "" } ]
scidocsrr
cbc2c0f62b7501d1880d4f27128d399d
Salient Structure Detection by Context-Guided Visual Search
[ { "docid": "c0dbb410ebd6c84bd97b5f5e767186b3", "text": "A new hypothesis about the role of focused attention is proposed. The feature-integration theory of attention suggests that attention must be directed serially to each stimulus in a display whenever conjunctions of more than one separable feature are needed to characterize or distinguish the possible objects presented. A number of predictions were tested in a variety of paradigms including visual search, texture segregation, identification and localization, and using both separable dimensions (shape and color) and local elements or parts of figures (lines, curves, etc. in letters) as the features to be integrated into complex wholes. The results were in general consistent with the hypothesis. They offer a new set of criteria for distinguishing separable from integral features and a new rationale for predicting which tasks will show attention limits and which will not.", "title": "" } ]
[ { "docid": "b42f3575dad9615a40f491291661e7c5", "text": "Novel neural models have been proposed in recent years for learning under domain shift. Most models, however, only evaluate on a single task, on proprietary datasets, or compare to weak baselines, which makes comparison of models difficult. In this paper, we re-evaluate classic general-purpose bootstrapping approaches in the context of neural networks under domain shifts vs. recent neural approaches and propose a novel multi-task tri-training method that reduces the time and space complexity of classic tri-training. Extensive experiments on two benchmarks are negative: while our novel method establishes a new state-of-the-art for sentiment analysis, it does not fare consistently the best. More importantly, we arrive at the somewhat surprising conclusion that classic tri-training, with some additions, outperforms the state of the art. We conclude that classic approaches constitute an important and strong baseline.", "title": "" }, { "docid": "f84de5ba61de555c2d90afc2c8c2b465", "text": "Visual sensor networks have emerged as an important class of sensor-based distributed intelligent systems, with unique performance, complexity, and quality of service challenges. Consisting of a large number of low-power camera nodes, visual sensor networks support a great number of novel vision-based applications. The camera nodes provide information from a monitored site, performing distributed and collaborative processing of their collected data. Using multiple cameras in the network provides different views of the scene, which enhances the reliability of the captured events. However, the large amount of image data produced by the cameras combined with the network’s resource constraints require exploring new means for data processing, communication, and sensor management. Meeting these challenges of visual sensor networks requires interdisciplinary approaches, utilizing vision processing, communications and networking, and embedded processing. In this paper, we provide an overview of the current state-of-the-art in the field of visual sensor networks, by exploring several relevant research directions. Our goal is to provide a better understanding of current research problems in the different research fields of visual sensor networks, and to show how these different research fields should interact to solve the many challenges of visual sensor networks.", "title": "" }, { "docid": "520de9b576c112171ce0d08650a25093", "text": "Figurative language represents one of the most difficult tasks regarding natural language processing. Unlike literal language, figurative language takes advantage of linguistic devices such as irony, humor, sarcasm, metaphor, analogy, and so on, in order to communicate indirect meanings which, usually, are not interpretable by simply decoding syntactic or semantic information. Rather, figurative language reflects patterns of thought within a communicative and social framework that turns quite challenging its linguistic representation, as well as its computational processing. In this Ph. D. thesis we address the issue of developing a linguisticbased framework for figurative language processing. In particular, our efforts are focused on creating some models capable of automatically detecting instances of two independent figurative devices in social media texts: humor and irony. Our main hypothesis relies on the fact that language reflects patterns of thought; i.e. to study language is to study patterns of conceptualization. 
Thus, by analyzing two specific domains of figurative language, we aim to provide arguments concerning how people mentally conceive humor and irony, and how they verbalize each device in social media platforms. In this context, we focus on showing how fine-grained knowledge, which relies on shallow and deep linguistic layers, can be translated into valuable patterns to automatically identify figurative uses of language. Contrary to most researches that deal with figurative language, we do not support our arguments on prototypical examples neither of humor nor of irony. Rather, we try to find patterns in texts such as blogs, web comments, tweets, etc., whose intrinsic characteristics are quite different to the characteristics described in the specialized literature. Apart from providing a linguistic inventory for detecting humor and irony at textual level, in this investigation we stress out the importance of considering user-generated tags in order to automatically build resources for figurative language processing, such as ad hoc corpora in which human annotation is not necessary. Finally, each model is evaluated in terms of its relevance to properly identify instances of humor and irony, respectively. To this end, several experiments are carried out taking into consideration different data sets and applicability scenarios. Our findings point out that figurative language processing (especially humor and irony) can provide fine-grained knowledge in tasks as diverse as sentiment analysis, opinion mining, information retrieval, or trend discovery.", "title": "" }, { "docid": "62f4c947cae38cc7071b87597b54324a", "text": "A bugbear of uncalibrated stereo reconstruction is that cameras which deviate from the pinhole model have to be pre-calibrated in order to correct for nonlinear lens distortion. If they are not, and point correspondence is attempted using the uncorrected images, the matching constraints provided by the fundamental matrix must be set so loose that point matching is significantly hampered. This paper shows how linear estimation of the fundamental matrix from two-view point correspondences may be augmented to include one term of radial lens distortion. This is achieved by (1) changing from the standard radiallens model to another which (as we show) has equivalent power, but which takes a simpler form in homogeneous coordinates, and (2) expressing fundamental matrix estimation as a Quadratic Eigenvalue Problem (QEP), for which efficient algorithms are well known. I derive the new estimator, and compare its performance against bundle-adjusted calibration-grid data. The new estimator is fast enough to be included in a RANSAC-based matching loop, and we show cases of matching being rendered possible by its use. I show how the same lens can be calibrated in a natural scene where the lack of straight lines precludes most previous techniques. The modification when the multi-view relation is a planar homography or trifocal tensor is described.", "title": "" }, { "docid": "d061ac8a6c312c768a9dfc6e59cfe6a8", "text": "The assessment of crop yield losses is needed for the improvement of production systems that contribute to the incomes of rural families and food security worldwide. However, efforts to quantify yield losses and identify their causes are still limited, especially for perennial crops. 
Our objectives were to quantify primary yield losses (incurred in the current year of production) and secondary yield losses (resulting from negative impacts of the previous year) of coffee due to pests and diseases, and to identify the most important predictors of coffee yields and yield losses. We established an experimental coffee parcel with full-sun exposure that consisted of six treatments, which were defined as different sequences of pesticide applications. The trial lasted three years (2013-2015) and yield components, dead productive branches, and foliar pests and diseases were assessed as predictors of yield. First, we calculated yield losses by comparing actual yields of specific treatments with the estimated attainable yield obtained in plots which always had chemical protection. Second, we used structural equation modeling to identify the most important predictors. Results showed that pests and diseases led to high primary yield losses (26%) and even higher secondary yield losses (38%). We identified the fruiting nodes and the dead productive branches as the most important and useful predictors of yields and yield losses. These predictors could be added in existing mechanistic models of coffee, or can be used to develop new linear mixed models to estimate yield losses. Estimated yield losses can then be related to production factors to identify corrective actions that farmers can implement to reduce losses. The experimental and modeling approaches of this study could also be applied in other perennial crops to assess yield losses.", "title": "" }, { "docid": "abdc445e498c6d04e8f046e9c2610f9f", "text": "Ontologies have recently received popularity in the area of knowledge management and knowledge sharing, especially after the evolution of the Semantic Web and its supporting technologies. An ontology defines the terms and concepts (meaning) used to describe and represent an area of knowledge.The aim of this paper is to identify all possible existing ontologies and ontology management tools (Protégé 3.4, Apollo, IsaViz & SWOOP) that are freely available and review them in terms of: a) interoperability, b) openness, c) easiness to update and maintain, d) market status and penetration. The results of the review in ontologies are analyzed for each application area, such as transport, tourism, personal services, health and social services, natural languages and other HCI-related domains. Ontology Building/Management Tools are used by different groups of people for performing diverse tasks. Although each tool provides different functionalities, most of the users just use only one, because they are not able to interchange their ontologies from one tool to another. In addition, we considered the compatibility of different ontologies with different development and management tools. The paper is also concerns the detection of commonalities and differences between the examined ontologies, both on the same domain (application area) and among different domains.", "title": "" }, { "docid": "376911fb47b9954a35f9910326f9b97e", "text": "Immunotherapy enhances a patient’s immune system to fight disease and has recently been a source of promising new cancer treatments. Among the many immunotherapeutic strategies, immune checkpoint blockade has shown remarkable benefit in the treatment of a range of cancer types. 
Immune checkpoint blockade increases antitumor immunity by blocking intrinsic downregulators of immunity, such as cytotoxic T-lymphocyte antigen 4 (CTLA-4) and programmed cell death 1 (PD-1) or its ligand, programmed cell death ligand 1 (PD-L1). Several immune checkpoint–directed antibodies have increased overall survival for patients with various cancers and are approved by the Food and Drug Administration (Table 1). By increasing the activity of the immune system, immune checkpoint blockade can have inflammatory side effects, which are often termed immune-related adverse events. Although any organ system can be affected, immune-related adverse events most commonly involve the gastrointestinal tract, endocrine glands, skin, and liver. Less often, the central nervous system and cardiovascular, pulmonary, musculoskeletal, and hematologic systems are involved. The wide range of potential immune-related adverse events requires multidisciplinary, collaborative management by providers across the clinical spectrum (Fig. 1). No prospective trials have defined strategies for effectively managing specific immune-related adverse events; thus, clinical practice remains variable. Nevertheless, several professional organizations are working to harmonize expert consensus on managing specific immune-related adverse events. In this review, we focus on 10 essential questions practitioners will encounter while caring for the expanding population of patients with cancer who are being treated with immune checkpoint blockade (Table 2).", "title": "" }, { "docid": "cb00e564a81ace6b75e776f1fe41fb8f", "text": "INDIVIDUAL PROCESSES IN INTERGROUP BEHAVIOR 3; From Individual to Group Impressions 3; GROUP MEMBERSHIP AND INTERGROUP BEHAVIOR 7; The Scope and Range of Ethnocentrism 8; The Development of Ethnocentrism 9; Intergroup Conflict and Competition 12; Interpersonal and intergroup behavior 13; Intergroup conflict and group cohesion 15; Power and status in intergroup behavior 16; Social Categorization and Intergroup Behavior 20; Social categorization: cognitions, values, and groups 20; Social categorization and intergroup discrimination 23; Social identity and social comparison 24; THE REDUCTION OF INTERGROUP DISCRIMINATION 27; Intergroup Cooperation and Superordinate Goals 28; Intergroup Contact 28; Multigroup Membership and \"Individualization\" of the Outgroup 29; SUMMARY
30", "title": "" }, { "docid": "fb941f03dd02f1d7fc7ded54ae462afd", "text": "In this paper we discuss the development and implementation of an Arabic automatic speech recognition engine. The engine can recognize both continuous speech and isolated words. The system was developed using the Hidden Markov Model Toolkit. First, an Arabic dictionary was built by composing the words to its phones. Next, Mel Frequency Cepstral Coefficients (MFCC) of the speech samples are derived to extract the speech feature vectors. Then, the training of the engine based on triphones is developed to estimate the parameters for a Hidden Markov Model. To test the engine, the database consisting of speech utterance from thirteen Arabian native speakers is used which is divided into ten speaker-dependent and three speaker-independent samples. The experimental results showed that the overall system performance was 90.62%, 98.01 % and 97.99% for sentence correction, word correction and word accuracy respectively.", "title": "" }, { "docid": "e95fa624bb3fd7ea45650213088a43b0", "text": "In recent years, much research has been conducted on image super-resolution (SR). To the best of our knowledge, however, few SR methods were concerned with compressed images. The SR of compressed images is a challenging task due to the complicated compression artifacts, while many images suffer from them in practice. The intuitive solution for this difficult task is to decouple it into two sequential but independent subproblems, i.e., compression artifacts reduction (CAR) and SR. Nevertheless, some useful details may be removed in CAR stage, which is contrary to the goal of SR and makes the SR stage more challenging. In this paper, an end-to-end trainable deep convolutional neural network is designed to perform SR on compressed images (CISRDCNN), which reduces compression artifacts and improves image resolution jointly. Experiments on compressed images produced by JPEG (we take the JPEG as an example in this paper) demonstrate that the proposed CISRDCNN yields state-of-the-art SR performance on commonly used test images and imagesets. The results of CISRDCNN on real low quality web images are also very impressive, with obvious quality enhancement. Further, we explore the application of the proposed SR method in low bit-rate image coding, leading to better rate-distortion performance than JPEG.", "title": "" }, { "docid": "a73da9191651ae5d0330d6f64f838f67", "text": "Language selection (or control) refers to the cognitive mechanism that controls which language to use at a given moment and context. It allows bilinguals to selectively communicate in one target language while minimizing the interferences from the nontarget language. Previous studies have suggested the participation in language control of different brain areas. However, the question remains whether the selection of one language among others relies on a language-specific neural module or general executive regions that also allow switching between different competing behavioral responses including the switching between various linguistic registers. In this functional magnetic resonance imaging study, we investigated the neural correlates of language selection processes in German-French bilingual subjects during picture naming in different monolingual and bilingual selection contexts. We show that naming in the first language in the bilingual context (compared with monolingual contexts) increased activation in the left caudate and anterior cingulate cortex. 
Furthermore, the activation of these areas is even more extended when the subjects are using a second weaker language. These findings show that language control processes engaged in contexts during which both languages must remain active recruit the left caudate and the anterior cingulate cortex (ACC) in a manner that can be distinguished from areas engaged in intralanguage task switching.", "title": "" }, { "docid": "e67b75e11ca6dd9b4e6c77b3cb92cceb", "text": "The incidence of malignant melanoma continues to increase worldwide. This cancer can strike at any age; it is one of the leading causes of loss of life in young persons. Since this cancer is visible on the skin, it is potentially detectable at a very early stage when it is curable. New developments have converged to make fully automatic early melanoma detection a real possibility. First, the advent of dermoscopy has enabled a dramatic boost in clinical diagnostic ability to the point that melanoma can be detected in the clinic at the very earliest stages. The global adoption of this technology has allowed accumulation of large collections of dermoscopy images of melanomas and benign lesions validated by histopathology. The development of advanced technologies in the areas of image processing and machine learning have given us the ability to allow distinction of malignant melanoma from the many benign mimics that require no biopsy. These new technologies should allow not only earlier detection of melanoma, but also reduction of the large number of needless and costly biopsy procedures. Although some of the new systems reported for these technologies have shown promise in preliminary trials, widespread implementation must await further technical progress in accuracy and reproducibility. In this paper, we provide an overview of computerized detection of melanoma in dermoscopy images. First, we discuss the various aspects of lesion segmentation. Then, we provide a brief overview of clinical feature segmentation. Finally, we discuss the classification stage where machine learning algorithms are applied to the attributes generated from the segmented features to predict the existence of melanoma.", "title": "" }, { "docid": "b898d7a2da7a10ef756317bc7f44f37c", "text": "Cellulosomes are multienzyme complexes that are produced by anaerobic cellulolytic bacteria for the degradation of lignocellulosic biomass. They comprise a complex of scaffoldin, which is the structural subunit, and various enzymatic subunits. The intersubunit interactions in these multienzyme complexes are mediated by cohesin and dockerin modules. Cellulosome-producing bacteria have been isolated from a large variety of environments, which reflects their prevalence and the importance of this microbial enzymatic strategy. In a given species, cellulosomes exhibit intrinsic heterogeneity, and between species there is a broad diversity in the composition and configuration of cellulosomes. With the development of modern technologies, such as genomics and proteomics, the full protein content of cellulosomes and their expression levels can now be assessed and the regulatory mechanisms identified. 
Owing to their highly efficient organization and hydrolytic activity, cellulosomes hold immense potential for application in the degradation of biomass and are the focus of much effort to engineer an ideal microorganism for the conversion of lignocellulose to valuable products, such as biofuels.", "title": "" }, { "docid": "ddd353b5903f12c14cc3af1163ac617c", "text": "Unmanned Aerial Vehicles (UAVs) have recently received notable attention because of their wide range of applications in urban civilian use and in warfare. With air traffic densities increasing, it is more and more important for UAVs to be able to predict and avoid collisions. The main goal of this research effort is to adjust real-time trajectories for cooperative UAVs to avoid collisions in three-dimensional airspace. To explore potential collisions, predictive state space is utilized to present the waypoints of UAVs in the upcoming situations, which makes the proposed method generate the initial collision-free trajectories satisfying the necessary constraints in a short time. Further, a rolling optimization algorithm (ROA) can improve the initial waypoints, minimizing its total distance. Several scenarios are illustrated to verify the proposed algorithm, and the results show that our algorithm can generate initial collision-free trajectories more efficiently than other methods in the common airspace.", "title": "" }, { "docid": "cbcdc411e22786dcc1b3655c5e917fae", "text": "Local intracellular Ca(2+) transients, termed Ca(2+) sparks, are caused by the coordinated opening of a cluster of ryanodine-sensitive Ca(2+) release channels in the sarcoplasmic reticulum of smooth muscle cells. Ca(2+) sparks are activated by Ca(2+) entry through dihydropyridine-sensitive voltage-dependent Ca(2+) channels, although the precise mechanisms of communication of Ca(2+) entry to Ca(2+) spark activation are not clear in smooth muscle. Ca(2+) sparks act as a positive-feedback element to increase smooth muscle contractility, directly by contributing to the global cytoplasmic Ca(2+) concentration ([Ca(2+)]) and indirectly by increasing Ca(2+) entry through membrane potential depolarization, caused by activation of Ca(2+) spark-activated Cl(-) channels. Ca(2+) sparks also have a profound negative-feedback effect on contractility by decreasing Ca(2+) entry through membrane potential hyperpolarization, caused by activation of large-conductance, Ca(2+)-sensitive K(+) channels. In this review, the roles of Ca(2+) sparks in positive- and negative-feedback regulation of smooth muscle function are explored. We also propose that frequency and amplitude modulation of Ca(2+) sparks by contractile and relaxant agents is an important mechanism to regulate smooth muscle function.", "title": "" }, { "docid": "31e052aaf959a4c5d6f1f3af6587d6cd", "text": "We introduce a learning framework called learning using privileged information (LUPI) to the computer vision field. We focus on the prototypical computer vision problem of teaching computers to recognize objects in images. We want the computers to be able to learn faster at the expense of providing extra information during training time. As additional information about the image data, we look at several scenarios that have been studied in computer vision before: attributes, bounding boxes and image tags. The information is privileged as it is available at training time but not at test time. 
We explore two maximum-margin techniques that are able to make use of this additional source of information, for binary and multiclass object classification. We interpret these methods as learning easiness and hardness of the objects in the privileged space and then transferring this knowledge to train a better classifier in the original space. We provide a thorough analysis and comparison of information transfer from privileged to the original data spaces for both LUPI methods. Our experiments show that incorporating privileged information can improve the classification accuracy. Finally, we conduct user studies to understand which samples are easy and which are hard for human learning, and explore how this information is related to easy and hard samples when learning a classifier.", "title": "" }, { "docid": "72be75e973b6a843de71667566b44929", "text": "We think that hand pose estimation technologies with a camera should be developed for character conversion systems from sign languages with a not so high performance terminal. Fingernail positions can be used for getting finger information which can’t be obtained from outline information. Therefore, we decided to construct a practical fingernail detection system. The previous fingernail detection method, using distribution density of strong nail-color pixels, was not good at removing some skin areas having gloss like finger side area. Therefore, we should use additional information to remove them. We thought that previous method didn’t use boundary information and this information would be available. Color continuity information is available for getting it. In this paper, therefore, we propose a new fingernail detection method using not only distribution density but also color continuity to improve accuracy. We investigated the relationship between wrist rotation angles and percentages of correct detection. The number of users was three. As a result, we confirmed that our proposed method raised accuracy compared with previous method and could detect only fingernails with at least 85% probability from -90 to 40 degrees and from 40 to 90 degrees. Therefore, we concluded that our proposed method was effective.", "title": "" }, { "docid": "56f18b39a740dd65fc2907cdef90ac99", "text": "This paper describes a dynamic artificial neural network based mobile robot motion and path planning system. The method is able to navigate a robot car on flat surface among static and moving obstacles, from any starting point to any endpoint. The motion controlling ANN is trained online with an extended backpropagation through time algorithm, which uses potential fields for obstacle avoidance. The paths of the moving obstacles are predicted with other ANNs for better obstacle avoidance. The method is presented through the realization of the navigation system of a mobile robot.", "title": "" }, { "docid": "262c11ab9f78e5b3f43a31ad22cf23c5", "text": "Responding to threats in the environment is crucial for survival. Certain types of threat produce defensive responses without necessitating previous experience and are considered innate, whereas other threats are learned by experiencing aversive consequences. Two important innate threats are whether an encountered stimulus is a member of the same species (social threat) and whether a stimulus suddenly appears proximal to the body (proximal threat). These threats are manifested early in human development and robustly elicit defensive responses. 
Learned threat, on the other hand, enables adaptation to threats in the environment throughout the life span. A well-studied form of learned threat is fear conditioning, during which a neutral stimulus acquires the ability to elicit defensive responses through pairings with an aversive stimulus. Whether innate threats can facilitate fear conditioning, and whether different types of innate threats can enhance each other, is largely unknown. We developed an immersive virtual reality paradigm to test how innate social and proximal threats are related to each other and how they influence conditioned fear. Skin conductance responses were used to index the autonomic component of the defensive response. We found that social threat modulates proximal threat, but that neither proximal nor social threat modulates conditioned fear. Our results suggest that distinct processes regulate autonomic activity in response to proximal and social threat on the one hand, and conditioned fear on the other.", "title": "" }, { "docid": "d7a620c961341e35fc8196b331fb0e68", "text": "Software vulnerabilities have had a devastating effect on the Internet. Worms such as CodeRed and Slammer can compromise hundreds of thousands of hosts within hours or even minutes, and cause millions of dollars of damage [32, 51]. To successfully combat these fast automatic Internet attacks, we need fast automatic attack detection and filtering mechanisms. In this paper we propose dynamic taint analysis for automatic detection and analysis of overwrite attacks, which include most types of exploits. This approach does not need source code or special compilation for the monitored program, and hence works on commodity software. To demonstrate this idea, we have implemented TaintCheck, a mechanism that can perform dynamic taint analysis by performing binary rewriting at run time. We show that TaintCheck reliably detects most types of exploits. We found that TaintCheck produced no false positives for any of the many different programs that we tested. Further, we show how we can use a two-tiered approach to build a hybrid exploit detector that enjoys the same accuracy as TaintCheck but has extremely low performance overhead. Finally, we propose a new type of automatic signature generation: semantic-analysis based signature generation. We show that by backtracing the chain of tainted data structures rooted at the detection point, TaintCheck can automatically identify which original flow and which part of the original flow have caused the attack and identify important invariants of the payload that can be used as signatures. Semantic-analysis based signature generation can be more accurate, resilient against polymorphic worms, and robust to attacks exploiting polymorphism than the pattern-extraction based signature generation methods.", "title": "" } ]
scidocsrr
93fa1fa51345165911d9d7f47acff2c6
A Framework for Detection of Video Spam on YouTube
[ { "docid": "7834f32e3d6259f92f5e0beb3a53cc04", "text": "An educational institution needs to have an approximate prior knowledge of enrolled students to predict their performance in future academics. This helps them to identify promising students and also provides them an opportunity to pay attention to and improve those who would probably get lower grades. As a solution, we have developed a system which can predict the performance of students from their previous performances using concepts of data mining techniques under Classification. We have analyzed the data set containing information about students, such as gender, marks scored in the board examinations of classes X and XII, marks and rank in entrance examinations and results in first year of the previous batch of students. By applying the ID3 (Iterative Dichotomiser 3) and C4.5 classification algorithms on this data, we have predicted the general and individual performance of freshly admitted students in future examinations.", "title": "" } ]
[ { "docid": "56a4b5052e4d745e7939e2799a40bfd8", "text": "The evolution of software defined networking (SDN) has played a significant role in the development of next-generation networks (NGN). SDN as a programmable network having “service provisioning on the fly” has induced a keen interest both in academic world and industry. In this article, a comprehensive survey is presented on SDN advancement over conventional network. The paper covers historical evolution in relation to SDN, functional architecture of the SDN and its related technologies, and OpenFlow standards/protocols, including the basic concept of interfacing of OpenFlow with network elements (NEs) such as optical switches. In addition a selective architecture survey has been conducted. Our proposed architecture on software defined heterogeneous network, points towards new technology enabling the opening of new vistas in the domain of network technology, which will facilitate in handling of huge internet traffic and helps infrastructure and service providers to customize their resources dynamically. Besides, current research projects and various activities as being carried out to standardize SDN as NGN by different standard development organizations (SODs) have been duly elaborated to judge how this technology moves towards standardization.", "title": "" }, { "docid": "90eb392765c01b6166daa2a7a62944d1", "text": "Recent studies have demonstrated the potential for reducing energy consumption in integrated circuits by allowing errors during computation. While most proposed techniques for achieving this rely on voltage overscaling (VOS), this paper shows that Imprecise Hardware (IHW) with design-time structural parameters can achieve orthogonal energy-quality tradeoffs. Two IHW adders are improved and two IHW multipliers are introduced in this paper. In addition, a simulation-free error estimation technique is proposed to rapidly and accurately estimate the impact of IHW on output quality. Finally, a quality-aware energy minimization methodology is presented. To validate this methodology, experiments are conducted on two computational kernels: DOT-PRODUCT and L2-NORM -- used in three applications -- Leukocyte Tracker, SVM classification and K-means clustering. Results show that the Hellinger distance between estimated and simulated error distribution is within 0.05 and that the methodology enables designers to explore energy-quality tradeoffs with significant reduction in simulation complexity.", "title": "" }, { "docid": "b4874b03c639ee105f76266d37540a54", "text": "We tested the validity and reliability of the BioSpace InBody 320, Omron and Bod-eComm body composition devices in men and women (n 254; 21-80 years) and boys and girls (n 117; 10-17 years). We analysed percentage body fat (%BF) and compared the results with dual-energy X-ray absorptiometry (DEXA) in adults and compared the results of the InBody with underwater weighing (UW) in children. All body composition devices were correlated (r 0.54-0.97; P< or =0.010) to DEXA except the Bod-eComm in women aged 71-80 years (r 0.54; P=0.106). In girls, the InBody %BF was correlated with UW (r 0.79; P< or =0.010); however, a more moderate correlation (r 0.69; P< or =0.010) existed in boys. Bland-Altman plots indicated that all body composition devices underestimated %BF in adults (1.0-4.8 %) and overestimated %BF in children (0.3-2.3 %). 
Lastly, independent t tests revealed that the mean %BF assessed by the Bod-eComm in women (aged 51-60 and 71-80 years) and in the Omron (age 18-35 years) were significantly different compared with DEXA (P< or =0.010). In men, the Omron (aged 18-35 years), and the InBody (aged 36-50 years) were significantly different compared with DEXA (P=0.025; P=0.040 respectively). In addition, independent t tests indicated that the InBody mean %BF in girls aged 10-17 years was significantly different from UW (P=0.001). Pearson's correlation analyses demonstrated that the Bod-eComm (men and women) and Omron (women) had significant mean differences compared with the reference criterion; therefore, the %BF output from these two devices should be interpreted with caution. The repeatability of each body composition device was supported by small CV (<3.0 %).", "title": "" }, { "docid": "3627ee0e7be9c6d664dea1912c0b91d4", "text": "Given a set of texts discussing a particular entity (e.g., customer reviews of a smartphone), aspect based sentiment analysis (ABSA) identifies prominent aspects of the entity (e.g., battery, screen) and an average sentiment score per aspect. We focus on aspect term extraction (ATE), one of the core processing stages of ABSA that extracts terms naming aspects. We make publicly available three new ATE datasets, arguing that they are better than previously available ones. We also introduce new evaluation measures for ATE, again arguing that they are better than previously used ones. Finally, we show how a popular unsupervised ATE method can be improved by using continuous space vector representations of words and phrases.", "title": "" }, { "docid": "fca00f3dc82a45357de1e2082138a589", "text": "Preservation of food and beverages resulting from fermentation has been an effective form of extending the shelf-life of foods for millennia. Traditionally, foods were preserved through naturally occurring fermentations, however, modern large scale production generally now exploits the use of defined strain starter systems to ensure consistency and quality in the final product. This review will mainly focus on the use of lactic acid bacteria (LAB) for food improvement, given their extensive application in a wide range of fermented foods. These microorganisms can produce a wide variety of antagonistic primary and secondary metabolites including organic acids, diacetyl, CO2 and even antibiotics such as reuterocyclin produced by Lactobacillus reuteri. In addition, members of the group can also produce a wide range of bacteriocins, some of which have activity against food pathogens such as Listeria monocytogenes and Clostridium botulinum. Indeed, the bacteriocin nisin has been used as an effective biopreservative in some dairy products for decades, while a number of more recently discovered bacteriocins, such as lacticin 3147, demonstrate increasing potential in a number of food applications. Both of these lactococcal bacteriocins belong to the lantibiotic family of posttranslationally modified bacteriocins that contain lanthionine, beta-methyllanthionine and dehydrated amino acids. The exploitation of such naturally produced antagonists holds tremendous potential for extension of shelf-life and improvement of safety of a variety of foods.", "title": "" }, { "docid": "5208762a8142de095c21824b0a395b52", "text": "Battery storage (BS) systems are static energy conversion units that convert the chemical energy directly into electrical energy. 
They exist in our cars, laptops, electronic appliances, micro electricity generation systems and in many other mobile to stationary power supply systems. The economic advantages, partial sustainability and portability of these units make them promising substitutes for backup power systems in hybrid vehicles and hybrid electricity generation systems. The dynamic behaviour of these systems can be analysed using mathematical modeling and simulation software. Although many mathematical models presented in the literature have proved successful, dynamic simulation of these systems is still very exhaustive and time consuming, as they do not behave according to specific mathematical models or functions. The charging and discharging functions of a battery are a combination of exponential and non-linear behaviour. The aim of this research paper is to present a suitable, convenient dynamic battery model that can be used to model a general BS system. The proposed model is a new modified dynamic Lead-Acid battery model that considers the effects of temperature and cyclic charging and discharging. Simulink has been used to study the characteristics of the system, and the proposed model has proved very successful, as the simulation results have been very good. Keywords—Simulink Matlab, Battery Model, Simulation, BS Lead-Acid, Dynamic modeling, Temperature effect, Hybrid Vehicles.", "title": "" }, { "docid": "ff3b0b89e05c7e2cd50c0e29c0e557f7", "text": "This paper compares two types of physical unclonable function (PUF) circuits in terms of reliability: mismatch-based PUFs vs. physical-based PUFs. Most previous PUF circuits utilize device mismatches for generating random responses. Although they have sufficient random features, there is a reliability issue in that some portions of bits change over time during operation or under noisy environments. To overcome this issue, we previously proposed the differential amplifier PUF (DA-PUF), which improves reliability by amplifying the small mismatches of the transistors and rejecting power supply noise through differential operation. In this paper, we first report the experimental results with the fabricated chips in a 0.35 μm CMOS process. The DA-PUF shows 51.30% uniformity, 50.05% uniqueness, and 0.43% maximum BER. For 0% BER, we proposed the physical-based VIA-PUF, which is based on the probability of physical connection between the electrical layers. From the experimental results with the fabricated chips in a 0.18 μm CMOS process, we found the VIA-PUF has 51.12% uniformity and 49.64% uniqueness, and 0% BER throughout 1,000-time repeated measurements. In particular, we observed no bit change after the stress test at 25 and 125 °C for 96 hours.", "title": "" }, { "docid": "e790824ac08ceb82000c3cda024dc329", "text": "Cellulolytic bacteria were isolated from manure wastes (cow dung) and degrading soil (municipal solid waste). Nine bacterial strains were screened for cellulolytic activities. Six strains showed clear zone formation on Berg’s medium. CMC (carboxyl methyl cellulose) and cellulose were used as substrates for cellulase activities. Among the six strains, cd3 and mw7 were examined by quantitative measurement using the dinitrosalicylic acid (DNS) method. Maximum enzyme-producing activity was 1.702mg/ml and 1.677mg/ml from cd3 and mw7, respectively, for the 1% CMC substrate. On the other hand, it was 0.563mg/ml and 0.415mg/ml for the 1% cellulose substrate, respectively. 
Cellulase enzyme-producing activity was also studied by optimizing kinetic growth parameters such as different carbon sources (including various concentrations of cellulose), incubation time, temperature, and pH. The starch substrate showed 0.909mg/ml and 0.851mg/ml in enzyme-producing activity. The optimum substrate concentration of cellulose was 0.25% for cd3 but 1% for mw7, with reducing sugar formation of 0.628mg/ml and 0.669mg/ml, respectively. The optimum incubation parameters for cd3 were 84 hours, 40°C and pH 6. Mw7 also had optimum parameters of 60 hours, 40°C and pH 6.", "title": "" }, { "docid": "861c2fed42d2e2ec53dec8a6e9812bc9", "text": "Materials and methods: An experimental in vivo study was conducted at a dermatology clinic in Riyadh in January 2016. The study included 23 female patients who ranged from 20 to 50 years of age and were treated with Botox injections due to excessive maxillary gingival display. Patients with short clinical crowns or a long maxilla, those who were pregnant or breastfeeding, and patients with neuromuscular disorders were excluded. Patients received Botox type I, injected 3 mm lateral to the alar-fascial groove at the level of the nostril opening at the insertion of the levator labii superioris alaeque nasi muscle. Photos were taken of the patient’s smile before and after the treatment and were then uploaded to the SketchUp program to calculate improvements in gingival display. The distance from the lower margin of the upper lip to the gingival margin was calculated pre- and post-treatment. The amount of improvement was calculated as ((pre-Botox treatment – post-Botox treatment)/pre-Botox treatment × 100). The mean percentage of the total improvement was analyzed.", "title": "" }, { "docid": "1d98b5bd0c7178b39b7da0e0f9586615", "text": "TDMA has been proposed as a MAC protocol for wireless sensor networks (WSNs) due to its efficiency under high WSN load. However, TDMA is plagued with shortcomings; we present modifications to TDMA that retain its efficiency while allowing the network to conserve energy during times of low load (when there is no activity being detected). Recognizing that aggregation plays an essential role in WSNs, TDMA-ASAP adds to TDMA: (a) transmission parallelism based on a level-by-level localized graph-coloring, (b) appropriate sleeping between transmissions (\"napping\"), (c) judicious and controlled TDMA slot stealing to avoid slots going unused and (d) intelligent scheduling/ordering of transmissions. Our results show that TDMA-ASAP's unique combination of TDMA, slot-stealing, napping, and message aggregation significantly outperforms other hybrid WSN MAC algorithms and has a performance that is close to optimal in terms of energy consumption and overall delay.", "title": "" }, { "docid": "e5e3cbe942723ef8e3524baf56121bf5", "text": "Requirements prioritization is recognized as an important activity in product development. In this paper, we describe the current state of requirements prioritization practices in two case companies and present the practical challenges involved. Our study showed that requirements prioritization is an ambiguous concept and that current practices in the companies are informal. Requirements prioritization requires complex context-specific decision-making and must be performed iteratively in many phases during development work. 
Practitioners are seeking more systematic ways to prioritize requirements but they find it difficult to pay attention to all the relevant factors that have an effect on priorities and explicitly to draw different stakeholder views together. In addition, practitioners need more information about real customer preferences.", "title": "" }, { "docid": "2e8d81ba0b09bc657964d20eb17c976c", "text": "The “Internet of things” (IoT) concept nowadays is one of the hottest trends for research in any given field; since IoT is about interactions between multiple devices, things, and objects. This interaction opens different directions of enhancement and development in many fields, such as architecture, dependencies, communications, protocols, security, applications and big data. The results will be outstanding and we will be able to reach the desired change and improvements we seek in the fields that affect our lives. The critical goal of Internet of things (IoT) is to ensure effective communication between objects and build a sustained bond among them using different types of applications. The application layer is responsible for providing services and determines a set of protocols for message passing at the application level. This survey addresses a set of application layer protocols that are being used today for IoT, to affirm a reliable tie among objects and things.", "title": "" }, { "docid": "55c02b425633062f7d6dc6e3a5afff8e", "text": "This review argues for the development of a Positive Clinical Psychology, which has an integrated and equally weighted focus on both positive and negative functioning in all areas of research and practice. Positive characteristics (such as gratitude, flexibility, and positive emotions) can uniquely predict disorder beyond the predictive power of the presence of negative characteristics, and buffer the impact of negative life events, potentially preventing the development of disorder. Increased study of these characteristics can rapidly expand the knowledge base of clinical psychology and utilize the promising new interventions to treat disorder through promoting the positive. Further, positive and negative characteristics cannot logically be studied or changed in isolation as (a) they interact to predict clinical outcomes, (b) characteristics are neither \"positive\" or \"negative\", with outcomes depending on specific situation and concomitant goals and motivations, and (c) positive and negative well-being often exist on the same continuum. Responding to criticisms of the Positive Psychology movement, we do not suggest the study of positive functioning as a separate field of clinical psychology, but rather that clinical psychology itself changes to become a more integrative discipline. An agenda for research and practice is proposed including reconceptualizing well-being, forming stronger collaborations with allied disciplines, rigorously evaluating the new positive interventions, and considering a role for clinical psychologists in promoting well-being as well as treating distress.", "title": "" }, { "docid": "0f10aa71d58858ea1d8d7571a7cbfe22", "text": "We study hierarchical classification in the general case when an instance could belong to more than one class node in the underlying taxonomy. Experiments done in previous work showed that a simple hierarchy of Support Vectors Machines (SVM) with a top-down evaluation scheme has a surprisingly good performance on this kind of task. 
In this paper, we introduce a refined evaluation scheme which turns the hierarchical SVM classifier into an approximator of the Bayes optimal classifier with respect to a simple stochastic model for the labels. Experiments on synthetic datasets, generated according to this stochastic model, show that our refined algorithm outperforms the simple hierarchical SVM. On real-world data, however, the advantage brought by our approach is a bit less clear. We conjecture this is due to a higher noise rate for the training labels in the low levels of the taxonomy.", "title": "" }, { "docid": "8f4c4c2157623bb6e9ed91c84ef57618", "text": "Bitcoin’s innovative and distributedly maintained blockchain data structure hinges on the adequate degree of difficulty of so-called “proofs of work,” which miners have to produce in order for transactions to be inserted. Importantly, these proofs of work have to be hard enough so that miners have an opportunity to unify their views in the presence of an adversary who interferes but has bounded computational power, but easy enough to be solvable regularly and enable the miners to make progress. As such, as the miners’ population evolves over time, so should the difficulty of these proofs. Bitcoin provides this adjustment mechanism, with empirical evidence of a constant block generation rate against such population changes. In this paper we provide the first (to our knowledge) formal analysis of Bitcoin’s target (re)calculation function in the cryptographic setting, i.e., against all possible adversaries aiming to subvert the protocol’s properties. We extend the q-bounded synchronous model of the Bitcoin backbone protocol [Eurocrypt 2015], which posed the basic properties of Bitcoin’s underlying blockchain data structure and shows how a robust public transaction ledger can be built on top of them, to environments that may introduce or suspend parties in each round. We provide a set of necessary conditions with respect to the way the population evolves under which the “Bitcoin backbone with chains of variable difficulty” provides a robust transaction ledger in the presence of an actively malicious adversary controlling a fraction of the miners strictly below 50% at each instant of the execution. Our work introduces new analysis techniques and tools to the area of blockchain systems that may prove useful in analyzing other blockchain protocols. Part of this work was done while the authors were visiting the Simons Institute for the Theory of Computing, supported by the Simons Foundation and by the DIMACS/Simons Collaboration in Cryptography through NSF grant #CNS-1523467. Research partly supported by ERC project CODAMODA, No. 259152, and Horizon 2020 project PANORAMIX, No. 653497.", "title": "" }, { "docid": "151b3f80fe443b8f9b5f17c0531e0679", "text": "Pattern recognition methods using neuroimaging data for the diagnosis of Alzheimer’s disease have been the subject of extensive research in recent years. In this paper, we use deep learning methods, and in particular sparse autoencoders and 3D convolutional neural networks, to build an algorithm that can predict the disease status of a patient, based on an MRI scan of the brain. We report on experiments using the ADNI data set involving 2,265 historical scans. 
We demonstrate that 3D convolutional neural networks outperform several other classifiers reported in the literature and produce state-of-the-art results.", "title": "" }, { "docid": "44985e59d8b169b10a7c56fd31e8b199", "text": "Recently it has become a hot topic to protect VMs from a compromised or even malicious hypervisor. However, most previous systems are vulnerable to rollback attack, since it is hard to distinguish it from the normal suspend/resume and migration operations that an IaaS platform usually offers. Some of the previous systems simply disable these features to defend against rollback attack, while others require heavy user involvement. In this paper, we propose a new solution to strike a balance between security and functionality. By securely logging all the suspend/resume and migration operations inside a small trusted computing base, a user can audit the log to check for malicious rollback and constrain the operations on the VMs. The solution considers several practical issues including hardware limitations and minimizing user interaction, and has been implemented on a recent VM protection system.", "title": "" }, { "docid": "39539ad490065e2a81b6c07dd11643e5", "text": "Stock prices are formed based on short and/or long-term commercial and trading activities that reflect different frequencies of trading patterns. However, these patterns are often elusive as they are affected by many uncertain political-economic factors in the real world, such as corporate performances, government policies, and even breaking news circulated across markets. Moreover, time series of stock prices are non-stationary and non-linear, making the prediction of future price trends very challenging. To address them, we propose a novel State Frequency Memory (SFM) recurrent network to capture the multi-frequency trading patterns from past market data to make long and short term predictions over time. Inspired by the Discrete Fourier Transform (DFT), the SFM decomposes the hidden states of memory cells into multiple frequency components, each of which models a particular frequency of latent trading pattern underlying the fluctuation of stock price. Then the future stock prices are predicted as a nonlinear mapping of the combination of these components in an Inverse Fourier Transform (IFT) fashion. Modeling multi-frequency trading patterns can enable more accurate predictions for various time ranges: while a short-term prediction usually depends on high frequency trading patterns, a long-term prediction should focus more on the low frequency trading patterns targeting long-term return. Unfortunately, no existing model in the literature explicitly distinguishes between various frequencies of trading patterns to make dynamic predictions. The experiments on real market data also demonstrate more competitive performance by the SFM as compared with the state-of-the-art methods.", "title": "" }, { "docid": "f9e857f9eac802b5874b583d0fcf32c0", "text": "This paper examines the enormous pressure Chinese students must bear at home and in school in order to obtain high academic achievement. The authors look at students' lives from their own perspective and study the impact of home and school pressures on students' intellectual, psychological, and physical development. Cultural, political, and economic factors are analyzed to provide an explanation of the situation. The paper raises questions as to what the purpose of education is and argues for the importance of balancing educational goals with other aspects of students' lives. 
RÉSUMÉ This article examines the considerable pressures that Chinese students face at home and at school in order to succeed academically. The authors study students' lives in the light of their own points of view, as well as the impact of family and school pressures on their intellectual, psychological, and physical development. Cultural, political, and economic factors are analyzed in order to explain the situation. The article raises questions about the purpose of education and insists on the importance of balancing educational objectives with the other aspects of the lives of", "title": "" }, { "docid": "1bb7c5d71db582329ad8e721fdddb0b3", "text": "The sharing economy is spreading rapidly worldwide in a number of industries and markets. The disruptive nature of this phenomenon has drawn mixed responses ranging from active conflict to adoption and assimilation. Yet, in spite of the growing attention to the sharing economy, we still do not know much about it. With the abundant enthusiasm about the benefits that the sharing economy can unleash and the weekly reminders about its dark side, further examination is required to determine the potential of the sharing economy while mitigating its undesirable side effects. The panel will join the ongoing debate about the sharing economy and contribute to the discourse with insights about how digital technologies are critical in shaping this turbulent ecosystem. Furthermore, we will define an agenda for future research on the sharing economy as it becomes part of the mainstream society as well as part of the IS research", "title": "" } ]
scidocsrr
58ea4d83752356d294ca50c1ff923143
TDNN: A Two-stage Deep Neural Network for Prompt-independent Automated Essay Scoring
[ { "docid": "b1e8f1b40c3a1ca34228358a2e8d8024", "text": "When the training and the test data belong to different domains, the accuracy of an object classifier is significantly reduced. Therefore, several algorithms have been proposed in the last years to diminish the so called domain shift between datasets. However, all available evaluation protocols for domain adaptation describe a closed set recognition task, where both domains, namely source and target, contain exactly the same object classes. In this work, we also explore the field of domain adaptation in open sets, which is a more realistic scenario where only a few categories of interest are shared between source and target data. Therefore, we propose a method that fits in both closed and open set scenarios. The approach learns a mapping from the source to the target domain by jointly solving an assignment problem that labels those target instances that potentially belong to the categories of interest present in the source dataset. A thorough evaluation shows that our approach outperforms the state-of-the-art.", "title": "" }, { "docid": "75b2f12152526a0fbc5648261faca1cc", "text": "Traditional automated essay scoring systems rely on carefully designed features to evaluate and score essays. The performance of such systems is tightly bound to the quality of the underlying features. However, it is laborious to manually design the most informative features for such a system. In this paper, we develop an approach based on recurrent neural networks to learn the relation between an essay and its assigned score, without any feature engineering. We explore several neural network models for the task of automated essay scoring and perform some analysis to get some insights of the models. The results show that our best system, which is based on long short-term memory networks, outperforms a strong baseline by 5.6% in terms of quadratic weighted Kappa, without requiring any feature engineering.", "title": "" }, { "docid": "3bb905351ce1ea2150f37059ed256a90", "text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.", "title": "" }, { "docid": "93f1ee5523f738ab861bcce86d4fc906", "text": "Semantic role labeling (SRL) is one of the basic natural language processing (NLP) problems. 
To date, most successful SRL systems have been built on top of some form of parsing results (Koomen et al., 2005; Palmer et al., 2010; Pradhan et al., 2013), where pre-defined feature templates over the syntactic structure are used. Attempts at building an end-to-end SRL learning system without using parsing have been less successful (Collobert et al., 2011). In this work, we propose to use a deep bi-directional recurrent network as an end-to-end system for SRL. We take only the original text information as input features, without using any syntactic knowledge. The proposed algorithm for semantic role labeling was mainly evaluated on the CoNLL-2005 shared task and achieved an F1 score of 81.07. This result outperforms the previous state-of-the-art system based on the combination of different parsing trees or models. We also obtained the same conclusion with F1 = 81.27 on the CoNLL-2012 shared task. As a result of its simplicity, our model is also computationally efficient, with a parsing speed of 6.7k tokens per second. Our analysis shows that our model is better at handling longer sentences than traditional models, and the latent variables of our model implicitly capture the syntactic structure of a sentence.", "title": "" } ]
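The last passage above trains a deep bi-directional recurrent network that tags each token without syntactic features. The following sketch is only a toy illustration of that general architecture, with invented vocabulary, hidden and label sizes; it is not the authors' model.

# Toy BiLSTM tagger: word indices in, per-token label scores out (sizes are invented).
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=32, hidden=64, num_labels=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_labels)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        h, _ = self.lstm(self.embed(token_ids))
        return self.out(h)  # (batch, seq_len, num_labels)

model = BiLSTMTagger()
scores = model(torch.randint(0, 1000, (2, 7)))  # 2 sentences of 7 tokens
print(scores.shape)  # torch.Size([2, 7, 10])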
[ { "docid": "30999096bc27a495fa15a4e5b4e9980c", "text": "We present a new statistical pattern recognition approach for the problem of left ventricle endocardium tracking in ultrasound data. The problem is formulated as a sequential importance resampling algorithm such that the expected segmentation of the current time step is estimated based on the appearance, shape, and motion models that take into account all previous and current images and previous segmentation contours produced by the method. The new appearance and shape models decouple the affine and nonrigid segmentations of the left ventricle to reduce the running time complexity. The proposed motion model combines the systole and diastole motion patterns and an observation distribution built by a deep neural network. The functionality of our approach is evaluated using a dataset of diseased cases containing 16 sequences and another dataset of normal cases comprised of four sequences, where both sets present long axis views of the left ventricle. Using a training set comprised of diseased and healthy cases, we show that our approach produces more accurate results than current state-of-the-art endocardium tracking methods in two test sequences from healthy subjects. Using three test sequences containing different types of cardiopathies, we show that our method correlates well with interuser statistics produced by four cardiologists.", "title": "" }, { "docid": "69a0426796f46ac387f1f9d831c85e87", "text": "In this paper, a Volterra analysis built on top of a normal harmonic balance simulation is used for a comprehensive analysis of the causes of AM-PM distortion in a LDMOS RF power amplifier (PA). The analysis shows that any nonlinear capacitors cause AM-PM. In addition, varying terminal impedances may pull the matching impedances and cause phase shift. The AM-PM is also affected by the distortion that is mixed down from the second harmonic. As a sample circuit, an internally matched 30-W LDMOS RF PA is used and the results are compared to measured AM-AM, AM-PM and large-signal S11.", "title": "" }, { "docid": "349a5c840daa587aa5d42c6e584e2103", "text": "We propose a class of functional dependencies for graphs, referred to as GFDs. GFDs capture both attribute-value dependencies and topological structures of entities, and subsume conditional functional dependencies (CFDs) as a special case. We show that the satisfiability and implication problems for GFDs are coNP-complete and NP-complete, respectively, no worse than their CFD counterparts. We also show that the validation problem for GFDs is coNP-complete. Despite the intractability, we develop parallel scalable algorithms for catching violations of GFDs in large-scale graphs. Using real-life and synthetic data, we experimentally verify that GFDs provide an effective approach to detecting inconsistencies in knowledge and social graphs.", "title": "" }, { "docid": "278fd51fd028f1a4211e5f618ca3cc99", "text": "Decades ago, discussion of an impending global pandemic of obesity was thought of as heresy. But in the 1970s, diets began to shift towards increased reliance upon processed foods, increased away-from-home food intake, and increased use of edible oils and sugar-sweetened beverages. Reductions in physical activity and increases in sedentary behavior began to be seen as well. 
The negative effects of these changes began to be recognized in the early 1990s, primarily in low- and middle-income populations, but they did not become clearly acknowledged until diabetes, hypertension, and obesity began to dominate the globe. Now, rapid increases in the rates of obesity and overweight are widely documented, from urban and rural areas in the poorest countries of sub-Saharan Africa and South Asia to populations in countries with higher income levels. Concurrent rapid shifts in diet and activity are well documented as well. An array of large-scale programmatic and policy measures are being explored in a few countries; however, few countries are engaged in serious efforts to prevent the serious dietary challenges being faced.", "title": "" }, { "docid": "13313b27f7ead27611d5957394e79a69", "text": "Personality profiling is the task of detecting personality traits of authors based on writing style. Several personality typologies exist, however, the Myers-Briggs Type Indicator (MBTI) is particularly popular in the non-scientific community, and many people use it to analyse their own personality and talk about the results online. Therefore, large amounts of self-assessed data on MBTI are readily available on social-media platforms such as Twitter. We present a novel corpus of tweets annotated with the MBTI personality type and gender of their author for six Western European languages (Dutch, German, French, Italian, Portuguese and Spanish). We outline the corpus creation and annotation, show statistics of the obtained data distributions and present first baselines on Myers-Briggs personality profiling and gender prediction for all six languages.", "title": "" }, { "docid": "599f4afe379a877e324547e09033465d", "text": "Large-scale graph analytics is a central tool in many fields, and exemplifies the size and complexity of Big Data applications. Recent distributed graph processing frameworks utilize the venerable Bulk Synchronous Parallel (BSP) model and promise scalability for large graph analytics. This has been made popular by Google's Pregel, which provides an architecture design for BSP graph processing. Public clouds offer democratized access to medium-sized compute infrastructure with the promise of rapid provisioning with no capital investment. Evaluating BSP graph frameworks on cloud platforms with their unique constraints is less explored. Here, we present optimizations and analyses for computationally complex graph analysis algorithms such as betweenness-centrality and all-pairs shortest paths on a native BSP framework we have developed for the Microsoft Azure Cloud, modeled on the Pregel graph processing model. We propose novel heuristics for scheduling graph vertex processing in swaths to maximize resource utilization on cloud VMs that lead to a 3.5x performance improvement. We explore the effects of graph partitioning in the context of BSP, and show that even a well partitioned graph may not lead to performance improvements due to BSP's barrier synchronization. We end with a discussion on leveraging cloud elasticity for dynamically scaling the number of BSP workers to achieve a better performance than a static deployment, and at a significantly lower cost.", "title": "" }, { "docid": "91affcd02ba981189eeaf25d94657276", "text": "In this paper, we develop a 2D and 3D segmentation pipelines for fully automated cardiac MR image segmentation using Deep Convolutional Neural Networks (CNN). 
Our models are trained end-to-end from scratch using the ACD Challenge 2017 dataset comprising 100 studies, each containing cardiac MR images in the End Diastole and End Systole phases. We show that both our segmentation models achieve near state-of-the-art performance scores in terms of distance metrics and have convincing accuracy in terms of clinical parameters. A comparative analysis is provided by introducing a novel dice loss function and its combination with cross entropy loss. By exploring different network structures and through comprehensive experiments, we discuss several key insights to obtain optimal model performance, which is also central to the theme of this challenge.", "title": "" }, { "docid": "0ccbc904dd7623c9ef537e41ac888dd0", "text": "Big Data architectures allow heterogeneous data from multiple sources to be flexibly stored and processed in its original format. The structure of those data, commonly supplied by means of REST APIs, is continuously evolving, forcing data analysts who use it to adapt their analytical processes after each release. This gets more challenging when aiming to perform an integrated or historical analysis of multiple sources. To cope with such complexity, in this paper we present the Big Data Integration ontology, the core construct for a data governance protocol that systematically annotates and integrates data from multiple sources in its original format. To cope with syntactic evolution in the sources, we present an algorithm that semi-automatically adapts the ontology upon new releases. A functional evaluation on real-world APIs is performed in order to validate our approach.", "title": "" }, { "docid": "c139f6b162c5dd9a849a28ece14ea097", "text": "Digital documents are vulnerable to being copied. Most existing copy detection prototypes employ an exhaustive sentence-based comparison method, comparing a potentially plagiarized document against a repository of legal or original documents to identify plagiarism. This approach is not scalable due to the potentially large number of original documents and the large number of sentences in each document. Furthermore, the security level of existing mechanisms is quite weak; a plagiarized document could simply bypass the detection mechanisms by performing a minor modification on each sentence. In this paper, we propose a copy detection mechanism that will eliminate unnecessary comparisons. This is based on the observation that comparisons between two documents addressing different subjects are not necessary. We describe the design and implementation of our experimental prototype called CHECK. The results of some exploratory experiments are illustrated and the security level of our mechanism is discussed.", "title": "" }, { "docid": "127dab6b7a8a9ec31e6651736660f1d1", "text": "Key-value stores such as LevelDB and RocksDB offer excellent write throughput, but suffer high write amplification. The write amplification problem is due to the Log-Structured Merge Trees data structure that underlies these key-value stores. To remedy this problem, this paper presents a novel data structure that is inspired by Skip Lists, termed Fragmented Log-Structured Merge Trees (FLSM). FLSM introduces the notion of guards to organize logs, and avoids rewriting data in the same level. We build PebblesDB, a high-performance key-value store, by modifying HyperLevelDB to use the FLSM data structure. 
We evaluate PebblesDB using micro-benchmarks and show that for write-intensive workloads, PebblesDB reduces write amplification by 2.4-3x compared to RocksDB, while increasing write throughput by 6.7x. We modify two widely-used NoSQL stores, MongoDB and HyperDex, to use PebblesDB as their underlying storage engine. Evaluating these applications using the YCSB benchmark shows that throughput is increased by 18-105% when using PebblesDB (compared to their default storage engines) while write IO is decreased by 35-55%.", "title": "" }, { "docid": "68abef37fe49bb675d7a2ce22f7bf3a7", "text": "Objective: The case for exercise and health has primarily been made on its impact on diseases such coronary heart disease, obesity and diabetes. However, there is a very high cost attributed to mental disorders and illness and in the last 15 years there has been increasing research into the role of exercise a) in the treatment of mental health, and b) in improving mental well-being in the general population. There are now several hundred studies and over 30 narrative or meta-analytic reviews of research in this field. These have summarised the potential for exercise as a therapy for clinical or subclinical depression or anxiety, and the use of physical activity as a means of upgrading life quality through enhanced self-esteem, improved mood states, reduced state and trait anxiety, resilience to stress, or improved sleep. The purpose of this paper is to a) provide an updated view of this literature within the context of public health promotion and b) investigate evidence for physical activity and dietary interactions affecting mental well-being. Design: Narrative review and summary. Conclusions: Sufficient evidence now exists for the effectiveness of exercise in the treatment of clinical depression. Additionally, exercise has a moderate reducing effect on state and trait anxiety and can improve physical self-perceptions and in some cases global self-esteem. Also there is now good evidence that aerobic and resistance exercise enhances mood states, and weaker evidence that exercise can improve cognitive function (primarily assessed by reaction time) in older adults. Conversely, there is little evidence to suggest that exercise addiction is identifiable in no more than a very small percentage of exercisers. Together, this body of research suggests that moderate regular exercise should be considered as a viable means of treating depression and anxiety and improving mental well-being in the general public.", "title": "" }, { "docid": "20c309bbc6eea75fa9b57ee98b73cbc1", "text": "Chua proposed an Elementary Circuit Element Quadrangle including the three classic elements (resistor, inductor, and capacitor) and his formulated, named memristor as the fourth element. Based on an observation that this quadrangle may not be symmetric, I proposed an Elementary Circuit Element Triangle, in which memristor as well as mem-capacitor and mem-inductor lead three basic element classes, respectively. An intrinsic mathematical relationship is found to support this new classification. It is believed that this triangle is concise, mathematically sound and aesthetically beautiful, compared with Chua's quadrangle. The importance of finding a correct circuit element table is similar to that of Mendeleev's periodic table of chemical elements in chemistry and the table of 61 elementary particles in physics, in terms of categorizing the existing elements and predicting new elements. 
A correct circuit element table would also request to rewrite the 20th century textbooks.", "title": "" }, { "docid": "bdeed19259194f7868cf82429d042fab", "text": "It is critical in many applications to understand what features are important for a model, and why individual predictions were made. For tree ensemble methods these questions are usually answered by attributing importance values to input features, either globally or for a single prediction. Here we show that current feature attribution methods are inconsistent, which means changing the model to rely more on a given feature can actually decrease the importance assigned to that feature. To address this problem we develop fast exact solutions for SHAP (SHapley Additive exPlanation) values, which were recently shown to be the unique additive feature attribution method based on conditional expectations that is both consistent and locally accurate. We integrate these improvements into the latest version of XGBoost, demonstrate the inconsistencies of current methods, and show how using SHAP values results in significantly improved supervised clustering performance. Feature importance values are a key part of understanding widely used models such as gradient boosting trees and random forests. We believe our work improves on the state-of-the-art in important ways, and so impacts any current user of tree ensemble methods.", "title": "" }, { "docid": "1d7b7ea9f0cc284f447c11902bad6685", "text": "In the last few years the efficiency of secure multi-party computation (MPC) increased in several orders of magnitudes. However, this alone might not be enough if we want MPC protocols to be used in practice. A crucial property that is needed in many applications is that everyone can check that a given (secure) computation was performed correctly – even in the extreme case where all the parties involved in the computation are corrupted, and even if the party who wants to verify the result was not participating. This is especially relevant in the clients-servers setting, where many clients provide input to a secure computation performed by a few servers. An obvious example of this is electronic voting, but also in many types of auctions one may want independent verification of the result. Traditionally, this is achieved by using non-interactive zero-knowledge proofs during the computation. A recent trend in MPC protocols is to have a more expensive preprocessing phase followed by a very efficient online phase, e.g., the recent so-called SPDZ protocol by Damg̊ard et al. Applications such as voting and some auctions are perfect use-case for these protocols, as the parties usually know well in advance when the computation will take place, and using those protocols allows us to use only cheap information-theoretic primitives in the actual computation. Unfortunately no protocol of the SPDZ type supports an audit phase. In this paper, we show how to achieve efficient MPC with a public audit. We formalize the concept of publicly auditable secure computation and provide an enhanced version of the SPDZ protocol where, even if all the servers are corrupted, anyone with access to the transcript of the protocol can check that the output is indeed correct. Most importantly, we do so without significantly compromising the performance of SPDZ i.e. 
our online phase has complexity approximately twice that of SPDZ.", "title": "" }, { "docid": "75c61230b24e53bd95f526c1ff74b621", "text": "Input-series-output-parallel (ISOP) connected DC-DC converters enable low voltage rating switches to be used in high voltage input applications. In this paper, a DSP is adopted to generate digital phase-shifted PWM signals and to fulfill the closed-loop control function for ISOP connected two full-bridge DC-DC converters. Moreover, a stable output current sharing control strategy is proposed for the system, with which equal sharing of the input voltage and the load current can be achieved without any input voltage control loops. Based on small signal analysis with the state space average method, a loop gain design with the proposed scheme is made. Compared with the conventional IVS scheme, the proposed strategy leads to simplification of the output voltage regulator design and better static and dynamic responses. The effectiveness of the proposed control strategy is verified by the simulation and experimental results of an ISOP system made up of two full-bridge DC-DC converters.", "title": "" }, { "docid": "bf4c0356b53f13fc2327dcf7c3377a8f", "text": "This paper presents a new corpus and a robust deep learning architecture for a task in reading comprehension, passage completion, on multiparty dialog. Given a dialog in text and a passage containing factual descriptions about the dialog where mentions of the characters are replaced by blanks, the task is to fill the blanks with the most appropriate character names that reflect the contexts in the dialog. Since there is no dataset that challenges the task of passage completion in this genre, we create a corpus by selecting transcripts from a TV show that comprise 1,681 dialogs, generating passages for each dialog through crowdsourcing, and annotating mentions of characters in both the dialog and the passages. Given this dataset, we build a deep neural model that integrates rich feature extraction from convolutional neural networks into sequence modeling in recurrent neural networks, optimized by utterance and dialog level attentions. Our model outperforms the previous state-of-the-art model on this task in a different genre using bidirectional LSTM, showing a 13.0+% improvement for longer dialogs. Our analysis shows the effectiveness of the attention mechanisms and suggests a direction to machine comprehension on multiparty dialog.", "title": "" }, { "docid": "055faaaa14959a204ca19a4962f6e822", "text": "Data mining (also known as knowledge discovery from databases) is the process of extraction of hidden, previously unknown and potentially useful information from databases. The outcome of the extracted data can be analyzed for the future planning and development perspectives. In this paper, we have made an attempt to demonstrate how one can extract the local (district) level census, socio-economic and population related other data for knowledge discovery and their analysis using the powerful data mining tool Weka. I. DATA MINING Data mining has been defined as the nontrivial extraction of implicit, previously unknown, and potentially useful information from databases/data warehouses. It uses machine learning, statistical and visualization techniques to discover and present knowledge in a form, which is easily comprehensive to humans [1]. 
Data mining, the extraction of hidden predictive information from large databases, is a powerful new technology with great potential to help users focus on the most important information in their data warehouses. Data mining tools predict future trends and behaviors, allowing businesses to make proactive, knowledge-driven decisions. The automated, prospective analyses offered by data mining move beyond the analyses of past events provided by retrospective tools typical of decision support systems. Data mining tools can answer business questions that traditionally were too time consuming to resolve. They scour databases for hidden patterns, finding predictive information that experts may miss because it lies outside their expectations. Data mining techniques can be implemented rapidly on existing software and hardware platforms to enhance the value of existing information resources, and can be integrated with new products and systems as they are brought on-line [2]. The data mining steps in the knowledge discovery process are as follows: 1. Data cleaning - the removal of noise and inconsistent data. 2. Data integration - the combination of multiple sources of data. 3. Data selection - the data relevant for analysis is retrieved from the database. 4. Data transformation - the consolidation and transformation of data into forms appropriate for mining. 5. Data mining - the use of intelligent methods to extract patterns from data. 6. Pattern evaluation - identification of patterns that are interesting. 7. Knowledge presentation - visualization and knowledge representation techniques are used to present the extracted or mined knowledge to the end user [3]. The actual data mining task is the automatic or semi-automatic analysis of large quantities of data to extract previously unknown interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection) and dependencies (association rule mining). This usually involves using database techniques such as spatial indices. These patterns can then be seen as a kind of summary of the input data, and may be used in further analysis or, for example, in machine learning and predictive analytics. For example, the data mining step might identify multiple groups in the data, which can then be used to obtain more accurate prediction results by a decision support system. Neither the data collection, data preparation, nor result interpretation and reporting are part of the data mining step, but they do belong to the overall KDD process as additional steps [7][8]. II. WEKA: Weka (Waikato Environment for Knowledge Analysis) is a popular suite of machine learning software written in Java, developed at the University of Waikato, New Zealand. Weka is free software available under the GNU General Public License. The Weka workbench contains a collection of visualization tools and algorithms for data analysis and predictive modeling, together with graphical user interfaces for easy access to this functionality [4]. Weka is a collection of machine learning algorithms for solving real-world data mining problems. It is written in Java and runs on almost any platform. The algorithms can either be applied directly to a dataset or called from your own Java code [5]. 
The original non-Java version of Weka was a TCL/TK front-end to (mostly third-party) modeling algorithms implemented in other programming languages, plus data preprocessing utilities in C, and a Makefile-based system for running machine learning experiments. This original version was primarily designed as a tool for analyzing data from agricultural domains, but the more recent fully Java-based version (Weka 3), for which development started in 1997, is now used in many different application areas, in particular for educational purposes and research. Advantages of Weka include: I. Free availability under the GNU General Public License; II. Portability, since it is fully implemented in the Java programming language and thus runs on almost any modern computing platform; III. A comprehensive collection of data preprocessing and modeling techniques; IV. Ease of use due to its graphical user interfaces. Weka supports several standard data mining tasks, more specifically, data preprocessing, clustering, classification, regression, visualization, and feature selection [10]. All of Weka's techniques are predicated on the assumption that the data is available as a single flat file or relation, where each data point is described by a fixed number of attributes (normally, numeric or nominal attributes, but some other attribute types are also supported). Weka provides access to SQL databases using Java Database Connectivity and can process the result returned by a database query. It is not capable of multi-relational data mining, but there is separate software for converting a collection of linked database tables into a single table that is suitable for processing using Weka. Another important area that is currently not covered by the algorithms included in the Weka distribution is sequence modeling [4]. III. DATA PROCESSING, METHODOLOGY AND RESULTS The primary available data, such as the census (2001), socio-economic data, and some basic information about Latur district, were collected from the National Informatics Centre (NIC), Latur, as required to design and develop the database for Latur district of Maharashtra state, India. The database is designed in the MS-Access 2003 database management system to store the collected data. The data is formed according to the required format and structures. Further, the data is converted to ARFF (Attribute Relation File Format) to be processed in WEKA. An ARFF file is an ASCII text file that describes a list of instances sharing a set of attributes. ARFF files were developed by the Machine Learning Project at the Department of Computer Science of The University of Waikato for use with the Weka machine learning software. This document describes the version of ARFF used with Weka versions 3.2 to 3.3; this is an extension of the ARFF format as described in the data mining book written by Ian H. Witten and Eibe Frank [6][9]. After processing the ARFF file in WEKA, the list of all attributes, statistics and other parameters can be utilized as shown in Figure 1. Fig.1 Processed ARFF file in WEKA. In the file shown above, data for 729 villages are processed with 25 different attributes, such as population, health, literacy, village location, etc. 
Among all these, a few are preprocessed attributes generated from the census data, such as percent_male_literacy, total_percent_literacy, total_percent_illiteracy, sex_ratio, etc. The processed data in Weka can be analyzed using different data mining techniques such as classification, clustering, association rule mining, and visualization algorithms. Figure 2 shows a few of the processed attributes visualized in a two-dimensional graphical representation. Fig. 2 Graphical visualization of processed attributes. Information can be extracted with respect to two or more associated relations of the data set. In this process, we have made an attempt to visualize the impact of male and female literacy on gender inequality. The literacy-related and population data are processed to compute the percentage of male and female literacy. Accordingly, we have computed the sex ratio attribute from the given male and female population data. The new attributes male_percent_literacy, female_percent_literacy and sex_ratio are compared with each other to extract the impact of literacy on gender inequality. Figures 3 and 4 show the extracted results of sex ratio values with male and female literacy. Fig. 3 Female literacy and Sex ratio values. Fig. 4 Male literacy and Sex ratio values. On the Y-axis, the female percent literacy values are shown in Figure 3, and the male percent literacy values are shown in Figure 4. Considering both results, female percent literacy is lower than male percent literacy in the district, and the sex ratio values are higher for male percent literacy than for female percent literacy. The results clearly show that literacy is very important in managing the gender inequality of any region. ACKNOWLEDGEMENT: The authors are grateful to the department of NIC, Latur for providing all the basic data and to WEKA for providing such a strong tool to extract and analyze knowledge from databases. CONCLUSION Knowledge extraction from database is becom", "title": "" }, { "docid": "efd8a99b6fac8ca416f4eb6d825a611b", "text": "A variety of theoretical frameworks predict the resemblance of behaviors between two people engaged in communication, in the form of coordination, mimicry, or alignment. However, little is known about the time course of the behavior matching, even though there is evidence that dyads synchronize oscillatory motions (e.g., postural sway). This study examined the temporal structure of nonoscillatory actions (language, facial, and gestural behaviors) produced during a route communication task. The focus was the temporal relationship between matching behaviors in the interlocutors (e.g., facial behavior in one interlocutor vs. the same facial behavior in the other interlocutor). Cross-recurrence analysis revealed that within each category tested (language, facial, gestural), interlocutors synchronized matching behaviors, at temporal lags short enough to provide imitation of one interlocutor by the other, from one conversational turn to the next. Both social and cognitive variables predicted the degree of temporal organization. 
These findings suggest that the temporal structure of matching behaviors provides low-level and low-cost resources for human interaction.", "title": "" }, { "docid": "ffe218d01142769cf794c1b1a4e7969f", "text": "Most neurons in the mammalian CNS encode and transmit information via action potentials. Knowledge of where these electrical events are initiated and how they propagate within neurons is therefore fundamental to an understanding of neuronal function. While work from the 1950s suggested that action potentials are initiated in the axon, many subsequent investigations have suggested that action potentials can also be initiated in the dendrites. Recently, experiments using simultaneous patch-pipette recordings from different locations on the same neuron have been used to address this issue directly. These studies show that the site of action potential initiation is in the axon, even when synaptic activation is powerful enough to elicit dendritic electrogenesis. Furthermore, these and other studies also show that following initiation, action potentials actively backpropagate into the dendrites of many neuronal types, providing a retrograde signal of neuronal output to the dendritic tree.", "title": "" }, { "docid": "b03df3dbdac7279e4fe73ef5388b570b", "text": "In this paper, we formulate the fuzzy perceptive model for discounted Markov decision processes in which the perception for transition probabilities is described by fuzzy sets. The optimal expected reward, called a fuzzy perceptive value, is characterized and calculated by a new fuzzy relation. As a numerical example, a machine maintenance problem is considered.", "title": "" } ]
scidocsrr
d13b9b82be0cc86e59f4579988430fc0
Pairs trading strategy optimization using the reinforcement learning method: a cointegration approach
[ { "docid": "f72f55da6ec2fdf9d0902648571fd9fc", "text": "Recently, numerous investigations for stock price prediction and portfolio management using machine learning have been trying to develop efficient mechanical trading systems. But these systems have a limitation in that they are mainly based on the supervised learning which is not so adequate for learning problems with long-term goals and delayed rewards. This paper proposes a method of applying reinforcement learning, suitable for modeling and learning various kinds of interactions in real situations, to the problem of stock price prediction. The stock price prediction problem is considered as Markov process which can be optimized by reinforcement learning based algorithm. TD(0), a reinforcement learning algorithm which learns only from experiences, is adopted and function approximation by artificial neural network is performed to learn the values of states each of which corresponds to a stock price trend at a given time. An experimental result based on the Korean stock market is presented to evaluate the performance of the proposed method.", "title": "" }, { "docid": "51f2ba8b460be1c9902fb265b2632232", "text": "Pairs trading is an important and challenging research area in computational finance, in which pairs of stocks are bought and sold in pair combinations for arbitrage opportunities. Traditional methods that solve this set of problems mostly rely on statistical methods such as regression. In contrast to the statistical approaches, recent advances in computational intelligence (CI) are leading to promising opportunities for solving problems in the financial applications more effectively. In this paper, we present a novel methodology for pairs trading using genetic algorithms (GA). Our results showed that the GA-based models are able to significantly outperform the benchmark and our proposed method is capable of generating robust models to tackle the dynamic characteristics in the financial application studied. Based upon the promising results obtained, we expect this GA-based method to advance the research in computational intelligence for finance and provide an effective solution to pairs trading for investment in practice.", "title": "" }, { "docid": "427796f5c37e41363c1664b47596eacf", "text": "A trading and portfolio management system called QSR is proposed. It uses Q-learning and Sharpe ratio maximization algorithm. We use absolute profit and relative risk-adjusted profit as performance function to train the system respectively, and employ a committee of two networks to do the testing. The new proposed algorithm makes use of the advantages of both parts and can be used in a more general case. We demonstrate with experimental results that the proposed approach generates appreciable profits from trading in the foreign exchange markets.", "title": "" } ]
[ { "docid": "30fda7dabb70dffbf297096671802c93", "text": "Much attention has recently been given to a printing method because they are easily designable, have a low cost, and can be mass produced. Numerous electronic devices are fabricated using printing methods because of these advantages. In paper mechatronics, attempts have been made to fabricate robots by printing on paper substrates. The robots are given structures through self-folding and functions using printed actuators. We developed a new system and device to fabricate more sophisticated printed robots. First, we successfully fabricated complex self-folding structures by applying an automatic cutting. Second, a rapidly created and low-voltage electrothermal actuator was developed using an inkjet printed circuit. Finally, a printed robot was fabricated by combining two techniques and two types of paper; a structure design paper and a circuit design paper. Gripper and conveyor robots were fabricated, and their functions were verified. These works demonstrate the possibility of paper mechatronics for rapid and low-cost prototyping as well as of printed robots.", "title": "" }, { "docid": "58c488555240ded980033111a9657be4", "text": "BACKGROUND\nThe management of opioid-induced constipation (OIC) is often complicated by the fact that clinical measures of constipation do not always correlate with patient perception. As the discomfort associated with OIC can lead to poor compliance with the opioid treatment, a shift in focus towards patient assessment is often advocated.\n\n\nSCOPE\nThe Bowel Function Index * (BFI) is a new patient-assessment scale that has been developed and validated specifically for OIC. It is a physician-administered, easy-to-use scale made up of three items (ease of defecation, feeling of incomplete bowel evacuation, and personal judgement of constipation). An extensive analysis has been performed in order to validate the BFI as reliable, stable, clinically valid, and responsive to change in patients with OIC, with a 12-point change in score constituting a clinically relevant change in constipation.\n\n\nFINDINGS\nThe results of the validation analysis were based on major clinical trials and have been further supported by data from a large open-label study and a pharmaco-epidemiological study, in which the BFI was used effectively to assess OIC in a large population of patients treated with opioids. Although other patient self-report scales exist, the BFI offers several unique advantages. First, by being physician-administered, the BFI minimizes reading and comprehension difficulties; second, by offering general and open-ended questions which capture patient perspective, the BFI is likely to detect most patients suffering from OIC; third, by being short and easy-to-use, it places little burden on the patient, thereby increasing the likelihood of gathering accurate information.\n\n\nCONCLUSION\nAltogether, the available data suggest that the BFI will be useful in clinical trials and in daily practice.", "title": "" }, { "docid": "31a2e6948a816a053d62e3748134cdc2", "text": "In model-based reinforcement learning, generative and temporal models of environments can be leveraged to boost agent performance, either by tuning the agent’s representations during training or via use as part of an explicit planning mechanism. However, their application in practice has been limited to simplistic environments, due to the difficulty of training such models in larger, potentially partially-observed and 3D environments. 
In this work we introduce a novel action-conditioned generative model of such challenging environments. The model features a non-parametric spatial memory system in which we store learned, disentangled representations of the environment. Low-dimensional spatial updates are computed using a state-space model that makes use of knowledge on the prior dynamics of the moving agent, and high-dimensional visual observations are modelled with a Variational Auto-Encoder. The result is a scalable architecture capable of performing coherent predictions over hundreds of time steps across a range of partially observed 2D and 3D environments.", "title": "" }, { "docid": "ba7701a94880b59bbbd49fbfaca4b8c3", "text": "Many rural roads lack sharp, smoothly curving edges and a homogeneous surface appearance, hampering traditional vision-based road-following methods. However, they often have strong texture cues parallel to the road direction in the form of ruts and tracks left by other vehicles. This paper describes an unsupervised algorithm for following ill-structured roads in which dominant texture orientations computed with Gabor wavelet filters vote for a consensus road vanishing point location. The technique is first described for estimating the direction of straight-road segments, then extended to curved and undulating roads by tracking the vanishing point indicated by a differential “strip” of voters moving up toward the nominal vanishing line. Finally, the vanishing point is used to constrain a search for the road boundaries by maximizing textureand color-based region discriminant functions. Results are shown for a variety of road scenes including gravel roads, dirt trails, and highways.", "title": "" }, { "docid": "10d380b25a03c608c11fe5dde545f4b4", "text": "The increasing complexity and diversity of technical products plus the massive amount of product-related data overwhelms humans dealing with them at all stages of the life-cycle. We present a novel architecture for building smart products that are able to interact with humans in a natural and proactive way, and assist and guide them in performing their tasks. Further, we show how communication capabilities of smart products are used to account for the limited resources of individual products by leveraging resources provided by the environment or other smart products for storage and natural interaction.", "title": "" }, { "docid": "dffb89c39f11934567f98a31a0ef157c", "text": "We present a new method for semantic role labeling in which arguments and semantic roles are jointly embedded in a shared vector space for a given predicate. These embeddings belong to a neural network, whose output represents the potential functions of a graphical model designed for the SRL task. We consider both local and structured learning methods and obtain strong results on standard PropBank and FrameNet corpora with a straightforward product-of-experts model. We further show how the model can learn jointly from PropBank and FrameNet annotations to obtain additional improvements on the smaller FrameNet dataset.", "title": "" }, { "docid": "97ba22fa685384e9dfd0402798fe7019", "text": "We consider the problems of i) using public-key encryption to enforce dynamic access control on clouds; and ii) key rotation of data stored on clouds. 
Historically, proxy re-encryption, ciphertext delegation, and related technologies have been advocated as tools that allow for revocation and the ability to cryptographically enforce dynamic access control on the cloud, and more recently they have suggested for key rotation of data stored on clouds. Current literature frequently assumes that data is encrypted directly with public-key encryption primitives. However, for efficiency reasons systems would need to deploy with hybrid encryption. Unfortunately, we show that if hybrid encryption is used, then schemes are susceptible to a key-scraping attack. Given a proxy re-encryption or delegation primitive, we show how to construct a new hybrid scheme that is resistant to this attack and highly efficient. The scheme only requires the modification of a small fraction of the bits of the original ciphertext. The number of modifications scales linearly with the security parameter and logarithmically with the file length: it does not require the entire symmetric-key ciphertext to be re-encrypted! Beyond the construction, we introduce new security definitions for the problem at hand, prove our construction secure, discuss use cases, and provide quantitative data showing its practical benefits and efficiency. We show the construction extends to identity-based proxy re-encryption and revocable-storage attribute-based encryption, and thus that the construction is robust, supporting most primitives of interest.", "title": "" }, { "docid": "22ab8eb2b8eaafb2ee72ea0ed7148ca4", "text": "As travel is taking more significant part in our life, route recommendation service becomes a big business and attracts many major players in IT industry. Given a pair of user-specified origin and destination, a route recommendation service aims to provide users with the routes of best travelling experience according to criteria, such as travelling distance, travelling time, traffic condition, etc. However, previous research shows that even the routes recommended by the big-thumb service providers can deviate significantly from the routes travelled by experienced drivers. It means travellers' preferences on route selection are influenced by many latent and dynamic factors that are hard to model exactly with pre-defined formulas. In this work we approach this challenging problem with a very different perspective- leveraging crowds' knowledge to improve the recommendation quality. In this light, CrowdPlanner - a novel crowd-based route recommendation system has been developed, which requests human workers to evaluate candidate routes recommended by different sources and methods, and determine the best route based on their feedbacks. In this paper, we particularly focus on two important issues that affect system performance significantly: (1) how to efficiently generate tasks which are simple to answer but possess sufficient information to derive user-preferred routes; and (2) how to quickly identify a set of appropriate domain experts to answer the questions timely and accurately. Specifically, the task generation component in our system generates a series of informative and concise questions with optimized ordering for a given candidate route set so that workers feel comfortable and easy to answer. In addition, the worker selection component utilizes a set of selection criteria and an efficient algorithm to find the most eligible workers to answer the questions with high accuracy. 
A prototype system has been deployed to many voluntary mobile clients and extensive tests on real-scenario queries have shown the superiority of CrowdPlanner in comparison with the results given by map services and popular route mining algorithms.", "title": "" }, { "docid": "8fa135e5d01ba2480dea4621ceb1e9f4", "text": "With the advent of Grid and application technologies, scientists and engineers are building more and more complex applications to manage and process large data sets, and execute scientific experiments on distributed resources. Such application scenarios require means for composing and executing complex workflows. Therefore, many efforts have been made towards the development of workflow management systems for Grid computing. In this paper, we propose a taxonomy that characterizes and classifies various approaches for building and executing workflows on Grids. The taxonomy not only highlights the design and engineering similarities and differences of state-of-the-art in Grid workflow systems, but also identifies the areas that need further research.", "title": "" }, { "docid": "493893b0eb606477b3d0a5b10ddf9ade", "text": "While new therapies for chronic hepatitis C virus infection have delivered remarkable cure rates, curative therapies for chronic hepatitis B virus (HBV) infection remain a distant goal. Although current direct antiviral therapies are very efficient in controlling viral replication and limiting the progression to cirrhosis, these treatments require lifelong administration due to the frequent viral rebound upon treatment cessation, and immune modulation with interferon is only effective in a subgroup of patients. Specific immunotherapies can offer the possibility of eliminating or at least stably maintaining low levels of HBV replication under the control of a functional host antiviral response. Here, we review the development of immune cell therapy for HBV, highlighting the potential antiviral efficiency and potential toxicities in different groups of chronically infected HBV patients. We also discuss the chronic hepatitis B patient populations that best benefit from therapeutic immune interventions.", "title": "" }, { "docid": "11ecb3df219152d33020ba1c4f8848bb", "text": "Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, particularly for the control and management planes, thus requiring for every new need a new protocol built from scratch. This led to an unwieldy ossified Internet architecture resistant to any attempts at formal verification and to an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean slate Internet design-in particular, the software defined networking (SDN) paradigm-offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence in interest of applying formal methods to specification, verification, and synthesis of networking protocols and applications. 
In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods and present a survey of its applications to networking.", "title": "" }, { "docid": "15ad5044900511277e0cd602b0c07c5e", "text": "Intentional facial expression of emotion is critical to healthy social interactions. Patients with neurodegenerative disease, particularly those with right temporal or prefrontal atrophy, show dramatic socioemotional impairment. This was an exploratory study examining the neural and behavioral correlates of intentional facial expression of emotion in neurodegenerative disease patients and healthy controls. One hundred and thirty three participants (45 Alzheimer's disease, 16 behavioral variant frontotemporal dementia, 8 non-fluent primary progressive aphasia, 10 progressive supranuclear palsy, 11 right-temporal frontotemporal dementia, 9 semantic variant primary progressive aphasia patients and 34 healthy controls) were video recorded while imitating static images of emotional faces and producing emotional expressions based on verbal command; the accuracy of their expression was rated by blinded raters. Participants also underwent face-to-face socioemotional testing and informants described participants' typical socioemotional behavior. Patients' performance on emotion expression tasks was correlated with gray matter volume using voxel-based morphometry (VBM) across the entire sample. We found that intentional emotional imitation scores were related to fundamental socioemotional deficits; patients with known socioemotional deficits performed worse than controls on intentional emotion imitation; and intentional emotional expression predicted caregiver ratings of empathy and interpersonal warmth. Whole brain VBMs revealed a rightward cortical atrophy pattern homologous to the left lateralized speech production network was associated with intentional emotional imitation deficits. Results point to a possible neural mechanisms underlying complex socioemotional communication deficits in neurodegenerative disease patients.", "title": "" }, { "docid": "eedcff8c2a499e644d1343b353b2a1b9", "text": "We consider the problem of finding related tables in a large corpus of heterogenous tables. Detecting related tables provides users a powerful tool for enhancing their tables with additional data and enables effective reuse of available public data. Our first contribution is a framework that captures several types of relatedness, including tables that are candidates for joins and tables that are candidates for union. Our second contribution is a set of algorithms for detecting related tables that can be either unioned or joined. We describe a set of experiments that demonstrate that our algorithms produce highly related tables. We also show that we can often improve the results of table search by pulling up tables that are ranked much lower based on their relatedness to top-ranked tables. Finally, we describe how to scale up our algorithms and show the results of running it on a corpus of over a million tables extracted from Wikipedia.", "title": "" }, { "docid": "382ac4d3ba3024d0c760cff1eef505c3", "text": "We seek to close the gap between software engineering (SE) and human-computer interaction (HCI) by indicating interdisciplinary interfaces throughout the different phases of SE and HCI lifecycles. As agile representatives of SE, Extreme Programming (XP) and Agile Modeling (AM) contribute helpful principles and practices for a common engineering approach. 
We present a cross-discipline user interface design lifecycle that integrates SE and HCI under the umbrella of agile development. Melting IT budgets, pressure of time and the demand to build better software in less time must be supported by traveling as light as possible. We did, therefore, choose not just to mediate both disciplines. Following our surveys, a rather radical approach best fits the demands of engineering organizations.", "title": "" }, { "docid": "4592c8f5758ccf20430dbec02644c931", "text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.", "title": "" }, { "docid": "5eb9e759ec8fc9ad63024130f753d136", "text": "A 3-10 GHz broadband CMOS T/R switch for ultra-wideband (UWB) transceiver is presented. The broadband CMOS T/R switch is fabricated based on the 0.18 μm 1P6M standard CMOS process. On-chip measurement of the CMOS T/R switch is performed. The insertion loss of the proposed CMOS T/R Switch is about 3.1±1.3 dB. The return losses at both input and output terminals are higher than 14 dB. It is also characterized with 25-34 dB isolation and 18-20 dBm input P1dB. The broadband CMOS T/R switch shows highly linear phase and group delay of 20±10 ps from 10 MHz to 15 GHz. It can be easily integrated with other CMOS RFICs to form on-chip transceivers for various UWB applications.", "title": "" }, { "docid": "71a4399f8ccbeee4dced4d2eba3cf9ff", "text": "Generating text from structured data is important for various tasks such as question answering and dialog systems. We show that in at least one domain, without any supervision and only based on unlabeled text, we are able to build a Natural Language Generation (NLG) system with higher performance than supervised approaches. In our approach, we interpret the structured data as a corrupt representation of the desired output and use a denoising auto-encoder to reconstruct the sentence. We show how to introduce noise into training examples that do not contain structured data, and that the resulting denoising auto-encoder generalizes to generate correct sentences when given structured data.", "title": "" }, { "docid": "081b15c3dda7da72487f5a6e96e98862", "text": "The CEDAR real-time address block location system, which determines candidates for the location of the destination address from a scanned mail piece image, is described. For each candidate destination address block (DAB), the address block location (ABL) system determines the line segmentation, global orientation, block skew, an indication of whether the address appears to be handwritten or machine printed, and a value indicating the degree of confidence that the block actually contains the destination address.
With 20-MHz Sparc processors, the average time per mail piece for the combined hardware and software system components is 0.210 seconds. The system located 89.0% of the addresses as the top choice. Recent developments in the system include the use of a top-down segmentation tool, address syntax analysis using only connected component data, and improvements to the segmentation refinement routines. This has increased top choice performance to 91.4%.", "title": "" }, { "docid": "da36aa77b26e5966bdb271da19bcace3", "text": "We present Brian, a new clock driven simulator for spiking neural networks which is available on almost all platforms. Brian is easy to learn and use, highly flexible and easily extensible. The Brian package itself and simulations using it are all written in the Python programming language, which is very well adapted to these goals. Python is an easy, concise and highly developed language with many advanced features and development tools, excellent documentation and a large community of users providing support and extension packages. Brian allows you to write very concise, natural and readable code for simulations, and makes it quick and efficient to play with these models (for example, changing the differential equations doesn't require a recompile of the code). Figure 1 shows an example of a complete network implemented in Brian, a randomly connected network of integrate and fire neurons with exponential inhibitory and excitatory currents (the CUBA network from [1]). Defining the model, running", "title": "" }, { "docid": "10a6bccb77b6b94149c54c9e343ceb6c", "text": "Clone detectors find similar code fragments (i.e., instances of code clones) and report large numbers of them for industrial systems. To maintain or manage code clones, developers often have to investigate differences of multiple cloned code fragments. However, existing program differencing techniques compare only two code fragments at a time. Developers then have to manually combine several pairwise differencing results. In this paper, we present an approach to automatically detecting differences across multiple clone instances. We have implemented our approach as an Eclipse plugin and evaluated its accuracy with three Java software systems. Our evaluation shows that our algorithm has precision over 97.66% and recall over 95.63% in three open source Java projects. We also conducted a user study of 18 developers to evaluate the usefulness of our approach for eight clone-related refactoring tasks. Our study shows that our approach can significantly improve developers' performance in refactoring decisions, refactoring details, and task completion time on clone-related refactoring tasks. Automatically detecting differences across multiple clone instances also opens opportunities for building practical applications of code clones in software maintenance, such as auto-generation of application skeleton, intelligent simultaneous code editing.", "title": "" } ]
scidocsrr
f747d5351707e12c29021a9b41ca5792
Effectiveness of virtual reality-based pain control with multiple treatments.
[ { "docid": "bf6d56c2fd716802b8e2d023f86a4225", "text": "This is the first case report to demonstrate the efficacy of immersive computer-generated virtual reality (VR) and mixed reality (touching real objects which patients also saw in VR) for the treatment of spider phobia. The subject was a 37-yr-old female with severe and incapacitating fear of spiders. Twelve weekly 1-hr sessions were conducted over a 3-month period. Outcome was assessed on measures of anxiety, avoidance, and changes in behavior toward real spiders. VR graded exposure therapy was successful for reducing fear of spiders providing converging evidence for a growing literature showing the effectiveness of VR as a new medium for exposure therapy.", "title": "" } ]
[ { "docid": "750846bc27dc013bd0d392959caf3ecc", "text": "Analysis of the WinZip encryption method. Tadayoshi Kohno, May 8, 2004. Abstract: WinZip is a popular compression utility for Microsoft Windows computers, the latest version of which is advertised as having \"easy-to-use AES encryption to protect your sensitive data.\" We exhibit several attacks against WinZip's new encryption method, dubbed \"AE-2\" or \"Advanced Encryption, version two.\" We then discuss secure alternatives. Since at a high level the underlying WinZip encryption method appears secure (the core is exactly Encrypt-then-Authenticate using AES-CTR and HMAC-SHA1), and since one of our attacks was made possible because of the way that WinZip Computing, Inc. decided to fix a different security problem with its previous encryption method AE-1, our attacks further underscore the subtlety of designing cryptographically secure software.", "title": "" }, { "docid": "96d8971bf4a8d18f4471019796348e1b", "text": "Most wired active electrodes reported so far have a gain of one and require at least three wires. This leads to stiff cables, large connectors and additional noise for the amplifier. The theoretical advantages of amplifying the signal on the electrodes right from the source have often been described, however, rarely implemented. This is because a difference in the gain of the electrodes due to component tolerances strongly limits the achievable common mode rejection ratio (CMRR). In this paper, we introduce an amplifier for bioelectric events where the major part of the amplification (40 dB) is achieved on the electrodes to minimize pick-up noise. The electrodes require only two wires of which one can be used for shielding, thus enabling smaller connectors and smoother cables. Saturation of the electrodes is prevented by a dc-offset cancelation scheme with an active range of ±250 mV. This error feedback simultaneously allows to measure the low frequency components down to dc. This enables the measurement of slow varying signals, e.g., the change of alertness or the depolarization before an epileptic seizure normally not visible in a standard electroencephalogram (EEG). The amplifier stage provides the necessary supply current for the electrodes and generates the error signal for the feedback loop. The amplifier generates a pseudodifferential signal where the amplified bioelectric event is present on one lead, but the common mode signal is present on both leads. Based on the pseudodifferential signal we were able to develop a new method to compensate for a difference in the gain of the active electrodes which is purely software based. The amplifier system is then characterized and the input referred noise as well as the CMRR are measured. For the prototype circuit the CMRR evaluated to 78 dB (without the driven-right-leg circuit). The applicability of the system is further demonstrated by the recording of an ECG.", "title": "" }, { "docid": "2006a3fd87a3d7228b2a25061f7eb06b", "text": "Thailand suffers from frequent flooding during the monsoon season and droughts in summer. In some places, severe cases of both may even occur. Managing water resources effectively requires a good information system for decision-making. There is currently a lack of knowledge sharing between the organizations and researchers responsible. These are the experts in monitoring and controlling the water supply and its conditions.
The knowledge owned by these experts is not captured, classified and integrated into an information system for decision-making. Ontologies are formal knowledge representation models. Knowledge management and artificial intelligence technology is a basic requirement for developing ontology-based semantic search on the Web. In this paper, we present an ontology modeling approach that is based on the experiences of the researchers. The ontology for drought management consists of River Basin Ontology, Statistics Ontology and Task Ontology to facilitate semantic match during search. The hybrid ontology architecture can also be used for drought management", "title": "" }, { "docid": "2a987f50527c4b4501ae29493f703e32", "text": "The emergence of novel techniques for automatic anomaly detection in surveillance videos has significantly reduced the burden of manual processing of large, continuous video streams. However, existing anomaly detection systems suffer from a high false-positive rate and also, are not real-time, which makes them practically redundant. Furthermore, their predefined feature selection techniques limit their application to specific cases. To overcome these shortcomings, a dynamic anomaly detection and localization system is proposed, which uses deep learning to automatically learn relevant features. In this technique, each video is represented as a group of cubic patches for identifying local and global anomalies. A unique sparse denoising autoencoder architecture is used, that significantly reduced the computation time and the number of false positives in frame-level anomaly detection by more than 2.5%. Experimental analysis on two benchmark data sets - UMN dataset and UCSD Pedestrian dataset, show that our algorithm outperforms the state-of-the-art models in terms of false positive rate, while also showing a significant reduction in computation time.", "title": "" }, { "docid": "199d2f3d640fbb976ef27c8d129922ef", "text": "Federated learning enables resource-constrained edge compute devices, such as mobile phones and IoT devices, to learn a shared model for prediction, while keeping the training data local. This decentralized approach to train models provides privacy, security, regulatory and economic benefits. In this work, we focus on the statistical challenge of federated learning when local data is non-IID. We first show that the accuracy of federated learning reduces significantly, by up to ~55% for neural networks trained for highly skewed non-IID data, where each client device trains only on a single class of data. We further show that this accuracy reduction can be explained by the weight divergence, which can be quantified by the earth mover's distance (EMD) between the distribution over classes on each device and the population distribution. As a solution, we propose a strategy to improve training on non-IID data by creating a small subset of data which is globally shared between all the edge devices. Experiments show that accuracy can be increased by ~30% for the CIFAR-10 dataset with only 5% globally shared data.", "title": "" }, { "docid": "651d048aaae1ce1608d3d9f0f09d4b9b", "text": "We investigate here the behavior of the standard k-means clustering algorithm and several alternatives to it: the k-harmonic means algorithm due to Zhang and colleagues, fuzzy k-means, Gaussian expectation-maximization, and two new variants of k-harmonic means.
Our aim is to find which aspects of these algorithms contribute to finding good clusterings, as opposed to converging to a low-quality local optimum. We describe each algorithm in a unified framework that introduces separate cluster membership and data weight functions. We then show that the algorithms do behave very differently from each other on simple low-dimensional synthetic datasets and image segmentation tasks, and that the k-harmonic means method is superior. Having a soft membership function is essential for finding high-quality clusterings, but having a non-constant data weight function is useful also.", "title": "" }, { "docid": "90738b84c4db0a267c7213c923368e6a", "text": "Detecting overlapping communities is essential to analyzing and exploring natural networks such as social networks, biological networks, and citation networks. However, most existing approaches do not scale to the size of networks that we regularly observe in the real world. In this paper, we develop a scalable approach to community detection that discovers overlapping communities in massive real-world networks. Our approach is based on a Bayesian model of networks that allows nodes to participate in multiple communities, and a corresponding algorithm that naturally interleaves subsampling from the network and updating an estimate of its communities. We demonstrate how we can discover the hidden community structure of several real-world networks, including 3.7 million US patents, 575,000 physics articles from the arXiv preprint server, and 875,000 connected Web pages from the Internet. Furthermore, we demonstrate on large simulated networks that our algorithm accurately discovers the true community structure. This paper opens the door to using sophisticated statistical models to analyze massive networks.", "title": "" }, { "docid": "d568194d6b856243056c072c96c76115", "text": "OBJECTIVE\nTo develop an evidence-based guideline to help clinicians make decisions about when and how to safely taper and stop antipsychotics; to focus on the highest level of evidence available and seek input from primary care professionals in the guideline development, review, and endorsement processes.\n\n\nMETHODS\nThe overall team comprised 9 clinicians (1 family physician, 1 family physician specializing in long-term care, 1 geriatric psychiatrist, 2 geriatricians, 4 pharmacists) and a methodologist; members disclosed conflicts of interest. For guideline development, a systematic process was used, including the GRADE (Grading of Recommendations Assessment, Development and Evaluation) approach. Evidence was generated from a Cochrane systematic review of antipsychotic deprescribing trials for the behavioural and psychological symptoms of dementia, and a systematic review was conducted to assess the evidence behind the benefits of using antipsychotics for insomnia. A review of reviews of the harms of continued antipsychotic use was performed, as well as narrative syntheses of patient preferences and resource implications. This evidence and GRADE quality-of-evidence ratings were used to generate recommendations. The team refined guideline content and recommendation wording through consensus and synthesized clinical considerations to address common front-line clinician questions. 
The draft guideline was distributed to clinicians and stakeholders for review and revisions were made at each stage.\n\n\nRECOMMENDATIONS\nWe recommend deprescribing antipsychotics for adults with behavioural and psychological symptoms of dementia treated for at least 3 months (symptoms stabilized or no response to an adequate trial) and for adults with primary insomnia treated for any duration or secondary insomnia in which underlying comorbidities are managed. A decision-support algorithm was developed to accompany the guideline.\n\n\nCONCLUSION\nAntipsychotics are associated with harms and can be safely tapered. Patients and caregivers might be more amenable to deprescribing if they understand the rationale (potential for harm), are involved in developing the tapering plan, and are offered behavioural advice or management. This guideline provides recommendations for making decisions about when and how to reduce the dose of or stop antipsychotics. Recommendations are meant to assist with, not dictate, decision making in conjunction with patients and families.", "title": "" }, { "docid": "c3a8bbd853667155eee4cfb74692bd0f", "text": "The contemporary approach to database system architecture requires the complete integration of data into a single, centralized database; while multiple logical databases can be supported by current database management software, techniques for relating these databases are strictly ad hoc. This problem is aggravated by the trend toward networks of small to medium size computer systems, as opposed to large, stand-alone main-frames. Moreover, while current research on distributed databases aims to provide techniques that support the physical distribution of data items in a computer network environment, current approaches require a distributed database to be logically centralized.", "title": "" }, { "docid": "dde5eb29c02f95cbf47bb9a3895d7fd8", "text": "Text password is the most popular form of user authentication on websites due to its convenience and simplicity. However, users' passwords are prone to be stolen and compromised under different threats and vulnerabilities. Firstly, users often select weak passwords and reuse the same passwords across different websites. Routinely reusing passwords causes a domino effect; when an adversary compromises one password, she will exploit it to gain access to more websites. Second, typing passwords into untrusted computers suffers password thief threat. An adversary can launch several password stealing attacks to snatch passwords, such as phishing, keyloggers and malware. In this paper, we design a user authentication protocol named oPass which leverages a user's cellphone and short message service to thwart password stealing and password reuse attacks. oPass only requires each participating website possesses a unique phone number, and involves a telecommunication service provider in registration and recovery phases. Through oPass, users only need to remember a long-term password for login on all websites. After evaluating the oPass prototype, we believe oPass is efficient and affordable compared with the conventional web authentication mechanisms.", "title": "" }, { "docid": "ce839ea9b5cc8de275b634c920f45329", "text": "As a matter of fact, most natural structures are complex topology structures with intricate holes or irregular surface morphology. These structures can be used as lightweight infill, porous scaffold, energy absorber or micro-reactor. 
With the rapid advancement of 3D printing, the complex topology structures can now be efficiently and accurately fabricated by stacking layered materials. The novel manufacturing technology and application background put forward new demands and challenges to the current design methodologies of complex topology structures. In this paper, a brief review on the development of recent complex topology structure design methods was provided; meanwhile, the limitations of existing methods and future work are also discussed in the end.", "title": "" }, { "docid": "97c40f796f104587a465f5d719653181", "text": "Although some theory suggests that it is impossible to increase one’s subjective well-being (SWB), our ‘sustainable happiness model’ (Lyubomirsky, Sheldon, & Schkade, 2005) specifies conditions under which this may be accomplished. To illustrate the three classes of predictor in the model, we first review research on the demographic/circumstantial, temperament/personality, and intentional/experiential correlates of SWB. We then introduce the sustainable happiness model, which suggests that changing one’s goals and activities in life is the best route to sustainable new SWB. However, the goals and activities must be of certain positive types, must fit one’s personality and needs, must be practiced diligently and successfully, must be varied in their timing and enactment, and must provide a continued stream of fresh positive experiences. Research supporting the model is reviewed, including new research suggesting that happiness intervention effects are not just placebo effects. Everyone wants to be happy. Indeed, happiness may be the ultimate fundamental ‘goal’ that people pursue in their lives (Diener, 2000), a pursuit enshrined as an inalienable right in the US Declaration of Independence. The question of what produces happiness and wellbeing is the subject of a great deal of contemporary research, much of it falling under the rubric of ‘positive psychology’, an emerging field that also considers issues such as what makes for optimal relationships, optimal group functioning, and optimal communities. In this article, we first review some prominent definitions, theories, and research findings in the well-being literature. We then focus in particular on the question of whether it is possible to become lastingly happier in one’s life, drawing from our recent model of sustainable happiness. Finally, we discuss some recent experimental data suggesting that it is indeed possible to boost one’s happiness level, and to sustain that newfound level. A number of possible definitions of happiness exist. Let us start with the three proposed by Ed Diener in his landmark Psychological Bulletin (1984) article. The first is ‘leading a virtuous life’, in which the person adheres to society’s vision of morality and proper conduct. This definition makes no reference to the person’s feelings or emotions, instead apparently making the implicit assumption that reasonably positive feelings will ensue if the person toes the line. A second definition of happiness involves a cognitive evaluation of life as a whole. Are you content, overall, or would you do things differently given the opportunity?
This reflects a personcentered view of happiness, and necessarily taps peoples’ subjective judgments of whether they are satisfied with their lives. A third definition refers to typical moods. Are you typically in a positive mood (i.e., inspired, pleased, excited) or a negative mood (i.e., anxious, upset, depressed)? In this person-centered view, it is the balance of positive to negative mood that matters (Bradburn, 1969). Although many other conceptions of well-being exist (Lyubomirsky & Lepper, 1999; Ryan & Frederick, 1997; Ryff & Singer, 1996), ratings of life satisfaction and judgments of the frequency of positive and negative affect have received the majority of the research attention, illustrating the dominance of the second and third (person-centered) definitions of happiness in the research literature. Notably, positive affect, negative affect, and life satisfaction are presumed to be somewhat distinct. Thus, although life satisfaction typically correlates positively with positive affect and negatively with negative affect, and positive affect typically correlates negatively with negative affect, these correlations are not necessarily strong (and they also vary depending on whether one assesses a particular time or context, or the person’s experience as a whole). The generally modest correlations among the three variables means that an individual high in one indicator is not necessarily high (or low) in any other indicator. For example, a person with many positive moods might also experience many negative moods, and a person with predominantly good moods may or may not be satisfied with his or her life. As a case in point, a college student who has many friends and rewarding social interactions may be experiencing frequent pleasant affect, but, if he doubts that college is the right choice for him, he will be discontent with life. In contrast, a person experiencing many negative moods might nevertheless be satisfied with her life, if she finds her life meaningful or is suffering for a good cause. For example, a frazzled new mother may feel that all her most cherished life goals are being realized, yet she is experiencing a great deal of negative emotions on a daily basis. Still, the three quantities typically go together to an extent such that a comprehensive and reliable subjective well-being (SWB) indicator can be computed by summing positive affect and life satisfaction and subtracting negative affect. Can we trust people’s self-reports of happiness (or unhappiness)? Actually, we must: It would make little sense to claim that a person is happy if he or she does not acknowledge being happy. Still, it is possible to corroborate self-reports of well-being with reports from the respondents’ friends and", "title": "" }, { "docid": "a960ced0cd3859c037c43790a6b8436b", "text": "Ferroresonance is a widely studied phenomenon but it is still not well understood because of its complex behavior. It is “fuzzy-resonance.” A simple graphical approach using fundamental frequency phasors has been presented to elevate the readers understanding. Its occurrence and how it appears is extremely sensitive to the transformer characteristics, system parameters, transient voltages and initial conditions. More efficient transformer core material has lead to its increased occurrence and it has considerable effects on system apparatus and protection. 
Power system engineers should strive to recognize potential ferroresonant configurations and design solutions to prevent its occurrence.", "title": "" }, { "docid": "9db388f2564a24f58d8ea185e5b514be", "text": "Analyzing large volumes of log events without some kind of classification is not feasible nowadays due to the large number of events. Using AI to classify events makes these log events usable again. With the use of the Keras Deep Learning API, which supports many Stochastic Gradient Descent optimization algorithms, better known as optimizers, this research project tried these algorithms in a Long Short-Term Memory (LSTM) network, which is a variant of the Recurrent Neural Networks. These algorithms have been applied to classify and update event data stored in Elastic-Search. The LSTM network consists of five layers where the output layer is a Dense layer using the Softmax function for evaluating the AI model and making the predictions. The Categorical Cross-Entropy is the algorithm used to calculate the loss. For the same AI model, different optimizers have been used to measure the accuracy and the loss. Adam was found to be the best choice with an accuracy of 29.8%.", "title": "" }, { "docid": "b0bb9c4bcf666dca927d4f747bfb1ca1", "text": "Remote monitoring of animal behaviour in the environment can assist in managing both the animal and its environmental impact. GPS collars which record animal locations with high temporal frequency allow researchers to monitor both animal behaviour and interactions with the environment. These ground-based sensors can be combined with remotely-sensed satellite images to understand animal-landscape interactions. The key to combining these technologies is communication methods such as wireless sensor networks (WSNs). We explore this concept using a case-study from an extensive cattle enterprise in northern Australia and demonstrate the potential for combining GPS collars and satellite images in a WSN to monitor behavioural preferences and social behaviour of cattle.", "title": "" }, { "docid": "3601a56b6c68864da31ac5aaa67bff1a", "text": "Information asymmetry exists amongst stakeholders in the current food supply chain. Lack of standardization in data format, lack of regulations, and siloed, legacy information systems exacerbate the problem. Global agriculture trade is increasing, creating a greater need for traceability in the global supply chain. This paper introduces Harvest Network, a theoretical end-to-end, vis-à-vis “farm-to-fork”, food traceability application integrating the Ethereum blockchain and IoT devices exchanging GS1 message standards. The goal is to create a distributed ledger accessible for all stakeholders in the supply chain. Our design effort creates a basic framework (artefact) for building a prototype or simulation using existing technologies and protocols [1]. The next step is for industry practitioners and researchers to apply AGILE methods for creating working prototypes and advanced projects that bring about greater transparency.", "title": "" }, { "docid": "f92087a8e81c45cd8bedc12fddd682fc", "text": "This paper presents a novel power conversion method of realizing the galvanic isolation by dual safety capacitors (Y-cap) instead of a conventional transformer. With limited capacitance of the Y capacitor, series resonance is proposed to achieve the power transfer. The basic concept is to control the power path impedance, which blocks the dominant low-frequency part of touch current and lets the high-frequency power flow freely.
Conceptual analysis, simulation and design considerations are mentioned in this paper. An 85W AC/AC prototype is designed and verified to substitute the isolation transformer of a CCFL LCD TV backlight system. Compared with the conventional transformer isolation, the new method is proved to meet the function and safety requirements of its specification while has higher efficiency and smaller size.", "title": "" }, { "docid": "5fde7006ec6f7cf4f945b234157e5791", "text": "In this work, we investigate the value of uncertainty modelling in 3D super-resolution with convolutional neural networks (CNNs). Deep learning has shown success in a plethora of medical image transformation problems, such as super-resolution (SR) and image synthesis. However, the highly ill-posed nature of such problems results in inevitable ambiguity in the learning of networks. We propose to account for intrinsic uncertainty through a per-patch heteroscedastic noise model and for parameter uncertainty through approximate Bayesian inference in the form of variational dropout. We show that the combined benefits of both lead to the state-of-the-art performance SR of diffusion MR brain images in terms of errors compared to ground truth. We further show that the reduced error scores produce tangible benefits in downstream tractography. In addition, the probabilistic nature of the methods naturally confers a mechanism to quantify uncertainty over the super-resolved output. We demonstrate through experiments on both healthy and pathological brains the potential utility of such an uncertainty measure in the risk assessment of the super-resolved images for subsequent clinical use.", "title": "" }, { "docid": "2070b05100a92e883252c80666c3dde8", "text": "Visiting museums and exhibitions represented in multi-user 3D environments can be an efficient way of learning about the exhibits in an interactive manner and socialising with other visitors. The rich educational information presented in the virtual environment and the presence of remote users could also be beneficial for the visitors of the physical exhibition space. In this paper we present the design and implementation of a virtual exhibition that allowed local and remote visitors coexist in the environment, access the interactive content and communicate with each other. The virtual exhibition was accessible to the remote users from the Web and to local visitors through an installation in the physical space. The installation projected the virtual world in the exhibition environment and let users interact with it using a handheld gesture-based device. We performed an evaluation of the 3D environment with the participation of both local and remote visitors. The evaluation results indicate that the virtual world was considered exciting and easy to use by the majority of the participants. Furthermore, according to the evaluation results, virtual museums and exhibitions seem to have significant advantages for remote visitors compared to typical museum web sites, and they can also be an important aid to local visitors and enhance their experience.", "title": "" }, { "docid": "5b6d68984b4f9a6e0f94e0a68768dc8c", "text": "In this paper, we focus on a major internet problem which is a huge amount of uncategorized text. We review existing techniques used for feature selection and categorization. After reviewing the existing literature, it was found that there exist some gaps in existing algorithms, one of which is a requirement of the labeled dataset for the training of the classifier. 
Keywords— Bayesian; KNN; PCA; SVM; TF-IDF", "title": "" } ]
scidocsrr
3607293589205489da619f7cc6a8cc23
Deep Convolutional Neural Networks for Anomaly Event Classification on Distributed Systems
[ { "docid": "dbb9db490ae3c1bb91d22ecd8d679270", "text": "The growing computational and storage needs of several scientific applications mandate the deployment of extreme-scale parallel machines, such as IBM's BlueGene/L, which can accommodate as many as 128K processors. In this paper, we present our experiences in collecting and filtering error event logs from a 8192 processor BlueGene/L prototype at IBM Rochester, which is currently ranked #8 in the Top-500 list. We analyze the logs collected from this machine over a period of 84 days starting from August 26, 2004. We perform a three-step filtering algorithm on these logs: extracting and categorizing failure events; temporal filtering to remove duplicate reports from the same location; and finally coalescing failure reports of the same error across different locations. Using this approach, we can substantially compress these logs, removing over 99.96% of the 828,387 original entries, and more accurately portray the failure occurrences on this system.", "title": "" }, { "docid": "4dc9360837b5793a7c322f5b549fdeb1", "text": "Today, event logs contain vast amounts of data that can easily overwhelm a human. Therefore, mining patterns from event logs is an important system management task. This paper presents a novel clustering algorithm for log file data sets which helps one to detect frequent patterns from log files, to build log file profiles, and to identify anomalous log file lines. Keywords—system monitoring, data mining, data clustering", "title": "" } ]
[ { "docid": "86c0b7d49d0cecc3a2554b85ec08f3ed", "text": "Advanced driver assistance systems and the environment perception for autonomous vehicles will benefit from systems robustly tracking objects while simultaneously estimating their shape. Unlike many recent approaches that represent object shapes by approximated models such as boxes or ellipses, this paper proposes an algorithm that estimates a free-formed shape derived from raw laser measurements. For that purpose local occupancy grid maps are used to model arbitrary object shapes. Beside shape estimation the algorithm keeps a stable reference point on the object. This will be important to avoid apparent motion if the observable part of an object contour changes. The algorithm is part of a perception system and is tested with two 4-layer laser scanners.", "title": "" }, { "docid": "807dedfe0c5d71ac87bb7fed194c47be", "text": "DRAM memory is a major contributor for the total power consumption in modern computing systems. Consequently, power reduction for DRAM memory is critical to improve system-level power efficiency. Fine-grained DRAM architecture [1, 2] has been proposed to reduce the activation/ precharge power. However, those prior work either incurs significant performance degradation or introduces large area overhead. In this paper, we propose a novel memory architecture Half-DRAM, in which the DRAM array is reorganized to enable only half of a row being activated. The half-row activation can effectively reduce activation power and meanwhile sustain the full bandwidth one bank can provide. In addition, the half-row activation in Half-DRAM relaxes the power constraint in DRAM, and opens up opportunities for further performance gain. Furthermore, two half-row accesses can be issued in parallel by integrating the sub-array level parallelism to improve the memory level parallelism. The experimental results show that Half-DRAM can achieve both significant performance improvement and power reduction, with negligible design overhead", "title": "" }, { "docid": "ec87def0b881822e6a3df6c523c0eec5", "text": "OH-PBDEs have been reported to be more potent than the postulated precursor PBDEs or corresponding MeO-PBDEs. However, there are contradictory reports for transformation of these compounds in organisms, particularly, for biotransformation of OH-PBDEs and MeO-PBDEs, only one study reported transformation of 6-OH-BDE-47 and 6-MeO-BDE-47 in Japanese medaka. In present study zebrafish (Danio rerio) were exposed to BDE-47, 6-OH-BDE-47, 6-MeO-BDE-47, 2'-OH-BDE-28 and 2'-MeO-BDE-28 in the diet for 20 d. Concentrations of each exposed compound were measured in eggs collected on days 0, 5, 10, 15 or 20. After 20 d exposure, concentrations of precursor and biotransformation products in liver and liver-free residual carcass were measured by use of GC-MS/MS. Total mass of the five compounds in bodies of adults were: 2'-MeO-BDE-28 ∼ 6-MeO-BDE-47>BDE-47>2'-OH-BDE-28>6-OH-BDE-47. MeO-PBDEs were also accumulated more into parental fish body than in liver, while OH-PBDEs accumulated in liver more than in liver-free residual carcass. Concentrations in liver of males were greater than those of females. This result suggests sex-related differences in accumulation. Ratios between concentration in eggs and liver (E/L) were: 2.9, 1.7, 0.8, 0.4 and 0.1 for 6-MeO-BDE-47, BDE-47, 6-OH-BDE-47, 2'-MeO-BDE-28 and 2'-OH-BDE-28, respectively. This result suggests transfer from adult females to eggs. BDE-47 was not transformed into OH-PBDEs or MeO-PBDEs. 
Inter-conversions of 6-OH-BDE-47 and 6-MeO-BDE-47, 2'-OH-BDE-28 and 2'-MeO-BDE-28 were observed, with metabolite/precursor concentration ratios for 6-OH-BDE-47, 6-MeO-BDE-47, 2'-OH-BDE-28 and 2'-MeO-BDE-28 being 3.8%, 14.6%, 2.9% and 76.0%, respectively. Congener-specific differences were observed in distributions between liver and carcass, maternal transfer and transformation. The two MeO-PBDEs were accumulated into adults, transferred to eggs, and were transformed to the structural similar OH-PBDEs, which might be more toxic. BDE-47 was accumulated into adults and transferred from females to eggs, but not transformed to MeO-PBDEs and/or OH-PBDEs. Accumulation of OH-PBDEs into adults as well as rates of transformation of OH-PBDEs to MeO-PBDEs were all several orders of magnitude less. Thus, MeO-PBDEs are likely to present more of a risk in the environment.", "title": "" }, { "docid": "2c39f8c440a89f72db8814e633cb5c04", "text": "There is increasing evidence that gardening provides substantial human health benefits. However, no formal statistical assessment has been conducted to test this assertion. Here, we present the results of a meta-analysis of research examining the effects of gardening, including horticultural therapy, on health. We performed a literature search to collect studies that compared health outcomes in control (before participating in gardening or non-gardeners) and treatment groups (after participating in gardening or gardeners) in January 2016. The mean difference in health outcomes between the two groups was calculated for each study, and then the weighted effect size determined both across all and sets of subgroup studies. Twenty-two case studies (published after 2001) were included in the meta-analysis, which comprised 76 comparisons between control and treatment groups. Most studies came from the United States, followed by Europe, Asia, and the Middle East. Studies reported a wide range of health outcomes, such as reductions in depression, anxiety, and body mass index, as well as increases in life satisfaction, quality of life, and sense of community. Meta-analytic estimates showed a significant positive effect of gardening on the health outcomes both for all and sets of subgroup studies, whilst effect sizes differed among eight subgroups. Although Egger's test indicated the presence of publication bias, significant positive effects of gardening remained after adjusting for this using trim and fill analysis. This study has provided robust evidence for the positive effects of gardening on health. A regular dose of gardening can improve public health.", "title": "" }, { "docid": "094f1e41fde1392cbdc3e1956cf2fc53", "text": "This paper investigates the characteristics of the active and reactive power sharing in a parallel inverters system under different system impedance conditions. The analyses conclude that the conventional droop method cannot achieve efficient power sharing for the case of a system with complex impedance condition. To achieve the proper power balance and minimize the circulating current in the different impedance situations, a novel droop controller that considers the impact of complex impedance is proposed in this paper. This controller can simplify the coupled active and reactive power relationships, which are caused by the complex impedance in the parallel system. In addition, a virtual complex impedance loop is included in the proposed controller to minimize the fundamental and harmonic circulating current that flows in the parallel system. 
Compared to the other methods, the proposed controller can achieve accurate power sharing, offers efficient dynamic performance, and is more adaptive to different line impedance situations. Simulation and experimental results are presented to prove the validity and the improvements achieved by the proposed controller.", "title": "" }, { "docid": "05eaf278ed39cd6a8522f812589388c6", "text": "Several recent software systems have been designed to obtain novel annotation of cross-referencing text fragments and Wikipedia pages. Tagme is state of the art in this setting and can accurately manage short textual fragments (such as snippets of search engine results, tweets, news, or blogs) on the fly.", "title": "" }, { "docid": "2f8361f2943ff90bf98c6b8a207086c4", "text": "Real-life bugs are successful because of their unfailing ability to adapt. In particular this applies to their ability to adapt to strategies that are meant to eradicate them as a species. Software bugs have some of these same traits. We will discuss these traits, and consider what we can do about them.", "title": "" }, { "docid": "c05b6720cdfdf6170ccce6486d485dc0", "text": "The naturalness of warps is gaining extensive attention in image stitching. Recent warps, such as SPHP and AANAP, use global similarity warps to mitigate projective distortion (which enlarges regions); however, they necessarily bring in perspective distortion (which generates inconsistencies). In this paper, we propose a novel quasi-homography warp, which effectively balances the perspective distortion against the projective distortion in the non-overlapping region to create a more natural-looking panorama. Our approach formulates the warp as the solution of a bivariate system, where perspective distortion and projective distortion are characterized as slope preservation and scale linearization, respectively. Because our proposed warp only relies on a global homography, it is thus totally parameter free. A comprehensive experiment shows that a quasi-homography warp outperforms some state-of-the-art warps in urban scenes, including homography, AutoStitch and SPHP. A user study demonstrates that it wins most users’ favor, compared to homography and SPHP.", "title": "" }, { "docid": "77326d21f3bfdbf0d6c38c2cde871bf5", "text": "There have been a number of linear, feature-based models proposed by the information retrieval community recently. Although each model is presented differently, they all share a common underlying framework. In this paper, we explore and discuss the theoretical issues of this framework, including a novel look at the parameter space. We then detail supervised training algorithms that directly maximize the evaluation metric under consideration, such as mean average precision. We present results that show training models in this way can lead to significantly better test set performance compared to other training methods that do not directly maximize the metric. Finally, we show that linear feature-based models can consistently and significantly outperform current state of the art retrieval models with the correct choice of features.", "title": "" }, { "docid": "aee250663a05106c4c0fad9d0f72828c", "text": "Robust and accurate visual tracking is one of the most challenging computer vision problems. Due to the inherent lack of training data, a robust approach for constructing a target appearance model is crucial. Recently, discriminatively learned correlation filters (DCF) have been successfully applied to address this problem for tracking. 
These methods utilize a periodic assumption of the training samples to efficiently learn a classifier on all patches in the target neighborhood. However, the periodic assumption also introduces unwanted boundary effects, which severely degrade the quality of the tracking model. We propose Spatially Regularized Discriminative Correlation Filters (SRDCF) for tracking. A spatial regularization component is introduced in the learning to penalize correlation filter coefficients depending on their spatial location. Our SRDCF formulation allows the correlation filters to be learned on a significantly larger set of negative training samples, without corrupting the positive samples. We further propose an optimization strategy, based on the iterative Gauss-Seidel method, for efficient online learning of our SRDCF. Experiments are performed on four benchmark datasets: OTB-2013, ALOV++, OTB-2015, and VOT2014. Our approach achieves state-of-the-art results on all four datasets. On OTB-2013 and OTB-2015, we obtain an absolute gain of 8.0% and 8.2% respectively, in mean overlap precision, compared to the best existing trackers.", "title": "" }, { "docid": "8fe823702191b4a56defaceee7d19db6", "text": "We propose a method of stacking multiple long short-term memory (LSTM) layers for modeling sentences. In contrast to the conventional stacked LSTMs where only hidden states are fed as input to the next layer, our architecture accepts both hidden and memory cell states of the preceding layer and fuses information from the left and the lower context using the soft gating mechanism of LSTMs. Thus the proposed stacked LSTM architecture modulates the amount of information to be delivered not only in horizontal recurrence but also in vertical connections, from which useful features extracted from lower layers are effectively conveyed to upper layers. We dub this architecture Cell-aware Stacked LSTM (CAS-LSTM) and show from experiments that our models achieve state-of-the-art results on benchmark datasets for natural language inference, paraphrase detection, and sentiment classification.", "title": "" }, { "docid": "e11c486975fb5c277f39f131de87f399", "text": "OBJECTIVES\nThere is a clinical impression of dissatisfaction with treatment for hypothyroidism among some patients. Psychometric properties of the new ThyTSQ questionnaire are evaluated. The questionnaire, measuring patients' satisfaction with their treatment for hypothyroidism, has two parts: the seven-item ThyTSQ-Present and four-item ThyTSQ-Past, measuring satisfaction with present and past treatment, respectively, on scales from 6 (very satisfied) to 0 (very dissatisfied).\n\n\nMETHODS\nThe questionnaire was completed once by 103 adults with hypothyroidism, age (mean [SD]) 55.2 [14.4], range 23-84 years (all treated with thyroxine).\n\n\nRESULTS\nCompletion rates were very high. Internal consistency reliability was excellent for both ThyTSQ-Present and ThyTSQ-Past (Cronbach's alpha = 0.91 and 0.90, respectively [N = 102 and 103]). Principal components analyses indicated that the seven items of the ThyTSQ-Present and the four items of the ThyTSQ-Past could be summed into separate Present Satisfaction and Past Satisfaction total scores. Mean Present Satisfaction was 32.5 (7.8), maximum range 0-42, and mean Past Satisfaction was 17.5 (6.1), maximum range 0-24, indicating considerable room for improvement. 
Patients were least satisfied with their present understanding of their condition, mean 4.2 (1.7) (maximum range 0-6), and with information provided about hypothyroidism around the time of diagnosis, mean 3.9 (1.8) (maximum range 0-6).\n\n\nCONCLUSIONS\nThe ThyTSQ is highly acceptable to patients with hypothyroidism (excellent completion rates), and has established internal consistency reliability. It will assist health professionals in considering psychological outcomes when treating people with hypothyroidism, and is suitable for clinical trials and routine clinical monitoring.", "title": "" }, { "docid": "0f325e4fe9faf6c43a68ea2721b85f58", "text": "Prosopis juliflora is characterized by distinct and profuse growth even in nutritionally poor soil and environmentally stressed conditions and is believed to harbor some novel heavy metal-resistant bacteria in the rhizosphere and endosphere. This study was performed to isolate and characterize Cr-resistant bacteria from the rhizosphere and endosphere of P. juliflora growing on the tannery effluent contaminated soil. A total of 5 and 21 bacterial strains were isolated from the rhizosphere and endosphere, respectively, and were shown to tolerate Cr up to 3000 mg l(-1). These isolates also exhibited tolerance to other toxic heavy metals such as, Cd, Cu, Pb, and Zn, and high concentration (174 g l(-1)) of NaCl. Moreover, most of the isolated bacterial strains showed one or more plant growth-promoting activities. The phylogenetic analysis of the 16S rRNA gene showed that the predominant species included Bacillus, Staphylococcus and Aerococcus. As far as we know, this is the first report analyzing rhizo- and endophytic bacterial communities associated with P. juliflora growing on the tannery effluent contaminated soil. The inoculation of three isolates to ryegrass (Lolium multiflorum L.) improved plant growth and heavy metal removal from the tannery effluent contaminated soil suggesting that these bacteria could enhance the establishment of the plant in contaminated soil and also improve the efficiency of phytoremediation of heavy metal-degraded soils.", "title": "" }, { "docid": "107960c3c2e714804133f5918ac03b74", "text": "This paper reports on a data-driven motion planning approach for interaction-aware, socially-compliant robot navigation among human agents. Autonomous mobile robots navigating in workspaces shared with human agents require motion planning techniques providing seamless integration and smooth navigation in such. Smooth integration in mixed scenarios calls for two abilities of the robot: predicting actions of others and acting predictably for them. The former requirement requests trainable models of agent behaviors in order to accurately forecast their actions in the future, taking into account their reaction on the robot's decisions. A human-like navigation style of the robot facilitates other agents-most likely not aware of the underlying planning technique applied-to predict the robot motion vice versa, resulting in smoother joint navigation. The approach presented in this paper is based on a feature-based maximum entropy model and is able to guide a robot in an unstructured, real-world environment. The model is trained to predict joint behavior of heterogeneous groups of agents from onboard data of a mobile platform. We evaluate the benefit of interaction-aware motion planning in a realistic public setting with a total distance traveled of over 4 km. 
Interestingly the motion models learned from human-human interaction did not hold for robot-human interaction, due to the high attention and interest of pedestrians in testing basic braking functionality of the robot.", "title": "" }, { "docid": "956771bbfb0610a28090de1678c23774", "text": "Finding data governance practices that maintain a balance between value creation and risk exposure is the new organizational imperative for unlocking competitive advantage and maximizing value from the application of big data. The first Web extra at http://youtu.be/B2RlkoNjrzA is a video in which author Paul Tallon expands on his article \"Corporate Governance of Big Data: Perspectives on Value, Risk, and Cost\" and discusses how finding data governance practices that maintain a balance between value creation and risk exposure is the new organizational imperative for unlocking competitive advantage and maximizing value from the application of big data. The second Web extra at http://youtu.be/g0RFa4swaf4 is a video in which author Paul Tallon discusses the supplementary material to his article \"Corporate Governance of Big Data: Perspectives on Value, Risk, and Cost\" and how projection models can help individuals responsible for data handling plan for and understand big data storage issues.", "title": "" }, { "docid": "3c44f2bf1c8a835fb7b86284c0b597cd", "text": "This paper explores some of the key electromagnetic design aspects of a synchronous reluctance motor that is equipped with single-tooth windings (i.e., fractional slot concentrated windings). The analyzed machine, a 6-slot 4-pole motor, utilizes a segmented stator core structure for ease of coil winding, pre-assembly, and facilitation of high slot fill factors (~60%). The impact on the motors torque producing capability and its power factor of these inter-segment air gaps between the stator segments is investigated through 2-D finite element analysis (FEA) studies where it is shown that they have a low impact. From previous studies, torque ripple is a known issue with this particular slot–pole combination of synchronous reluctance motor, and the use of two different commercially available semi-magnetic slot wedges is investigated as a method to improve torque quality. An analytical analysis of continuous rotor skewing is also investigated as an attempt to reduce the torque ripple. Finally, it is shown that through a combination of 2-D and 3-D FEA studies in conjunction with experimentally derived results on a prototype machine that axial fringing effects cannot be ignored when predicting the q-axis reactance in such machines. A comparison of measured orthogonal axis flux linkages/reactances with 3-D FEA studies is presented for the first time.", "title": "" }, { "docid": "fba109e4627d4bb580d07368e3c00cc1", "text": "-Wheeled-tracked vehicles are undoubtedly the most popular means of transportation. However, these vehicles are mainly suitable for relatively flat terrain. Legged vehicles, on the other hand, have the potential to handle wide variety of terrain. Robug IIs is a legged climbing robot designed to work in relatively unstructured and rough terrain. It has the capability of walking, climbing vertical surfaces and performing autonomous floor to wall transfer. The sensing technique used in Robug IIs is mainly tactile and ultrasonic sensing. A set of reflexive rules have been developed for the robot to react to the uncertainty of the working environment. The robot also has the intelligence to seek and verify its own foot-holds. 
It is envisaged that the main application of robot is for remote inspection and maintenance in hazardous environments. Keywords—Legged robot, climbing service robot, insect inspired robot, pneumatic control, fuzzy logic.", "title": "" }, { "docid": "1b625a1136bec100f459a39b9b980575", "text": "This paper considers the sparse eigenvalue problem, which is to extract dominant (largest) sparse eigenvectors with at most k non-zero components. We propose a simple yet effective solution called truncated power method that can approximately solve the underlying nonconvex optimization problem. A strong sparse recovery result is proved for the truncated power method, and this theory is our key motivation for developing the new algorithm. The proposed method is tested on applications such as sparse principal component analysis and the densest k-subgraph problem. Extensive experiments on several synthetic and real-world data sets demonstrate the competitive empirical performance of our method.", "title": "" }, { "docid": "213149a116dabdd43c51707b07bc06b4", "text": "This work introduces the Green Vehicle Routing Problem (GVRP). The GVRP is an extension of the well-known vehicle routing problem (VRP). Moreover, the GVRP includes an objective function that minimizes weighted distance. Minimizing weighted distance reduces fuel consumption and consequently CO2 emissions. Therefore, the GVRP is more environmentally friendly than traditional versions of the VRP. This work presents a Mixed Integer Linear Program formulation for the problem and a Local Search algorithm to find local optima. Also, the problem is illustrated using a small problem instance.", "title": "" }, { "docid": "52e1c2f6df368e9bed3f5532e14e75b6", "text": "Fast visual recognition in the mammalian cortex seems to be a hierarchical process by which the representation of the visual world is transformed in multiple stages from low-level retinotopic features to high-level, global and invariant features, and to object categories. Every single step in this hierarchy seems to be subject to learning. How does the visual cortex learn such hierarchical representations by just looking at the world? How could computers learn such representations from data? Computer vision models that are weakly inspired by the visual cortex will be described. A number of unsupervised learning algorithms to train these models will be presented, which are based on the sparse auto-encoder concept. The effectiveness of these algorithms for learning invariant feature hierarchies will be demonstrated with a number of practical tasks such as scene parsing, pedestrian detection, and object classification.", "title": "" } ]
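One of the passages above describes the truncated power method for extracting dominant sparse eigenvectors with at most k non-zero components. The NumPy sketch below illustrates the basic iteration (multiply, keep the k largest-magnitude entries, renormalize); the parameter names and test matrix are mine, not taken from the paper.

```python
# Illustrative sketch of a truncated power iteration for the sparse eigenvalue problem.
import numpy as np

def truncated_power_method(A, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = A @ x
        # Keep only the k entries with largest magnitude, zero out the rest.
        keep = np.argsort(np.abs(y))[-k:]
        x = np.zeros(n)
        x[keep] = y[keep]
        x /= np.linalg.norm(x)
    return x  # approximate dominant eigenvector with at most k non-zeros

A = np.cov(np.random.default_rng(1).standard_normal((200, 10)), rowvar=False)
v = truncated_power_method(A, k=3)
print(np.nonzero(v)[0], v @ A @ v)
```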
scidocsrr
d666574dab00a7f6a9d30717ee302bd3
Partial Least Squares (PLS) methods for neuroimaging: A tutorial and review
[ { "docid": "8f4a0c6252586fa01133f9f9f257ec87", "text": "The pls package implements principal component regression (PCR) and partial least squares regression (PLSR) in R (R Development Core Team 2006b), and is freely available from the Comprehensive R Archive Network (CRAN), licensed under the GNU General Public License (GPL). The user interface is modelled after the traditional formula interface, as exemplified by lm. This was done so that people used to R would not have to learn yet another interface, and also because we believe the formula interface is a good way of working interactively with models. It thus has methods for generic functions like predict, update and coef. It also has more specialised functions like scores, loadings and RMSEP, and a flexible crossvalidation system. Visual inspection and assessment is important in chemometrics, and the pls package has a number of plot functions for plotting scores, loadings, predictions, coefficients and RMSEP estimates. The package implements PCR and several algorithms for PLSR. The design is modular, so that it should be easy to use the underlying algorithms in other functions. It is our hope that the package will serve well both for interactive data analysis and as a building block for other functions or packages using PLSR or PCR. We will here describe the package and how it is used for data analysis, as well as how it can be used as a part of other packages. Also included is a section about formulas and data frames, for people not used to the R modelling idioms.", "title": "" } ]
[ { "docid": "ff56bae298b25accf6cd8c2710160bad", "text": "An important difference between traditional AI systems and human intelligence is the human ability to harness commonsense knowledge gleaned from a lifetime of learning and experience to make informed decisions. This allows humans to adapt easily to novel situations where AI fails catastrophically due to a lack of situation-specific rules and generalization capabilities. Commonsense knowledge also provides background information that enables humans to successfully operate in social situations where such knowledge is typically assumed. Since commonsense consists of information that humans take for granted, gathering it is an extremely difficult task. Previous versions of SenticNet were focused on collecting this kind of knowledge for sentiment analysis but they were heavily limited by their inability to generalize. SenticNet 4 overcomes such limitations by leveraging on conceptual primitives automatically generated by means of hierarchical clustering and dimensionality reduction.", "title": "" }, { "docid": "cd71e990546785bd9ba0c89620beb8d2", "text": "Crime is one of the most predominant and alarming aspects in our society and its prevention is a vital task. Crime analysis is a systematic way of detecting and investigating patterns and trends in crime. In this work, we use various clustering approaches of data mining to analyse the crime data of Tamilnadu. The crime data is extracted from National Crime Records Bureau (NCRB) of India. It consists of crime information about six cities namely Chennai, Coimbatore, Salem, Madurai, Thirunelvelli and Thiruchirapalli from the year 2000–2014 with 1760 instances and 9 attributes to represent the instances. K-Means clustering, Agglomerative clustering and Density Based Spatial Clustering with Noise (DBSCAN) algorithms are used to cluster crime activities based on some predefined cases and the results of these clustering are compared to find the best suitable clustering algorithm for crime detection. The result of K-Means clustering algorithm is visualized using Google Map for interactive and easy understanding. The K-Nearest Neighbor (KNN) classification is used for crime prediction. The performance of each clustering algorithms are evaluated using the metrics such as precision, recall and F-measure, and the results are compared. This work helps the law enforcement agencies to predict and detect crimes in Tamilnadu with improved accuracy and thus reduces the crime rate.", "title": "" }, { "docid": "f6e791e85d8570a9f10b45e8f028683d", "text": "We present a smartphone-based system for real-time tele-monitoring of physical activity in patients with chronic heart-failure (CHF). We recently completed a pilot study with 15 subjects to evaluate the feasibility of the proposed monitoring in the real world and examine its requirements, privacy implications, usability, and other challenges encountered by the participants and healthcare providers. Our tele-monitoring system was designed to assess patient activity via minute-by-minute energy expenditure (EE) estimated from accelerometry. In addition, we tracked relative user location via global positioning system (GPS) to track outdoors activity and measure walking distance. The system also administered daily surveys to inquire about vital signs and general cardiovascular symptoms. 
The collected data were securely transmitted to a central server where they were analyzed in real time and were accessible to the study medical staff to monitor patient health status and provide medical intervention if needed. Although the system was designed for tele-monitoring individuals with CHF, the challenges, privacy considerations, and lessons learned from this pilot study apply to other chronic health conditions, such as diabetes and hypertension, that would benefit from continuous monitoring through mobile-health (mHealth) technologies.", "title": "" }, { "docid": "64cefd949f61afe81fbbb9ca1159dd4a", "text": "Single carrier frequency division multiple access (SC-FDMA), which utilizes single carrier modulation and frequency domain equalization is a technique that has similar performance and essentially the same overall complexity as those of OFDM, in which high peak-to-average power ratio (PAPR) is a major drawback. An outstanding advantage of SC-FDMA is its lower PAPR due to its single carrier structure. In this paper, we analyze the PAPR of SC-FDMA signals with pulse shaping. We analytically derive the time domain SC-FDMA signals and numerically compare PAPR characteristics using the complementary cumulative distribution function (CCDF) of PAPR. The results show that SC-FDMA signals indeed have lower PAPR compared to those of OFDMA. Comparing the two forms of SC-FDMA, we find that localized FDMA (LFDMA) has higher PAPR than interleaved FDMA (IFDMA) but somewhat lower PAPR than OFDMA. Also noticeable is the fact that pulse shaping increases PAPR", "title": "" }, { "docid": "1d949b64320fce803048b981ae32ce38", "text": "In the field of voice therapy, perceptual evaluation is widely used by expert listeners as a way to evaluate pathological and normal voice quality. This approach is understandably subjective as it is subject to listeners’ bias which high interand intra-listeners variability can be found. As such, research on automatic assessment of pathological voices using a combination of subjective and objective analyses emerged. The present study aimed to develop a complementary automatic assessment system for voice quality based on the well-known GRBAS scale by using a battery of multidimensional acoustical measures through Deep Neural Networks. A total of 44 dimensionality parameters including Mel-frequency Cepstral Coefficients, Smoothed Cepstral Peak Prominence and Long-Term Average Spectrum was adopted. In addition, the state-of-the-art automatic assessment system based on Modulation Spectrum (MS) features and GMM classifiers was used as comparison system. The classification results using the proposed method revealed a moderate correlation with subjective GRBAS scores of dysphonic severity, and yielded a better performance than MS-GMM system, with the best accuracy around 81.53%. The findings indicate that such assessment system can be used as an appropriate evaluation tool in determining the presence and severity of voice disorders.", "title": "" }, { "docid": "f61d5c1b0c17de6aab8a0eafedb46311", "text": "The use of social media creates the opportunity to turn organization-wide knowledge sharing in the workplace from an intermittent, centralized knowledge management process to a continuous online knowledge conversation of strangers, unexpected interpretations and re-uses, and dynamic emergence. 
We theorize four affordances of social media representing different ways to engage in these publicly visible knowledge conversations: metavoicing, triggered attending, network-informed associating, and generative role-taking. We further theorize mechanisms that affect how people engage in the knowledge conversation, finding that some mechanisms, when activated, will have positive effects on moving the knowledge conversation forward, but others will have adverse consequences not intended by the organization. These emergent tensions become the basis for the implications we draw.", "title": "" }, { "docid": "ea87bfc0d6086e367e8950b445529409", "text": "• Queue stability (Chapter 2.1) • Scheduling for stability, capacity regions (Chapter 2.3) • Linear programs (Chapter 2.3, Chapter 3) • Energy optimality (Chapter 3.2) • Opportunistic scheduling (Chapter 2.3, Chapter 3, Chapter 4.6) • Lyapunov drift and optimization (Chapter 4.1.0-4.1.2, 4.2, 4.3) • Inequality constraints and virtual queues (Chapter 4.4) • Drift-plus-penalty algorithm (Chapter 4.5) • Performance and delay tradeoffs (Chapter 3.2, 4.5) • Backpressure routing (Ex. 4.16, Chapter 5.2, 5.3)", "title": "" }, { "docid": "b52322509c5bed43b0de04847dd947a9", "text": "Chapter 1 presented a description of the ECG in terms of its etiology and clinical features, and Chapter 2 an overview of the possible sources of error introduced in the hardware collection and data archiving stages. With this groundwork in mind, this chapter is intended to introduce the reader to the ECG using a signal processing approach. The ECG typically exhibits both persistent features (such as the average PQRS-T morphology and the short-term average heart rate, or average RR interval), and nonstationary features (such as the individual RR and QT intervals, and long-term heart rate trends). Since changes in the ECG are quasi-periodic (on a beat-to-beat, daily, and perhaps even monthly basis), the frequency can be quantified in both statistical terms (mean, variance) and via spectral estimation methods. In essence, all these statistics quantify the power or degree to which an oscillation is present in a particular frequency band (or at a particular scale), often expressed as a ratio to power in another band. Even for scale-free approaches (such as wavelets), the process of feature extraction tends to have a bias for a particular scale which is appropriate for the particular data set being analyzed. ECG statistics can be evaluated directly on the ECG signal, or on features extracted from the ECG. The latter category can be broken down into either morphology-based features (such as ST level) or timing-based statistics (such as heart rate variability). Before discussing these derived statistics, an overview of the ECG itself is given.", "title": "" }, { "docid": "bc66c4c480569a21fdb593500c7e76cf", "text": "Smallholder subsistence agriculture in the rural Eastern Cape Province is recognised as one of the major contributors to food security among resource-poor households. However, subsistence agriculture is thought to be unsustainable in the ever-changing social, economic and political environment, and climate. This has contributed greatly to stagnant and widespread poverty among smallholder farmers in the Eastern Cape. For a sustainable transition from subsistence to smallholder commercial farming, strategies like accumulated social capital through rural farmer groups/cooperatives have been employed by the government and NGOs.
These strategies have yielded mixed results of failed and successful farmer groups/cooperatives. Therefore, this study was aimed at establishing the impact of social capital on farmers’ household commercialization level of maize in addition to farm/farmer characteristics. The findings of this study established that smallholders’ average household commercialization index (HCI) of maize was 45%. Household size, crop sales, source of irrigation water, and bonding social capital had a positive and significant impact on HCI of maize while off-farm incomes and social values had a negative and significant impact on the same. Thus, innovation, adoption and use of labour saving technology, improved access to irrigation water and farmers’ access to trainings in relation to strengthening group cohesion are crucial in promoting smallholder commercial farming of maize in the study area.", "title": "" }, { "docid": "10f46999738c0d47ed16326631086933", "text": "We describe JAX, a domain-specific tracing JIT compiler for generating high-performance accelerator code from pure Python and Numpy machine learning programs. JAX uses the XLA compiler infrastructure to generate optimized code for the program subroutines that are most favorable for acceleration, and these optimized subroutines can be called and orchestrated by arbitrary Python. Because the system is fully compatible with Autograd, it allows forwardand reverse-mode automatic differentiation of Python functions to arbitrary order. Because JAX supports structured control flow, it can generate code for sophisticated machine learning algorithms while maintaining high performance. We show that by combining JAX with Autograd and Numpy we get an easily programmable and highly performant ML system that targets CPUs, GPUs, and TPUs, capable of scaling to multi-core Cloud TPUs.", "title": "" }, { "docid": "9332c32039cf782d19367a9515768e42", "text": "Maternal drug use during pregnancy is associated with fetal passive addiction and neonatal withdrawal syndrome. Cigarette smoking—highly prevalent during pregnancy—is associated with addiction and withdrawal syndrome in adults. We conducted a prospective, two-group parallel study on 17 consecutive newborns of heavy-smoking mothers and 16 newborns of nonsmoking, unexposed mothers (controls). Neurologic examinations were repeated at days 1, 2, and 5. Finnegan withdrawal score was assessed every 3 h during their first 4 d. Newborns of smoking mothers had significant levels of cotinine in the cord blood (85.8 ± 3.4 ng/mL), whereas none of the controls had detectable levels. Similar findings were observed with urinary cotinine concentrations in the newborns (483.1 ± 2.5 μg/g creatinine versus 43.6 ± 1.5 μg/g creatinine; p = 0.0001). Neurologic scores were significantly lower in newborns of smokers than in control infants at days 1 (22.3 ± 2.3 versus 26.5 ± 1.1; p = 0.0001), 2 (22.4 ± 3.3 versus 26.3 ± 1.6; p = 0.0002), and 5 (24.3 ± 2.1 versus 26.5 ± 1.5; p = 0.002). Neurologic scores improved significantly from day 1 to 5 in newborns of smokers (p = 0.05), reaching values closer to control infants. Withdrawal scores were higher in newborns of smokers than in control infants at days 1 (4.5 ± 1.1 versus 3.2 ± 1.4; p = 0.05), 2 (4.7 ± 1.7 versus 3.1 ± 1.1; p = 0.002), and 4 (4.7 ± 2.1 versus 2.9 ± 1.4; p = 0.007). Significant correlations were observed between markers of nicotine exposure and neurologic-and withdrawal scores. 
We conclude that withdrawal symptoms occur in newborns exposed to heavy maternal smoking during pregnancy.", "title": "" }, { "docid": "ec7f20169de673cc14b31e8516937df2", "text": "Authors are encouraged to submit new papers to INFORMS journals by means of a style file template, which includes the journal title. However, use of a template does not certify that the paper has been accepted for publication in the named journal. INFORMS journal templates are for the exclusive purpose of submitting to an INFORMS journal and should not be used to distribute the papers in print or online or to submit the papers to another publication.", "title": "" }, { "docid": "e97c0bbb74534a16c41b4a717eed87d5", "text": "This paper discusses a survey of road accident severity analysis using data mining, in which different approaches have been considered. We have collected research work carried out by different researchers on road accidents. The article reviews this work in the context of road accident cases that use data mining approaches. It consists of a collection of methods applied in different scenarios with the aim of addressing road accidents. Each of these methods appears productive in some way for decreasing the number of casualties, and the survey can give countries where accidents lead to high fatality rates a better starting point.", "title": "" }, { "docid": "840a8befafbf6fc43d19b890431f3953", "text": "The prevalence of high hyperlipemia is increasing around the world. Our aims are to analyze the relationship of triglyceride (TG) and cholesterol (TC) with indexes of liver function and kidney function, and to develop a prediction model of TG and TC in overweight people. A total of 302 healthy adult subjects and 273 overweight subjects were enrolled in this study. The levels of fasting indexes of TG (fs-TG), TC (fs-TC), blood glucose, liver function, and kidney function were measured and analyzed by correlation analysis and multiple linear regression (MRL). The back propagation artificial neural network (BP-ANN) was applied to develop prediction models of fs-TG and fs-TC. The results showed there were significant differences in biochemical indexes between healthy people and overweight people. The correlation analysis showed fs-TG was related to weight, height, blood glucose, and indexes of liver and kidney function, while fs-TC was correlated with age and indexes of liver function (P < 0.01). The MRL analysis indicated that the regression equations of fs-TG and fs-TC were both statistically significant (P < 0.01) when the independent indexes were included. The BP-ANN model of fs-TG reached its training goal at epoch 59, while the fs-TC model achieved high prediction accuracy after training for 1000 epochs. In conclusion, fs-TG and fs-TC were strongly related to weight, height, age, blood glucose, and indexes of liver and kidney function. Based on related variables, the indexes of fs-TG and fs-TC can be predicted by BP-ANN models in overweight people.", "title": "" }, { "docid": "98ca25396ccd0e7faf0d00b46a2ab470", "text": "Smart glasses, such as Google Glass, provide always-available displays not offered by console and mobile gaming devices, and could potentially offer a pervasive gaming experience. However, research on input for games on smart glasses has been constrained by the available sensors to date. To help inform design directions, this paper explores user-defined game input for smart glasses beyond the capabilities of current sensors, and focuses on the interaction in public settings.
We conducted a user-defined input study with 24 participants, each performing 17 common game control tasks using 3 classes of interaction and 2 form factors of smart glasses, for a total of 2448 trials. Results show that users significantly preferred non-touch and non-handheld interaction, such as in-air gestures, over using handheld input devices. Also, for touch input without handheld devices, users preferred interacting with their palms over wearable devices (51% vs 20%). In addition, users preferred interactions that are less noticeable due to concerns with social acceptance, and preferred in-air gestures in front of the torso rather than in front of the face (63% vs 37%).", "title": "" }, { "docid": "35dda21bd1f2c06a446773b0bfff2dd7", "text": "Mobile devices and their application marketplaces drive the entire economy of today's mobile landscape. Android platforms alone have produced staggering revenues, exceeding five billion USD, which has attracted cybercriminals and increased malware in Android markets at an alarming rate. To better understand this slew of threats, we present CopperDroid, an automatic VMI-based dynamic analysis system to reconstruct the behaviors of Android malware. The novelty of CopperDroid lies in its agnostic approach to identify interesting OS- and high-level Android-specific behaviors. It reconstructs these behaviors by observing and dissecting system calls and, therefore, is resistant to the multitude of alterations the Android runtime is subjected to over its life-cycle. CopperDroid automatically and accurately reconstructs events of interest that describe not only well-known process-OS interactions (e.g., file and process creation), but also complex intra- and inter-process communications (e.g., SMS reception), whose semantics are typically contextualized through complex Android objects. Because CopperDroid's reconstruction mechanisms are agnostic to the underlying action invocation methods, it is able to capture actions initiated both from Java and native code execution. CopperDroid's analysis generates detailed behavioral profiles that abstract a large stream of low-level—often uninteresting—events into concise, high-level semantics, which are well-suited to provide insightful behavioral traits and open the possibility to further research directions. We carried out an extensive evaluation to assess the capabilities and performance of CopperDroid on more than 2,900 Android malware samples. Our experiments show that CopperDroid faithfully reconstructs OS- and Android-specific behaviors. Additionally, we demonstrate how CopperDroid can be leveraged to disclose additional behaviors through the use of a simple, yet effective, app stimulation technique. Using this technique, we successfully triggered and disclosed additional behaviors on more than 60% of the analyzed malware samples. This qualitatively demonstrates the versatility of CopperDroid's ability to improve dynamic-based code coverage.", "title": "" }, { "docid": "557864265ba9fe38bb4d9e4d70e40a06", "text": "Standard word embeddings lack the ability to distinguish the senses of a word, because they project each word to exactly one vector. This has a negative effect particularly when computing similarity scores between words using standard vector-based similarity measures such as cosine similarity. We argue that minor senses play an important role in word similarity computations; hence, we use an unsupervised sense inventory resource to retrofit monolingual word embeddings, producing sense-aware embeddings.
Using retrofitted sense-aware embeddings, we show improved word similarity and relatedness results on multiple word embeddings and multiple established word similarity tasks, sometimes up to an impressive margin of +0.15 Spearman correlation score.", "title": "" }, { "docid": "39ebc7cc1a2cb50fb362804b6ae0f768", "text": "We model a dependency graph as a book, a particular kind of topological space, for semantic dependency parsing. The spine of the book is made up of a sequence of words, and each page contains a subset of noncrossing arcs. To build a semantic graph for a given sentence, we design new Maximum Subgraph algorithms to generate noncrossing graphs on each page, and a Lagrangian Relaxation-based algorithm to combine pages into a book. Experiments demonstrate the effectiveness of the book embedding framework across a wide range of conditions. Our parser obtains comparable results with a state-of-the-art transition-based parser.", "title": "" }, { "docid": "824fbd2fe175b4b179226d249792b87a", "text": "While historically software validation focused on the functional requirements, recent approaches also encompass the validation of quality requirements; for example, system reliability, performance or usability. Application development for mobile platforms opens an additional area of qual i ty-power consumption. In PDAs or mobile phones, power consumption varies depending on the hardware resources used, making it possible to specify and validate correct or incorrect executions. Consider an application that downloads a video stream from the network and displays it on the mobile device's display. In the test scenario the viewing of the video is paused at a certain point. If the specification does not allow video prefetching, the user expects the network card activity to stop when video is paused. How can a test engineer check this expectation? Simply running a test suite or even tracing the software execution does not detect the network activity. However, the extraneous network activity can be detected by power measurements and power model application (Figure 1). Tools to find the power inconsistencies and to validate software from the energy point of view are needed.", "title": "" }, { "docid": "52d2004c762d4701ab275d9757c047fc", "text": "Somatic mosaicism — the presence of genetically distinct populations of somatic cells in a given organism — is frequently masked, but it can also result in major phenotypic changes and reveal the expression of otherwise lethal genetic mutations. Mosaicism can be caused by DNA mutations, epigenetic alterations of DNA, chromosomal abnormalities and the spontaneous reversion of inherited mutations. In this review, we discuss the human disorders that result from somatic mosaicism, as well as the molecular genetic mechanisms by which they arise. Specifically, we emphasize the role of selection in the phenotypic manifestations of mosaicism.", "title": "" } ]
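The sense-aware embedding passage above motivates comparing words through their best-matching sense vectors rather than a single vector per word. The toy sketch below shows a max-over-senses cosine similarity of that flavor; the three-dimensional vectors are made-up stand-ins for real embeddings.

```python
# Toy max-over-senses cosine similarity; the sense vectors are invented for illustration.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def max_sense_similarity(senses_a, senses_b):
    # Compare every sense of word A with every sense of word B, keep the best match.
    return max(cosine(a, b) for a in senses_a for b in senses_b)

bank_senses = [np.array([0.9, 0.1, 0.0]),   # financial-institution sense
               np.array([0.0, 0.2, 0.9])]   # river-bank sense
river_senses = [np.array([0.1, 0.3, 0.8])]
money_senses = [np.array([0.8, 0.2, 0.1])]

print(max_sense_similarity(bank_senses, river_senses))
print(max_sense_similarity(bank_senses, money_senses))
```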
scidocsrr
379bc9f0d7e44547dd6a08eb885ccc15
Anomaly Detection in Wireless Sensor Networks in a Non-Stationary Environment
[ { "docid": "60fe7f27cd6312c986b679abce3fdea7", "text": "In matters of great importance that have financial, medical, social, or other implications, we often seek a second opinion before making a decision, sometimes a third, and sometimes many more. In doing so, we weigh the individual opinions, and combine them through some thought process to reach a final decision that is presumably the most informed one. The process of consulting \"several experts\" before making a final decision is perhaps second nature to us; yet, the extensive benefits of such a process in automated decision making applications have only recently been discovered by computational intelligence community. Also known under various other names, such as multiple classifier systems, committee of classifiers, or mixture of experts, ensemble based systems have shown to produce favorable results compared to those of single-expert systems for a broad range of applications and under a variety of scenarios. Design, implementation and application of such systems are the main topics of this article. Specifically, this paper reviews conditions under which ensemble based systems may be more beneficial than their single classifier counterparts, algorithms for generating individual components of the ensemble systems, and various procedures through which the individual classifiers can be combined. We discuss popular ensemble based algorithms, such as bagging, boosting, AdaBoost, stacked generalization, and hierarchical mixture of experts; as well as commonly used combination rules, including algebraic combination of outputs, voting based techniques, behavior knowledge space, and decision templates. Finally, we look at current and future research directions for novel applications of ensemble systems. Such applications include incremental learning, data fusion, feature selection, learning with missing features, confidence estimation, and error correcting output codes; all areas in which ensemble systems have shown great promise", "title": "" }, { "docid": "3be38e070678e358e23cb81432033062", "text": "W ireless integrated network sensors (WINS) provide distributed network and Internet access to sensors, controls, and processors deeply embedded in equipment, facilities, and the environment. The WINS network represents a new monitoring and control capability for applications in such industries as transportation, manufacturing, health care, environmental oversight, and safety and security. WINS combine microsensor technology and low-power signal processing, computation, and low-cost wireless networking in a compact system. Recent advances in integrated circuit technology have enabled construction of far more capable yet inexpensive sensors, radios, and processors, allowing mass production of sophisticated systems linking the physical world to digital data networks [2–5]. Scales range from local to global for applications in medicine, security, factory automation, environmental monitoring, and condition-based maintenance. Compact geometry and low cost allow WINS to be embedded and distributed at a fraction of the cost of conventional wireline sensor and actuator systems. WINS opportunities depend on development of a scalable, low-cost, sensor-network architecture. Such applications require delivery of sensor information to the user at a low bit rate through low-power transceivers. Continuous sensor signal processing enables the constant monitoring of events in an environment in which short message packets would suffice. 
Future applications of distributed embedded processors and sensors will require vast numbers of devices. Conventional methods of sensor networking represent an impractical demand on cable installation and network bandwidth. Processing at the source would drastically reduce the financial, computational, and management burden on communication system", "title": "" } ]
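The first positive passage surveys ensemble-based systems and their combination rules, including voting and algebraic combination of classifier outputs. The short sketch below illustrates two such rules (majority voting on labels and averaging of class probabilities); the example predictions are invented.

```python
# Two simple combination rules from the ensemble literature: voting and averaging.
import numpy as np
from collections import Counter

def majority_vote(label_votes):
    # label_votes: one predicted label per base classifier
    return Counter(label_votes).most_common(1)[0][0]

def average_probabilities(prob_matrix):
    # prob_matrix: rows = classifiers, columns = class probabilities
    return int(np.argmax(np.mean(prob_matrix, axis=0)))

print(majority_vote(["normal", "anomaly", "anomaly"]))
print(average_probabilities(np.array([[0.6, 0.4],
                                      [0.2, 0.8],
                                      [0.3, 0.7]])))
```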
[ { "docid": "2fa6f761f22e0484a84f83e5772bef40", "text": "We consider the problem of planning smooth paths for a vehicle in a region bounded by polygonal chains. The paths are represented as B-spline functions. A path is found by solving an optimization problem using a cost function designed to care for both the smoothness of the path and the safety of the vehicle. Smoothness is defined as small magnitude of the derivative of curvature and safety is defined as the degree of centering of the path between the polygonal chains. The polygonal chains are preprocessed in order to remove excess parts and introduce safety margins for the vehicle. The method has been implemented for use with a standard solver and tests have been made on application data provided by the Swedish mining company LKAB.", "title": "" }, { "docid": "ba0dce539f33496dedac000b61efa971", "text": "The webpage aesthetics is one of the factors that affect the way people are attracted to a site. But two questions emerge: how can we improve a webpage's aesthetics and how can we evaluate this item? In order to solve this problem, we identified some of the theory that is underlying graphic design, gestalt theory and multimedia design. Based in the literature review, we proposed principles for web site design. We also propose a tool to evaluate web design.", "title": "" }, { "docid": "e726e11f855515017de77508b79d3308", "text": "OBJECTIVES\nThis study was conducted to better understand the characteristics of chronic pain patients seeking treatment with medicinal cannabis (MC).\n\n\nDESIGN\nRetrospective chart reviews of 139 patients (87 males, median age 47 years; 52 females, median age 48 years); all were legally qualified for MC use in Washington State.\n\n\nSETTING\nRegional pain clinic staffed by university faculty.\n\n\nPARTICIPANTS\n\n\n\nINCLUSION CRITERIA\nage 18 years and older; having legally accessed MC treatment, with valid documentation in their medical records. All data were de-identified.\n\n\nMAIN OUTCOME MEASURES\nRecords were scored for multiple indicators, including time since initial MC authorization, qualifying condition(s), McGill Pain score, functional status, use of other analgesic modalities, including opioids, and patterns of use over time.\n\n\nRESULTS\nOf 139 patients, 15 (11 percent) had prior authorizations for MC before seeking care in this clinic. The sample contained 236.4 patient-years of authorized MC use. Time of authorized use ranged from 11 days to 8.31 years (median of 1.12 years). Most patients were male (63 percent) yet female patients averaged 0.18 years longer authorized use. There were no other gender-specific trends or factors. Most patients (n = 123, 88 percent) had more than one pain syndrome present. Myofascial pain syndrome was the most common diagnosis (n = 114, 82 percent), followed by neuropathic pain (n = 89, 64 percent), discogenic back pain (n = 72, 51.7 percent), and osteoarthritis (n = 37, 26.6 percent). Other diagnoses included diabetic neuropathy, central pain syndrome, phantom pain, spinal cord injury, fibromyalgia, rheumatoid arthritis, HIV neuropathy, visceral pain, and malignant pain. 
In 51 (37 percent) patients, there were documented instances of major hurdles related to accessing MC, including prior physicians unwilling to authorize use, legal problems related to MC use, and difficulties in finding an affordable and consistent supply of MC.\n\n\nCONCLUSIONS\nData indicate that males and females access MC at approximately the same rate, with similar median authorization times. Although the majority of patient records documented significant symptom alleviation with MC, major treatment access and delivery barriers remain.", "title": "" }, { "docid": "b6dcf2064ad7f06fd1672b1348d92737", "text": "In this paper, we propose a two-step method to recognize multiple-food images by detecting candidate regions with several methods and classifying them with various kinds of features. In the first step, we detect several candidate regions by fusing outputs of several region detectors including Felzenszwalb's deformable part model (DPM) [1], a circle detector and the JSEG region segmentation. In the second step, we apply a feature-fusion-based food recognition method for bounding boxes of the candidate regions with various kinds of visual features including bag-of-features of SIFT and CSIFT with spatial pyramid (SP-BoF), histogram of oriented gradient (HoG), and Gabor texture features. In the experiments, we estimated ten food candidates for multiple-food images in the descending order of the confidence scores. As results, we have achieved the 55.8% classification rate, which improved the baseline result in case of using only DPM by 14.3 points, for a multiple-food image data set. This demonstrates that the proposed two-step method is effective for recognition of multiple-food images.", "title": "" }, { "docid": "d47143c38598cf88eeb8be654f8a7a00", "text": "Long Short-Term Memory (LSTM) networks have yielded excellent results on handwriting recognition. This paper describes an application of bidirectional LSTM networks to the problem of machine-printed Latin and Fraktur recognition. Latin and Fraktur recognition differs significantly from handwriting recognition in both the statistical properties of the data, as well as in the required, much higher levels of accuracy. Applications of LSTM networks to handwriting recognition use two-dimensional recurrent networks, since the exact position and baseline of handwritten characters is variable. In contrast, for printed OCR, we used a one-dimensional recurrent network combined with a novel algorithm for baseline and x-height normalization. A number of databases were used for training and testing, including the UW3 database, artificially generated and degraded Fraktur text and scanned pages from a book digitization project. The LSTM architecture achieved 0.6% character-level test-set error on English text. When the artificially degraded Fraktur data set is divided into training and test sets, the system achieves an error rate of 1.64%. On specific books printed in Fraktur (not part of the training set), the system achieves error rates of 0.15% (Fontane) and 1.47% (Ersch-Gruber). These recognition accuracies were found without using any language modelling or any other post-processing techniques.", "title": "" }, { "docid": "0b0273a1e2aeb98eb4115113c8957fd2", "text": "This paper deals with the approach of integrating a bidirectional boost-converter into the drivetrain of a (hybrid) electric vehicle in order to exploit the full potential of the electric drives and the battery. 
Currently, the automotive norms and standards are defined based on the characteristics of the voltage source. The current technologies of batteries for automotive applications have voltage which depends on the load and the state-of charge. The aim of this paper is to provide better system performance by stabilizing the voltage without the need of redesigning any of the current components in the system. To show the added-value of the proposed electrical topology, loss estimation is developed and proved based on actual components measurements and design. The component and its modelling is then implemented in a global system simulation environment of the electric architecture to show how it contributes enhancing the performance of the system.", "title": "" }, { "docid": "affa4a43b68f8c158090df3a368fe6b6", "text": "The purpose of this study is to evaluate the impact of modulated light projections perceived through the eyes on the autonomic nervous system (ANS). Three types of light projections, each containing both specific colors and specific modulations in the brainwaves frequency range, were tested, in addition to a placebo projection consisting of non-modulated white light. Evaluation was done using a combination of physiological measures (HR, HRV, SC) and psychological tests (Amen, POMS). Significant differences were found in the ANS effects of each of the colored light projections, and also between the colored and white projections.", "title": "" }, { "docid": "49f96e96623502ffe6053cab43054edf", "text": "Background YouTube, the online video creation and sharing site, supports both video content viewing and content creation activities. For a minority of people, the time spent engaging with YouTube can be excessive and potentially problematic. Method This study analyzed the relationship between content viewing, content creation, and YouTube addiction in a survey of 410 Indian-student YouTube users. It also examined the influence of content, social, technology, and process gratifications on user inclination toward YouTube content viewing and content creation. Results The results demonstrated that content creation in YouTube had a closer relationship with YouTube addiction than content viewing. Furthermore, social gratification was found to have a significant influence on both types of YouTube activities, whereas technology gratification did not significantly influence them. Among all perceived gratifications, content gratification had the highest relationship coefficient value with YouTube content creation inclination. The model fit and variance extracted by the endogenous constructs were good, which further validated the results of the analysis. Conclusion The study facilitates new ways to explore user gratification in using YouTube and how the channel responds to it.", "title": "" }, { "docid": "21ad29105c4b6772b05156afd33ac145", "text": "High resolution Digital Surface Models (DSMs) produced from airborne laser-scanning or stereo satellite images provide a very useful source of information for automated 3D building reconstruction. In this paper an investigation is reported about extraction of 3D building models from high resolution DSMs and orthorectified images produced from Worldview-2 stereo satellite imagery. The focus is on the generation of 3D models of parametric building roofs, which is the basis for creating Level Of Detail 2 (LOD2) according to the CityGML standard. 
In particular the building blocks containing several connected buildings with tilted roofs are investigated and the potentials and limitations of the modeling approach are discussed. The edge information extracted from orthorectified image has been employed as additional source of information in 3D reconstruction algorithm. A model driven approach based on the analysis of the 3D points of DSMs in a 2D projection plane is proposed. Accordingly, a building block is divided into smaller parts according to the direction and number of existing ridge lines for parametric building reconstruction. The 3D model is derived for each building part, and finally, a complete parametric model is formed by merging the 3D models of the individual building parts and adjusting the nodes after the merging step. For the remaining building parts that do not contain ridge lines, a prismatic model using polygon approximation of the corresponding boundary pixels is derived and merged to the parametric models to shape the final model of the building. A qualitative and quantitative assessment of the proposed method for the automatic reconstruction of buildings with parametric roofs is then provided by comparing the final model with the existing surface model as well as some field measurements.", "title": "" }, { "docid": "c89ce1ded524ff65c1ebd3d20be155bc", "text": "Actuarial risk assessment tools are used extensively to predict future violence, but previous studies comparing their predictive accuracies have produced inconsistent findings as a result of various methodological issues. We conducted meta-analyses of the effect sizes of 9 commonly used risk assessment tools and their subscales to compare their predictive efficacies for violence. The effect sizes were extracted from 28 original reports published between 1999 and 2008, which assessed the predictive accuracy of more than one tool. We used a within-subject design to improve statistical power and multilevel regression models to disentangle random effects of variation between studies and tools and to adjust for study features. All 9 tools and their subscales predicted violence at about the same moderate level of predictive efficacy with the exception of Psychopathy Checklist--Revised (PCL-R) Factor 1, which predicted violence only at chance level among men. Approximately 25% of the total variance was due to differences between tools, whereas approximately 85% of heterogeneity between studies was explained by methodological features (age, length of follow-up, different types of violent outcome, sex, and sex-related interactions). Sex-differentiated efficacy was found for a small number of the tools. If the intention is only to predict future violence, then the 9 tools are essentially interchangeable; the selection of which tool to use in practice should depend on what other functions the tool can perform rather than on its efficacy in predicting violence. The moderate level of predictive accuracy of these tools suggests that they should not be used solely for some criminal justice decision making that requires a very high level of accuracy such as preventive detention.", "title": "" }, { "docid": "16741aac03ea1a864ddab65c8c73eb7c", "text": "This report describes a preliminary evaluation of performance of a cell-FPGA-like architecture for future hybrid \"CMOL\" circuits. 
Such circuits will combine a semiconductor-transistor (CMOS) stack and a two-level nanowire crossbar with molecular-scale two-terminal nanodevices (programmable diodes) formed at each crosspoint. Our cell-based architecture is based on a uniform CMOL fabric of \"tiles\". Each tile consists of 12 four-transistor basic cells and one (four times larger) latch cell. Due to high density of nanodevices, which may be used for both logic and routing functions, CMOL FPGA may be reconfigured around defective nanodevices to provide high defect tolerance. Using a semi-custom set of design automation tools we have evaluated CMOL FPGA performance for the Toronto 20 benchmark set, so far without optimization of several parameters including the power supply voltage and nanowire pitch. The results show that even without such optimization, CMOL FPGA circuits may provide a density advantage of more than two orders of magnitude over the traditional CMOS FPGA with the same CMOS design rules, at comparable time delay, acceptable power consumption and potentially high defect tolerance.", "title": "" }, { "docid": "cffce89fbb97dc1d2eb31a060a335d3c", "text": "This doctoral thesis deals with a number of challenges related to investigating and devising solutions to the Sentiment Analysis Problem, a subset of the discipline known as Natural Language Processing (NLP), following a path that differs from the most common approaches currently in-use. The majority of the research and applications building in Sentiment Analysis (SA) / Opinion Mining (OM) have been conducted and developed using Supervised Machine Learning techniques. It is our intention to prove that a hybrid approach merging fuzzy sets, a solid sentiment lexicon, traditional NLP techniques and aggregation methods will have the effect of compounding the power of all the positive aspects of these tools. In this thesis we will prove three main aspects, namely: 1. That a Hybrid Classification Model based on the techniques mentioned in the previous paragraphs will be capable of: (a) performing same or better than established Supervised Machine Learning techniques - namely, Naïve Bayes and Maximum Entropy (ME) - when the latter are utilised respectively as the only classification methods being applied, when calculating subjectivity polarity, and (b) computing the intensity of the polarity previously estimated. 2. That cross-ratio uninorms can be used to effectively fuse the classification outputs of several algorithms producing a compensatory effect. 3. That the Induced Ordered Weighted Averaging (IOWA) operator is a very good choice to model the opinion of the majority (consensus) when the outputs of a number of classification methods are combined together. For academic and experimental purposes we have built the proposed methods and associated prototypes in an iterative fashion: • Step 1: we start with the so-called Hybrid Standard Classification (HSC) method, responsible for subjectivity polarity determination. • Step 2: then, we have continued with the Hybrid Advanced Classification (HAC) method that computes the polarity intensity of opinions/sentiments. • Step 3: in closing, we present two methods that produce a semantic-specific aggregation of two or more classification methods, as a complement to the HSC/HAC methods when the latter cannot generate a classification value or when we are looking for an aggregation that implies consensus, respectively: ◦ the Hybrid Advanced Classification with Aggregation by Cross-ratio Uninorm (HACACU) method. 
◦ the Hybrid Advanced Classification with Aggregation by Consensus (HACACO) method.", "title": "" }, { "docid": "8c853251e0fb408c829e6f99a581d4cf", "text": "We consider a simple and overarching representation for permutation-invariant functions of sequences (or set functions). Our approach, which we call Janossy pooling, expresses a permutation-invariant function as the average of a permutation-sensitive function applied to all reorderings of the input sequence. This allows us to leverage the rich and mature literature on permutation-sensitive functions to construct novel and flexible permutation-invariant functions. If carried out naively, Janossy pooling can be computationally prohibitive. To allow computational tractability, we consider three kinds of approximations: canonical orderings of sequences, functions with k-order interactions, and stochastic optimization algorithms with random permutations. Our framework unifies a variety of existing work in the literature, and suggests possible modeling and algorithmic extensions. We explore a few in our experiments, which demonstrate improved performance over current state-of-the-art methods.", "title": "" }, { "docid": "fb89a5aa87f1458177d6a32ef25fdf3b", "text": "The increase in population, the rapid economic growth and the rise in community living standards accelerate municipal solid waste (MSW) generation in developing cities. This problem is especially serious in Pudong New Area, Shanghai, China. The daily amount of MSW generated in Pudong was about 1.11 kg per person in 2006. According to the current population growth trend, the solid waste quantity generated will continue to increase with the city's development. In this paper, we describe a waste generation and composition analysis and provide a comprehensive review of municipal solid waste management (MSWM) in Pudong. Some of the important aspects of waste management, such as the current status of waste collection, transport and disposal in Pudong, will be illustrated. Also, the current situation will be evaluated, and its problems will be identified.", "title": "" }, { "docid": "bcd16100ca6814503e876f9f15b8c7fb", "text": "OBJECTIVE\nBrain-computer interfaces (BCIs) are devices that enable severely disabled people to communicate and interact with their environments using their brain waves. Most studies investigating BCI in humans have used scalp EEG as the source of electrical signals and focused on motor control of prostheses or computer cursors on a screen. The authors hypothesize that the use of brain signals obtained directly from the cortical surface will more effectively control a communication/spelling task compared to scalp EEG.\n\n\nMETHODS\nA total of 6 patients with medically intractable epilepsy were tested for the ability to control a visual keyboard using electrocorticographic (ECOG) signals. ECOG data collected during a P300 visual task paradigm were preprocessed and used to train a linear classifier to subsequently predict the intended target letters.\n\n\nRESULTS\nThe classifier was able to predict the intended target character at or near 100% accuracy using fewer than 15 stimulation sequences in 5 of the 6 people tested. ECOG data from electrodes outside the language cortex contributed to the classifier and enabled participants to write words on a visual keyboard.\n\n\nCONCLUSIONS\nThis is a novel finding because previous invasive BCI research in humans used signals exclusively from the motor cortex to control a computer cursor or prosthetic device. 
These results demonstrate that ECOG signals from electrodes both overlying and outside the language cortex can reliably control a visual keyboard to generate language output without voice or limb movements.", "title": "" }, { "docid": "8e324cf4900431593d9ebc73e7809b23", "text": "Even though there is a plethora of studies investigating the challenges of adopting ebanking services, a search through the literature indicates that prior studies have investigated either user adoption challenges or the bank implementation challenges. This study integrated both perspectives to provide a broader conceptual framework for investigating challenges banks face in marketing e-banking services in developing country such as Ghana. The results from the mixed method study indicated that institutional–based challenges as well as userbased challenges affect the marketing of e-banking products in Ghana. The strategic implications of the findings for marketing ebanking services are discussed to guide managers to implement e-banking services in Ghana.", "title": "" }, { "docid": "62166980f94bba5e75c9c6ad4a4348f1", "text": "In this paper the design and the implementation of a linear, non-uniform antenna array for a 77-GHz MIMO FMCW system that allows for the estimation of both the distance and the angular position of a target are presented. The goal is to achieve a good trade-off between the main beam width and the side lobe level. The non-uniform spacing in addition with the MIMO principle offers a superior performance compared to a classical uniform half-wavelength antenna array with an equal number of elements. However the design becomes more complicated and can not be tackled using analytical methods. Starting with elementary array factor considerations the design is approached using brute force, stepwise brute force, and particle swarm optimization. The particle swarm optimized array was also implemented. Simulation results and measurements are presented and discussed.", "title": "" }, { "docid": "eba25ae59603328f3ef84c0994d46472", "text": "We address the problem of how to personalize educational content to students in order to maximize their learning gains over time. We present a new computational approach to this problem called MAPLE (Multi-Armed Bandits based Personalization for Learning Environments) that combines difficulty ranking with multi-armed bandits. Given a set of target questions MAPLE estimates the expected learning gains for each question and uses an exploration-exploitation strategy to choose the next question to pose to the student. It maintains a personalized ranking over the difficulties of question in the target set and updates it in real-time according to students’ progress. We show in simulations that MAPLE was able to improve students’ learning gains compared to approaches that sequence questions in increasing level of difficulty, or rely on content experts. When implemented in a live e-learning system in the wild, MAPLE showed promising initial results.", "title": "" }, { "docid": "13974867d98411b6a999374afcc5b2cb", "text": "Current best local descriptors are learned on a large dataset of matching and non-matching keypoint pairs. However, data of this kind is not always available since detailed keypoint correspondences can be hard to establish. On the other hand, we can often obtain labels for pairs of keypoint bags. 
For example, keypoint bags extracted from two images of the same object under different views form a matching pair, and keypoint bags extracted from images of different objects form a non-matching pair. On average, matching pairs should contain more corresponding keypoints than non-matching pairs. We describe an end-to-end differentiable architecture that enables the learning of local keypoint descriptors from such weakly-labeled data.", "title": "" }, { "docid": "bc7f80192416aa7787657aed1bda3997", "text": "In this paper we propose a deep learning technique to improve the performance of semantic segmentation tasks. Previously proposed algorithms generally suffer from the over-dependence on a single modality as well as a lack of training data. We made three contributions to improve the performance. Firstly, we adopt two models which are complementary in our framework to enrich field-of-views and features to make segmentation more reliable. Secondly, we repurpose the datasets form other tasks to the segmentation task by training the two models in our framework on different datasets. This brings the benefits of data augmentation while saving the cost of image annotation. Thirdly, the number of parameters in our framework is minimized to reduce the complexity of the framework and to avoid over- fitting. Experimental results show that our framework significantly outperforms the current state-of-the-art methods with a smaller number of parameters and better generalization ability.", "title": "" } ]
scidocsrr
8f37b402bb1ac9b58883707aee4a2b5c
RELIABILITY-BASED MANAGEMENT OF BURIED PIPELINES
[ { "docid": "150e7a6f46e93fc917e43e32dedd9424", "text": "This purpose of this introductory paper is threefold. First, it introduces the Monte Carlo method with emphasis on probabilistic machine learning. Second, it reviews the main building blocks of modern Markov chain Monte Carlo simulation, thereby providing and introduction to the remaining papers of this special issue. Lastly, it discusses new interesting research horizons.", "title": "" } ]
[ { "docid": "8abd03202f496de4bec6270946d53a9c", "text": "In this paper, we use time-series modeling to forecast taxi travel demand, in the context of a mobile application-based taxi hailing service. In particular, we model the passenger demand density at various locations in the city of Bengaluru, India. Using the data, we first shortlist time-series models that suit our application. We then analyse the performance of these models by using Mean Absolute Percentage Error (MAPE) as the performance metric. In order to improve the model performance, we employ a multi-level clustering technique where we aggregate demand over neighboring cells/geohashes. We observe that the improved model based on clustering leads to a forecast accuracy of 80% per km2. In addition, our technique obtains an accuracy of 89% per km2 for the most frequently occurring use case.", "title": "" }, { "docid": "80e9f9261397cb378920a6c897fd352a", "text": "Purpose: This study develops a comprehensive research model that can explain potential customers’ behavioral intentions to adopt and use smart home services. Methodology: This study proposes and validates a new theoretical model that extends the theory of planned behavior (TPB). Partial least squares analysis (PLS) is employed to test the research model and corresponding hypotheses on data collected from 216 survey samples. Findings: Mobility, security/privacy risk, and trust in the service provider are important factors affecting the adoption of smart home services. Practical implications: To increase potential users’ adoption rate, service providers should focus on developing mobility-related services that enable people to access smart home services while on the move using mobile devices via control and monitoring functions. Originality/Value: This study is the first empirical attempt to examine user acceptance of smart home services, as most of the prior literature has concerned technical features.", "title": "" }, { "docid": "7bd440a6c7aece364877dbb5170cfcfb", "text": "Semantic representation lies at the core of several applications in Natural Language Processing. However, most existing semantic representation techniques cannot be used effectively for the representation of individual word senses. We put forward a novel multilingual concept representation, called MUFFIN, which not only enables accurate representation of word senses in different languages, but also provides multiple advantages over existing approaches. MUFFIN represents a given concept in a unified semantic space irrespective of the language of interest, enabling cross-lingual comparison of different concepts. We evaluate our approach in two different evaluation benchmarks, semantic similarity and Word Sense Disambiguation, reporting state-of-the-art performance on several standard datasets.", "title": "" }, { "docid": "29e56287071ca1fc1bf3d83f67b3ce8d", "text": "In this paper, we seek to identify factors that might increase the likelihood of adoption and continued use of cyberinfrastructure by scientists. To do so, we review the main research on Information and Communications Technology (ICT) adoption and use by addressing research problems, theories and models used, findings, and limitations. We focus particularly on the individual user perspective. We categorize previous studies into two groups: Adoption research and post-adoption (continued use) research. In addition, we review studies specifically regarding cyberinfrastructure adoption and use by scientists and other special user groups. 
We identify the limitations of previous theories, models and research findings appearing in the literature related to our current interest in scientists’ adoption and continued use of cyber-infrastructure. We synthesize the previous theories and models used for ICT adoption and use, and then we develop a theoretical framework for studying scientists’ adoption and use of cyber-infrastructure. We also proposed a research design based on the research model developed. Implications for researchers and practitioners are provided.", "title": "" }, { "docid": "da9ffb00398f6aad726c247e3d1f2450", "text": "We propose noWorkflow, a tool that transparently captures provenance of scripts and enables reproducibility. Unlike existing approaches, noWorkflow is non-intrusive and does not require users to change the way they work – users need not wrap their experiments in scientific workflow systems, install version control systems, or instrument their scripts. The tool leverages Software Engineering techniques, such as abstract syntax tree analysis, reflection, and profiling, to collect different types of provenance, including detailed information about the underlying libraries. We describe how noWorkflow captures multiple kinds of provenance and the different classes of analyses it supports: graph-based visualization; differencing over provenance trails; and inference queries.", "title": "" }, { "docid": "59e02bc986876edc0ee0a97fd4d12a28", "text": "CONTEXT\nSocial anxiety disorder is thought to involve emotional hyperreactivity, cognitive distortions, and ineffective emotion regulation. While the neural bases of emotional reactivity to social stimuli have been described, the neural bases of emotional reactivity and cognitive regulation during social and physical threat, and their relationship to social anxiety symptom severity, have yet to be investigated.\n\n\nOBJECTIVE\nTo investigate behavioral and neural correlates of emotional reactivity and cognitive regulation in patients and controls during processing of social and physical threat stimuli.\n\n\nDESIGN\nParticipants were trained to implement cognitive-linguistic regulation of emotional reactivity induced by social (harsh facial expressions) and physical (violent scenes) threat while undergoing functional magnetic resonance imaging and providing behavioral ratings of negative emotion experience.\n\n\nSETTING\nAcademic psychology department.\n\n\nPARTICIPANTS\nFifteen adults with social anxiety disorder and 17 demographically matched healthy controls.\n\n\nMAIN OUTCOME MEASURES\nBlood oxygen level-dependent signal and negative emotion ratings.\n\n\nRESULTS\nBehaviorally, patients reported greater negative emotion than controls during social and physical threat but showed equivalent reduction in negative emotion following cognitive regulation. Neurally, viewing social threat resulted in greater emotion-related neural responses in patients than controls, with social anxiety symptom severity related to activity in a network of emotion- and attention-processing regions in patients only. Viewing physical threat produced no between-group differences. Regulation during social threat resulted in greater cognitive and attention regulation-related brain activation in controls compared with patients. 
Regulation during physical threat produced greater cognitive control-related response (ie, right dorsolateral prefrontal cortex) in patients compared with controls.\n\n\nCONCLUSIONS\nCompared with controls, patients demonstrated exaggerated negative emotion reactivity and reduced cognitive regulation-related neural activation, specifically for social threat stimuli. These findings help to elucidate potential neural mechanisms of emotion regulation that might serve as biomarkers for interventions for social anxiety disorder.", "title": "" }, { "docid": "b13c9597f8de229fb7fec3e23c0694d1", "text": "Using capture-recapture analysis we estimate the effective size of the active Amazon Mechanical Turk (MTurk) population that a typical laboratory can access to be about 7,300 workers. We also estimate that the time taken for half of the workers to leave the MTurk pool and be replaced is about 7 months. Each laboratory has its own population pool which overlaps, often extensively, with the hundreds of other laboratories using MTurk. Our estimate is based on a sample of 114,460 completed sessions from 33,408 unique participants and 689 sessions across seven laboratories in the US, Europe, and Australia from January 2012 to March 2015.", "title": "" }, { "docid": "dc33d2edcfb124af607bcb817589f6e9", "text": "In this letter, a novel coaxial line to substrate integrated waveguide (SIW) broadband transition is presented. The transition is designed by connecting the inner conductor of a coaxial line to an open-circuited SIW. The configuration directly transforms the TEM mode of a coaxial line to the fundamental TE10 mode of the SIW. A prototype back-to-back transition is fabricated for X-band operation using a 0.508 mm thick RO 4003C substrate with dielectric constant 3.55. Comparison with other reported transitions shows that the present structure provides lower passband insertion loss, wider bandwidth and most compact. The area of each transition is 0.08λg2 where λg is the guided wavelength at passband center frequency of f0 = 10.5 GHz. Measured 15 dB and 20 dB matching bandwidths are over 48% and 20%, respectively, at f0.", "title": "" }, { "docid": "a4e6b629ec4b0fdf8784ba5be1a62260", "text": "Today's real-world databases typically contain millions of items with many thousands of fields. As a result, traditional distribution-based outlier detection techniques have more and more restricted capabilities and novel k-nearest neighbors based approaches have become more and more popular. However, the problems with these k-nearest neighbors rankings for top n outliers, are very computationally expensive for large datasets, and doubts exist in general whether they would work well for high dimensional datasets. To partially circumvent these problems, we propose in this paper a new global outlier factor and a new local outlier factor and an efficient outlier detection algorithm developed upon them that is easy to implement and can provide competing performances with existing solutions. Experiments performed on both synthetic and real data sets demonstrate the efficacy of our method. & 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "494d720d5a8c7c58b795c5c6131fa8d1", "text": "The increasing emergence of pervasive information systems requires a clearer understanding of the underlying characteristics in relation to user acceptance. Based on the integration of UTAUT2 and three pervasiveness constructs, we derived a comprehensive research model to account for pervasive information systems. 
Data collected from 346 participants in an online survey was analyzed to test the developed model using structural equation modeling and taking into account multigroup analysis. The results confirm the applicability of the integrated UTAUT2 model to measure pervasiveness. Implications for research and practice are discussed together with future research opportunities.", "title": "" }, { "docid": "d94d49cde6878e0841c1654090062559", "text": "In previous work we described a method for compactly representing graphs with small separators, which makes use of small separators, and presented preliminary experimental results. In this paper we extend the experimental results in several ways, including extensions for dynamic insertion and deletion of edges, a comparison of a variety of coding schemes, and an implementation of two applications using the representation. The results show that the representation is quite effective for a wide variety of real-world graphs, including graphs from finite-element meshes, circuits, street maps, router connectivity, and web links. In addition to significantly reducing the memory requirements, our implementation of the representation is faster than standard representations for queries. The byte codes we introduce lead to DFT times that are a factor of 2.5 faster than our previous results with gamma codes and a factor of between 1 and 9 faster than adjacency lists, while using a factor of between 3 and 6 less space.", "title": "" }, { "docid": "0e45e57b4e799ebf7e8b55feded7e9e1", "text": "IMPORTANCE\nIt is increasingly evident that Parkinson disease (PD) is not a single entity but rather a heterogeneous neurodegenerative disorder.\n\n\nOBJECTIVE\nTo evaluate available evidence, based on findings from clinical, imaging, genetic and pathologic studies, supporting the differentiation of PD into subtypes.\n\n\nEVIDENCE REVIEW\nWe performed a systematic review of articles cited in PubMed between 1980 and 2013 using the following search terms: Parkinson disease, parkinsonism, tremor, postural instability and gait difficulty, and Parkinson disease subtypes. The final reference list was generated on the basis of originality and relevance to the broad scope of this review.\n\n\nFINDINGS\nSeveral subtypes, such as tremor-dominant PD and postural instability gait difficulty form of PD, have been found to cluster together. Other subtypes also have been identified, but validation by subtype-specific biomarkers is still lacking.\n\n\nCONCLUSIONS AND RELEVANCE\nSeveral PD subtypes have been identified, but the pathogenic mechanisms underlying the observed clinicopathologic heterogeneity in PD are still not well understood. Further research into subtype-specific diagnostic and prognostic biomarkers may provide insights into mechanisms of neurodegeneration and improve epidemiologic and therapeutic clinical trial designs.", "title": "" }, { "docid": "0218c583a8658a960085ddf813f38dbf", "text": "The null-hypothesis significance-test procedure (NHSTP) is defended in the context of the theory-corroboration experiment, as well as the following contrasts: (a) substantive hypotheses versus statistical hypotheses, (b) theory corroboration versus statistical hypothesis testing, (c) theoretical inference versus statistical decision, (d) experiments versus nonexperimental studies, and (e) theory corroboration versus treatment assessment. The null hypothesis can be true because it is the hypothesis that errors are randomly distributed in data. 
Moreover, the null hypothesis is never used as a categorical proposition. Statistical significance means only that chance influences can be excluded as an explanation of data; it does not identify the nonchance factor responsible. The experimental conclusion is drawn with the inductive principle underlying the experimental design. A chain of deductive arguments gives rise to the theoretical conclusion via the experimental conclusion. The anomalous relationship between statistical significance and the effect size often used to criticize NHSTP is more apparent than real. The absolute size of the effect is not an index of evidential support for the substantive hypothesis. Nor is the effect size, by itself, informative as to the practical importance of the research result. Being a conditional probability, statistical power cannot be the a priori probability of statistical significance. The validity of statistical power is debatable because statistical significance is determined with a single sampling distribution of the test statistic based on H0, whereas it takes two distributions to represent statistical power or effect size. Sample size should not be determined in the mechanical manner envisaged in power analysis. It is inappropriate to criticize NHSTP for nonstatistical reasons. At the same time, neither effect size, nor confidence interval estimate, nor posterior probability can be used to exclude chance as an explanation of data. Neither can any of them fulfill the nonstatistical functions expected of them by critics.", "title": "" }, { "docid": "1b5fc0a7b39bedcac9bdc52584fb8a22", "text": "Neem (Azadirachta indica) is a medicinal plant of containing diverse chemical active substances of several biological properties. So, the aim of the current investigation was to assess the effects of water leaf extract of neem plant on the survival and healthy status of Nile tilapia (Oreochromis niloticus), African cat fish (Clarias gariepinus) and zooplankton community. The laboratory determinations of lethal concentrations (LC 100 and LC50) through a static bioassay test were performed. The 24 h LC100 of neem leaf extract was estimated as 4 and 11 g/l, for juvenile's O. niloticus and C. gariepinus, respectively, while, the 96-h LC50 was 1.8 and 4 g/l, respectively. On the other hand, the 24 h LC100 for cladocera and copepoda were 0.25 and 0.45 g/l, respectively, while, the 96-h LC50 was 0.1 and 0.2 g/l, respectively. At the highest test concentrations, adverse effects were obvious with significant reductions in several cladoceran and copepod species. Some alterations in glucose levels, total protein, albumin, globulin as well as AST and ALT in plasma of treated O. niloticus and C. gariepinus with /2 and /10 LC50 of neem leaf water extract compared with non-treated one after 2 and 7 days of exposure were recorded and discussed. It could be concluded that the application of neem leaf extract can be used to control unwanted organisms in ponds as environment friendly material instead of deleterious pesticides. Also, extensive investigations should be established for the suitable methods of application in aquatic animal production facilities to be fully explored in future.", "title": "" }, { "docid": "cd4e2e3af17cd84d4ede35807e71e783", "text": "A proposal for saliency computation within the visual cortex is put forth based on the premise that localized saliency computation serves to maximize information sampled from one's environment. 
The model is built entirely on computational constraints but nevertheless results in an architecture with cells and connectivity reminiscent of that appearing in the visual cortex. It is demonstrated that a variety of visual search behaviors appear as emergent properties of the model and therefore basic principles of coding and information transmission. Experimental results demonstrate greater efficacy in predicting fixation patterns across two different data sets as compared with competing models.", "title": "" }, { "docid": "f73cd33c8dfc9791558b239aede6235b", "text": "Web clustering engines organize search results by topic, thus offering a complementary view to the flat-ranked list returned by conventional search engines. In this survey, we discuss the issues that must be addressed in the development of a Web clustering engine, including acquisition and preprocessing of search results, their clustering and visualization. Search results clustering, the core of the system, has specific requirements that cannot be addressed by classical clustering algorithms. We emphasize the role played by the quality of the cluster labels as opposed to optimizing only the clustering structure. We highlight the main characteristics of a number of existing Web clustering engines and also discuss how to evaluate their retrieval performance. Some directions for future research are finally presented.", "title": "" }, { "docid": "4dba2a9a29f58b55a6b2c3101acf2437", "text": "Clinical and neurobiological findings have reported the involvement of endocannabinoid signaling in the pathophysiology of schizophrenia. This system modulates dopaminergic and glutamatergic neurotransmission that is associated with positive, negative, and cognitive symptoms of schizophrenia. Despite neurotransmitter impairments, increasing evidence points to a role of glial cells in schizophrenia pathobiology. Glial cells encompass three main groups: oligodendrocytes, microglia, and astrocytes. These cells promote several neurobiological functions, such as myelination of axons, metabolic and structural support, and immune response in the central nervous system. Impairments in glial cells lead to disruptions in communication and in the homeostasis of neurons that play role in pathobiology of disorders such as schizophrenia. Therefore, data suggest that glial cells may be a potential pharmacological tool to treat schizophrenia and other brain disorders. In this regard, glial cells express cannabinoid receptors and synthesize endocannabinoids, and cannabinoid drugs affect some functions of these cells that can be implicated in schizophrenia pathobiology. Thus, the aim of this review is to provide data about the glial changes observed in schizophrenia, and how cannabinoids could modulate these alterations.", "title": "" }, { "docid": "e2807120a8a04a9c5f5f221e413aec4d", "text": "Background A military aircraft in a hostile environment may need to use radar jamming in order to avoid being detected or engaged by the enemy. Effective jamming can require knowledge of the number and type of enemy radars; however, the radar receiver on the aircraft will observe a single stream of pulses from all radar emitters combined. It is advantageous to separate this collection of pulses into individual streams each corresponding to a particular emitter in the environment; this process is known as pulse deinterleaving. 
Pulse deinterleaving is critical for effective electronic warfare (EW) signal processing such as electronic attack (EA) and electronic protection (EP) because it not only aids in the identification of enemy radars but also permits the intelligent allocation of processing resources.", "title": "" }, { "docid": "6a470404c36867a18a98fafa9df6848f", "text": "Memory links use variable-impedance drivers, feed-forward equalization (FFE) [1], on-die termination (ODT) and slew-rate control to optimize the signal integrity (SI). An asymmetric DRAM link configuration exploits the availability of a fast CMOS technology on the memory controller side to implement powerful equalization, while keeping the circuit complexity on the DRAM side relatively simple. This paper proposes the use of Tomlinson Harashima precoding (THP) [2-4] in a memory controller as replacement of the afore-mentioned SI optimization techniques. THP is a transmitter equalization technique in which post-cursor inter-symbol interference (ISI) is cancelled by means of an infinite impulse response (IIR) filter with modulo-based amplitude limitation; similar to a decision feedback equalizer (DFE) on the receive side. However, in contrast to a DFE, THP does not suffer from error propagation.", "title": "" }, { "docid": "570e48e839bd2250473d4332adf2b53f", "text": "Autologous stem cell transplant can be a curative therapy to restore normal hematopoiesis after myeloablative treatments in patients with malignancies. Aim: To evaluate the effect of rehabilitation program for caregivers about patients’ post autologous bone marrow transplantation Research Design: A quasi-experimental design was used. Setting: The study was conducted in Sheikh Zayed Specialized Hospital at Oncology Outpatient Clinic of Bone Marrow Transplantation Unit. Sample: A purposive sample comprised; a total number of 60 patients, their age ranged from 21 to 50 years, free from any other chronic disease and the caregivers are living with the patients in the same home. Tools: Two tools were used for data collection. First tool: An interviewing autologous bone marrow transplantation questionnaire for the patients and their caregivers was divided into five parts; Including: Socio-demographic data, knowledge of caregivers regarding autologous bone marrow transplant and side effect of chemotherapy, family caregivers’ practices according to their providing care related to post bone marrow transplantation, signs and symptoms, activities of daily living for patients and home environmental sanitation for the patients. Second tool: deals with physical examination assessment of the patients from head to toe. Results: 61.7% of patients aged 30˂40 years, and 68.3 % were female. Regarding the type of relationship with the patients, 48.3% were the mother, 58.3% of patients who underwent autologous bone marrow transplantation had a sanitary environment and there were highly statistically significant differences between caregivers’ knowledge and practices pre/post program. Conclusion: There were highly statistically significant differences between family caregivers' total knowledge, their practices, as well as their total caregivers’ knowledge, practices and patients’ independency level pre/post rehabilitation program. . 
Recommendations: Counseling for family caregivers of patients who underwent autologous bone marrow transplantation and carrying out rehabilitation program for the patients and their caregivers to be performed properly during the rehabilitation period at cancer hospitals such as 57357 Hospital and The National Cancer Institute in Cairo.", "title": "" } ]
scidocsrr
38a5087591e786f4da8b636d631b6e8b
Evaluation of intrabony defects treated with platelet-rich fibrin or autogenous bone graft: A comparative analysis
[ { "docid": "e1adfaf4af1e4fb5d0101a157039ccfe", "text": "Platelet-rich fibrin (PRF) belongs to a new generation of platelet concentrates, with simplified processing and without biochemical blood handling. In this second article, we investigate the platelet-associated features of this biomaterial. During PRF processing by centrifugation, platelets are activated and their massive degranulation implies a very significant cytokine release. Concentrated platelet-rich plasma platelet cytokines have already been quantified in many technologic configurations. To carry out a comparative study, we therefore undertook to quantify PDGF-BB, TGFbeta-1, and IGF-I within PPP (platelet-poor plasma) supernatant and PRF clot exudate serum. These initial analyses revealed that slow fibrin polymerization during PRF processing leads to the intrinsic incorporation of platelet cytokines and glycanic chains in the fibrin meshes. This result would imply that PRF, unlike the other platelet concentrates, would be able to progressively release cytokines during fibrin matrix remodeling; such a mechanism might explain the clinically observed healing properties of PRF.", "title": "" } ]
[ { "docid": "7f553d57ec54b210e86e4d7abba160d7", "text": "SUMMARY\nBioIE is a rule-based system that extracts informative sentences relating to protein families, their structures, functions and diseases from the biomedical literature. Based on manual definition of templates and rules, it aims at precise sentence extraction rather than wide recall. After uploading source text or retrieving abstracts from MEDLINE, users can extract sentences based on predefined or user-defined template categories. BioIE also provides a brief insight into the syntactic and semantic context of the source-text by looking at word, N-gram and MeSH-term distributions. Important Applications of BioIE are in, for example, annotation of microarray data and of protein databases.\n\n\nAVAILABILITY\nhttp://umber.sbs.man.ac.uk/dbbrowser/bioie/", "title": "" }, { "docid": "381e7083535bb5f15cdece7df4e986e3", "text": "We present a hybrid deep learning method for modelling the uncertainty of camera relocalization from a single RGB image. The proposed system leverages the discriminative deep image representation from a convolutional neural network, and uses Gaussian Process regressors to generate the probability distribution of the six degree of freedom (6DoF) camera pose in an end-to-end fashion. This results in a network that can generate uncertainties over its inferences with no need to sample many times. Furthermore we show that our objective based on KL divergence reduces the dependence on the choice of hyperparameters. The results show that compared to the state-of-the-art Bayesian camera relocalization method, our model produces comparable localization uncertainty and improves the system efficiency significantly, without loss of accuracy.", "title": "" }, { "docid": "d1eb2bf9d265017450a8a891540afa30", "text": "Air-gapped networks are isolated, separated both logically and physically from public networks. Although the feasibility of invading such systems has been demonstrated in recent years, exfiltration of data from air-gapped networks is still a challenging task. In this paper we present GSMem, a malware that can exfiltrate data through an air-gap over cellular frequencies. Rogue software on an infected target computer modulates and transmits electromagnetic signals at cellular frequencies by invoking specific memory-related instructions and utilizing the multichannel memory architecture to amplify the transmission. Furthermore, we show that the transmitted signals can be received and demodulated by a rootkit placed in the baseband firmware of a nearby cellular phone. We present crucial design issues such as signal generation and reception, data modulation, and transmission detection. We implement a prototype of GSMem consisting of a transmitter and a receiver and evaluate its performance and limitations. Our current results demonstrate its efficacy and feasibility, achieving an effective transmission distance of 1–5.5 meters with a standard mobile phone. When using a dedicated, yet affordable hardware receiver, the effective distance reached over 30 meters.", "title": "" }, { "docid": "11ca0df1121fc8a8e0ebaec58ea08a87", "text": "In real video surveillance scenarios, visual pedestrian attributes, such as gender, backpack, clothes types, are very important for pedestrian retrieval and person reidentification. Existing methods for attributes recognition have two drawbacks: (a) handcrafted features (e.g. 
color histograms, local binary patterns) cannot cope well with the difficulty of real video surveillance scenarios; (b) the relationship among pedestrian attributes is ignored. To address the two drawbacks, we propose two deep learning based models to recognize pedestrian attributes. On the one hand, each attribute is treated as an independent component and the deep learning based single attribute recognition model (DeepSAR) is proposed to recognize each attribute one by one. On the other hand, to exploit the relationship among attributes, the deep learning framework which recognizes multiple attributes jointly (DeepMAR) is proposed. In the DeepMAR, one attribute can contribute to the representation of other attributes. For example, the gender of woman can contribute to the representation of long hair and wearing skirt. Experiments on recent popular pedestrian attribute datasets illustrate that our proposed models achieve the state-of-the-art results.", "title": "" }, { "docid": "f4a0738d814e540f7c208ab1e3666fb7", "text": "In this paper, we analyze a generic algorithm scheme for sequential global optimization using Gaussian processes. The upper bounds we derive on the cumulative regret for this generic algorithm improve by an exponential factor the previously known bounds for algorithms like GP-UCB. We also introduce the novel Gaussian Process Mutual Information algorithm (GP-MI), which significantly improves further these upper bounds for the cumulative regret. We confirm the efficiency of this algorithm on synthetic and real tasks against the natural competitor, GP-UCB, and also the Expected Improvement heuristic. Preprint for the 31st International Conference on Machine Learning (ICML 2014). Erratum: After the publication of our article, we found an error in the proof of Lemma 1 which invalidates the main theorem. It appears that the information given to the algorithm is not sufficient for the main theorem to hold true. The theoretical guarantees would remain valid in a setting where the algorithm observes the instantaneous regret instead of noisy samples of the unknown function. We describe in this page the mistake and its consequences. Let f : X → R be the unknown function to be optimized, which is a sample from a Gaussian process. Let’s fix x, x_1, ..., x_T ∈ X and the observations y_t = f(x_t) + ε_t, where the noise variables ε_t are independent Gaussian noise N(0, σ). We define the instantaneous regret r_t = f(x⋆) − f(x_t) and M_T = (1/T) ∑", "title": "" }, { "docid": "71f8aca9d325f015836033c2a46adaa6", "text": "BACKGROUND\nTwenty states currently require that women seeking abortion be counseled on possible psychological responses, with six states stressing negative responses. The majority of research finds that women whose unwanted pregnancies end in abortion do not subsequently have adverse mental health outcomes; scant research examines this relationship for young women.\n\n\nMETHODS\nFour waves of data from the National Longitudinal Study of Adolescent Health were analyzed. Population-averaged lagged logistic and linear regression models were employed to test the relationship between pregnancy resolution outcome and subsequent depressive symptoms, adjusting for prior depressive symptoms, history of traumatic experiences, and sociodemographic covariates. Depressive symptoms were measured using a nine-item version of the Center for Epidemiologic Studies Depression scale. 
Analyses were conducted among two subsamples of women whose unwanted first pregnancies were resolved in either abortion or live birth: (1) 856 women with an unwanted first pregnancy between Waves 2 and 3; and (2) 438 women with an unwanted first pregnancy between Waves 3 and 4 (unweighted n's).\n\n\nRESULTS\nIn unadjusted and adjusted linear and logistic regression analyses for both subsamples, there was no association between having an abortion after an unwanted first pregnancy and subsequent depressive symptoms. In fully adjusted models, the most recent measure of prior depressive symptoms was consistently associated with subsequent depressive symptoms.\n\n\nCONCLUSIONS\nIn a nationally representative, longitudinal dataset, there was no evidence that young women who had abortions were at increased risk of subsequent depressive symptoms compared with those who give birth after an unwanted first pregnancy.", "title": "" }, { "docid": "06b0708250515510b8a3fc302045fe4b", "text": "While the subject of cyberbullying of children and adolescents has begun to be addressed, less attention and research have focused on cyberbullying in the workplace. Male-dominated workplaces such as manufacturing settings are found to have an increased risk of workplace bullying, but the prevalence of cyberbullying in this sector is not known. This exploratory study investigated the prevalence and methods of face-to-face bullying and cyberbullying of males at work. One hundred three surveys (a modified version of the revised Negative Acts Questionnaire [NAQ-R]) were returned from randomly selected members of the Australian Manufacturing Workers' Union (AMWU). The results showed that 34% of respondents were bullied face-to-face, and 10.7% were cyberbullied. All victims of cyberbullying also experienced face-to-face bullying. The implications for organizations' \"duty of care\" in regard to this new form of bullying are indicated.", "title": "" }, { "docid": "3ff01763def34800cf8afb9fc5fa9c83", "text": "The emerging machine learning technique called support vector machines is proposed as a method for performing nonlinear equalization in communication systems. The support vector machine has the advantage that a smaller number of parameters for the model can be identified in a manner that does not require the extent of prior information or heuristic assumptions that some previous techniques require. Furthermore, the optimization method of a support vector machine is quadratic programming, which is a well-studied and understood mathematical programming technique. Support vector machine simulations are carried out on nonlinear problems previously studied by other researchers using neural networks. This allows initial comparison against other techniques to determine the feasibility of using the proposed method for nonlinear detection. Results show that support vector machines perform as well as neural networks on the nonlinear problems investigated. A method is then proposed to introduce decision feedback processing to support vector machines to address the fact that intersymbol interference (ISI) data generates input vectors having temporal correlation, whereas a standard support vector machine assumes independent input vectors. Presenting the problem from the viewpoint of the pattern space illustrates the utility of a bank of support vector machines. 
This approach yields a nonlinear processing method that is somewhat different than the nonlinear decision feedback method whereby the linear feedback filter of the decision feedback equalizer is replaced by a Volterra filter. A simulation using a linear system shows that the proposed method performs equally to a conventional decision feedback equalizer for this problem.", "title": "" }, { "docid": "a3dc6a178b7861959b992387366c2c78", "text": "Linked data and semantic web technologies are gaining impact and importance in the Architecture, Engineering, Construction and Facility Management (AEC/FM) industry. Whereas we have seen a strong technological shift with the emergence of Building Information Modeling (BIM) tools, this second technological shift to the exchange and management of building data over the web might be even stronger than the first one. In order to make this a success, the AEC/FM industry will need strong and appropriate ontologies, as they will allow industry practitioners to structure their data in a commonly agreed format and exchange the data. Herein, we look at the ontologies that are emerging in the area of Building Automation and Control Systems (BACS). We propose a BACS ontology in strong alignment with existing ontologies and evaluate how it can be used for capturing automation and control systems of a building by modeling a use case.", "title": "" }, { "docid": "5e0ac4a3957f5eba26790f54678df7fc", "text": "Recent statistics show that in 2015 more than 140 millions new malware samples have been found. Among these, a large portion is due to ransomware, the class of malware whose specific goal is to render the victim’s system unusable, in particular by encrypting important files, and then ask the user to pay a ransom to revert the damage. Several ransomware include sophisticated packing techniques, and are hence difficult to statically analyse. We present EldeRan, a machine learning approach for dynamically analysing and classifying ransomware. EldeRan monitors a set of actions performed by applications in their first phases of installation checking for characteristics signs of ransomware. Our tests over a dataset of 582 ransomware belonging to 11 families, and with 942 goodware applications, show that EldeRan achieves an area under the ROC curve of 0.995. Furthermore, EldeRan works without requiring that an entire ransomware family is available beforehand. These results suggest that dynamic analysis can support ransomware detection, since ransomware samples exhibit a set of characteristic features at run-time that are common across families, and that helps the early detection of new variants. We also outline some limitations of dynamic analysis for ransomware and propose possible solutions.", "title": "" }, { "docid": "ad903f1d8998200d89234f0244452ad4", "text": "Within last two decades, social media has emerged as almost an alternate world where people communicate with each other and express opinions about almost anything. This makes platforms like Facebook, Reddit, Twitter, Myspace etc. a rich bank of heterogeneous data, primarily expressed via text but reflecting all textual and non-textual data that human interaction can produce. We propose a novel attention based hierarchical LSTM model to classify discourse act sequences in social media conversations, aimed at mining data from online discussion using textual meanings beyond sentence level. 
The very uniqueness of the task is the complete categorization of possible pragmatic roles in informal textual discussions, contrary to extraction of question-answers, stance detection or sarcasm identification which are very much role specific tasks. An early attempt was made on a Reddit discussion dataset. We train our model on the same data, and present test results on two different datasets, one from Reddit and one from Facebook. Our proposed model outperformed the previous one in terms of domain independence; without using platform-dependent structural features, our hierarchical LSTM with word relevance attention mechanism achieved F1-scores of 71% and 66% respectively to predict discourse roles of comments in Reddit and Facebook discussions. Efficiency of recurrent and convolutional architectures in order to learn discursive representation on the same task has been presented and analyzed, with different word and comment embedding schemes. Our attention mechanism enables us to inquire into relevance ordering of text segments according to their roles in discourse. We present a human annotator experiment to unveil important observations about modeling and data annotation. Equipped with our text-based discourse identification model, we inquire into how heterogeneous non-textual features like location, time, leaning of information etc. play their roles in characterizing online discussions on Facebook.", "title": "" },
    { "docid": "33468c214408d645651871bd8018ed82", "text": "In this paper, we carry out two experiments on the TIMIT speech corpus with bidirectional and unidirectional Long Short Term Memory (LSTM) networks. In the first experiment (framewise phoneme classification) we find that bidirectional LSTM outperforms both unidirectional LSTM and conventional Recurrent Neural Networks (RNNs). In the second (phoneme recognition) we find that a hybrid BLSTM-HMM system improves on an equivalent traditional HMM system, as well as unidirectional LSTM-HMM.", "title": "" },
    { "docid": "b0c91e6f8d1d6d41693800e1253b414f", "text": "Tightly coupling GNSS pseudorange and Doppler measurements with other sensors is known to increase the accuracy and consistency of positioning information. Nowadays, high-accuracy geo-referenced lane marking maps are seen as key information sources in autonomous vehicle navigation. When an exteroceptive sensor such as a video camera or a lidar is used to detect them, lane markings provide positioning information which can be merged with GNSS data. In this paper, measurements from a forward-looking video camera are merged with raw GNSS pseudoranges and Dopplers on visible satellites. To create a localization system that provides pose estimates with high availability, dead reckoning sensors are also integrated. The data fusion problem is then formulated as sequential filtering. A reduced-order state space modeling of the observation problem is proposed to give a real-time system that is easy to implement. A Kalman filter with measured input and correlated noises is developed using a suitable error model of the GNSS pseudoranges. 
Our experimental results show that this tightly coupled approach performs better, in terms of accuracy and consistency, than a loosely coupled method using GNSS fixes as inputs.", "title": "" },
    { "docid": "69a6cfb649c3ccb22f7a4467f24520f3", "text": "We propose a two-stage neural model to tackle question generation from documents. First, our model estimates the probability that word sequences in a document are ones that a human would pick when selecting candidate answers by training a neural key-phrase extractor on the answers in a question-answering corpus. Predicted key phrases then act as target answers and condition a sequence-to-sequence question-generation model with a copy mechanism. Empirically, our key-phrase extraction model significantly outperforms an entity-tagging baseline and existing rule-based approaches. We further demonstrate that our question generation system formulates fluent, answerable questions from key phrases. This two-stage system could be used to augment or generate reading comprehension datasets, which may be leveraged to improve machine reading systems or in educational settings.", "title": "" },
    { "docid": "024f88a24593455b532f85327d741bea", "text": "Many women suffer from excessive hair growth, often in combination with polycystic ovarian syndrome (PCOS). It is unclear how hirsutism influences such women's experiences of their bodies. Our aim is to describe and interpret women's experiences of their bodies when living with hirsutism. Interviews were conducted with 10 women with hirsutism. We used a qualitative latent content analysis. Four closely intertwined themes were disclosed: the body was experienced as a yoke, a freak, a disgrace, and as a prison. Hirsutism deeply affects women's experiences of their bodies in a negative way.", "title": "" },
    { "docid": "37d1b8960dd95dfca5c307727ddfdc6c", "text": "Reasoning about the future is fundamental to intelligence. In this work, I consider the problem of reasoning about the future actions of an intelligent agent. We find the framework of learning sequential policies beneficial, which poses a set of important design decisions. The focus of this work is the exploration of various policy-learning design decisions, and how these design decisions affect the primary task of forecasting agent futures. Throughout this work, I use demonstrations of agent behavior and often use rich visual data to drive learning. I developed forecasting approaches to excel in diverse, realistic, single-agent domains. These include sparse models to generalize from few demonstrations of human daily activity, adaptive models to continuously learn from demonstrations of human daily activity, and high-dimensional generative models learned from demonstrations of human driving behavior. I also explored incentivized forecasting, which encourages an artificial agent that only has access to partial observations of state to learn predictive state representations in order to perform a task better. While powerful and useful in these settings, our answers have only been tested in single agent domains. Yet, many realistic scenarios involve multiple agents undertaking complex behaviors: for instance, cars and people navigating and negotiating at intersections. Therefore, I propose to extend our generative framework to multiagent domains as the first direction of future work. This involves generalizing representations and inputs to multiple agents. 
Second, in the more difficult multiagent setting where we do not have access to expert demonstrations for one of the agents, our learning system should couple its forecasts of other agents with its own behavior. A third direction of future work is extension of our generative model to the online learning setting. Altogether, our answers will serve as a guiding extensible framework for further development of practical learning-based forecasting systems.", "title": "" }, { "docid": "3b1a7539000a8ddabdaa4888b8bb1adc", "text": "This paper presents evaluations among the most usual maximum power point tracking (MPPT) techniques, doing meaningful comparisons with respect to the amount of energy extracted from the photovoltaic (PV) panel [tracking factor (TF)] in relation to the available power, PV voltage ripple, dynamic response, and use of sensors. Using MatLab/Simulink and dSPACE platforms, a digitally controlled boost dc-dc converter was implemented and connected to an Agilent Solar Array E4350B simulator in order to verify the analytical procedures. The main experimental results are presented for conventional MPPT algorithms and improved MPPT algorithms named IC based on proportional-integral (PI) and perturb and observe based on PI. Moreover, the dynamic response and the TF are also evaluated using a user-friendly interface, which is capable of online program power profiles and computes the TF. Finally, a typical daily insulation is used in order to verify the experimental results for the main PV MPPT methods.", "title": "" }, { "docid": "9e3bba7a681a838fb0b32c1e06eaae93", "text": "This review focuses on the synthesis, protection, functionalization, and application of magnetic nanoparticles, as well as the magnetic properties of nanostructured systems. Substantial progress in the size and shape control of magnetic nanoparticles has been made by developing methods such as co-precipitation, thermal decomposition and/or reduction, micelle synthesis, and hydrothermal synthesis. A major challenge still is protection against corrosion, and therefore suitable protection strategies will be emphasized, for example, surfactant/polymer coating, silica coating and carbon coating of magnetic nanoparticles or embedding them in a matrix/support. Properly protected magnetic nanoparticles can be used as building blocks for the fabrication of various functional systems, and their application in catalysis and biotechnology will be briefly reviewed. Finally, some future trends and perspectives in these research areas will be outlined.", "title": "" }, { "docid": "c06e1491b0aabbbd73628c2f9f45d65d", "text": "With the integration of deep learning into the traditional field of reinforcement learning in the recent decades, the spectrum of applications that artificial intelligence caters is currently very broad. As using AI to play games is a traditional application of reinforcement learning, the project’s objective is to implement a deep reinforcement learning agent that can defeat a video game. Since it is often difficult to determine which algorithms are appropriate given the wide selection of state-of-the-art techniques in the discipline, proper comparisons and investigations of the algorithms are a prerequisite to implementing such an agent. As a result, this paper serves as a platform for exploring the possibility and effectiveness of using conventional state-of-the-art reinforcement learning methods for playing Pacman maps. 
In particular, this paper demonstrates that Combined DQN, a variation of Rainbow DQN, is able to attain high performance in small maps such as 506Pacman, smallGrid and mediumGrid. It was also demonstrated that the trained agents could play Pacman maps similar to those used in training, although with limited performance. Nevertheless, the algorithm suffers due to its data inefficiency and lack of human-like features, which may be remedied in the future by introducing more human-like features into the algorithm, such as intrinsic motivation and imagination.", "title": "" },
    { "docid": "d516a59094e3197bce709f4414db4517", "text": "Authorship attribution deals with identifying the authors of anonymous texts. Traditionally, research in this field has focused on formal texts, such as essays and novels, but recently more attention has been given to texts generated by on-line users, such as e-mails and blogs. Authorship attribution of such on-line texts is a more challenging task than traditional authorship attribution, because such texts tend to be short, and the number of candidate authors is often larger than in traditional settings. We address this challenge by using topic models to obtain author representations. In addition to exploring novel ways of applying two popular topic models to this task, we test our new model that projects authors and documents to two disjoint topic spaces. Utilizing our model in authorship attribution yields state-of-the-art performance on several data sets, containing either formal texts written by a few authors or informal texts generated by tens to thousands of on-line users. We also present experimental results that demonstrate the applicability of topical author representations to two other problems: inferring the sentiment polarity of texts, and predicting the ratings that users would give to items such as movies.", "title": "" } ]
scidocsrr
2cc337dd5ddbf1d672bcf882343ded07
Ratings for emotion film clips.
[ { "docid": "93d8b8afe93d10e54bf4a27ba3b58220", "text": "Researchers interested in emotion have long struggled with the problem of how to elicit emotional responses in the laboratory. In this article, we summarise five years of work to develop a set of films that reliably elicit each of eight emotional states (amusement, anger, contentment, disgust, fear, neutral, sadness, and surprise). After evaluating over 250 films, we showed selected film clips to an ethnically diverse sample of 494 English-speaking subjects. We then chose the two best films for each of the eight target emotions based on the intensity and discreteness of subjects' responses to each film. We found that our set of 16 films successfully elicited amusement, anger, contentment. disgust, sadness, surprise, a relatively neutral state, and, to a lesser extent, fear. We compare this set of films with another set recently described by Philippot (1993), and indicate that detailed instructions for creating our set of film stimuli will be provided on request.", "title": "" } ]
[ { "docid": "cd36a4e57a446e25ae612cdc31f6293e", "text": "Privacy and security concerns can prevent sharing of data, derailing data mining projects. Distributed knowledge discovery, if done correctly, can alleviate this problem. The key is to obtain valid results, while providing guarantees on the (non)disclosure of data. We present a method for k-means clustering when different sites contain different attributes for a common set of entities. Each site learns the cluster of each entity, but learns nothing about the attributes at other sites.", "title": "" }, { "docid": "a8477be508fab67456c5f6b61d3642b5", "text": "Although three-phase permanent magnet (PM) motors are quite common in industry, multi-phase PM motors are used in special applications where high power and redundancy are required. Multi-phase PM motors offer higher torque/power density than conventional three-phase PM motors. In this paper, a novel multi-phase consequent pole PM (CPPM) synchronous motor is proposed. The constant power–speed range of the proposed motor is quite wide as opposed to conventional PM motors. The design and the detailed finite-element analysis of the proposed nine-phase CPPM motor and performance comparison with a nine-phase surface mounted PM motor are completed to illustrate the benefits of the proposed motor.", "title": "" }, { "docid": "c664918193470b20af2ce2ecf0c8e1c7", "text": "The exceptional electronic properties of graphene, with its charge carriers mimicking relativistic quantum particles and its formidable potential in various applications, have ensured a rapid growth of interest in this new material. We report on electron transport in quantum dot devices carved entirely from graphene. At large sizes (>100 nanometers), they behave as conventional single-electron transistors, exhibiting periodic Coulomb blockade peaks. For quantum dots smaller than 100 nanometers, the peaks become strongly nonperiodic, indicating a major contribution of quantum confinement. Random peak spacing and its statistics are well described by the theory of chaotic neutrino billiards. Short constrictions of only a few nanometers in width remain conductive and reveal a confinement gap of up to 0.5 electron volt, demonstrating the possibility of molecular-scale electronics based on graphene.", "title": "" }, { "docid": "2e6c14ef1fe5c643a19e8c0e759e086b", "text": "Deafblind people have a severe degree of combined visual and auditory impairment resulting in problems with communication, (access to) information and mobility. Moreover, in order to interact with other people, most of them need the constant presence of a caregiver who plays the role of an interpreter with an external world organized for hearing and sighted people. As a result, they usually live behind an invisible wall of silence, in a unique and inexplicable condition of isolation.\n In this paper, we describe DB-HAND, an assistive hardware/software system that supports users to autonomously interact with the environment, to establish social relationships and to gain access to information sources without an assistant. DB-HAND consists of an input/output wearable peripheral (a glove equipped with sensors and actuators) that acts as a natural interface since it enables communication using a language that is easily learned by a deafblind: Malossi method. Interaction with DB-HAND is managed by a software environment, whose purpose is to translate text into sequences of tactile stimuli (and vice-versa), to execute commands and to deliver messages to other users. 
It also provides multi-modal feedback on several standard output devices to support interaction with the hearing and the sighted people.", "title": "" }, { "docid": "114492ca2cef179a39b5ad5edbc80de0", "text": "We review early and recent psychological theories of dehumanization and survey the burgeoning empirical literature, focusing on six fundamental questions. First, we examine how people are dehumanized, exploring the range of ways in which perceptions of lesser humanness have been conceptualized and demonstrated. Second, we review who is dehumanized, examining the social targets that have been shown to be denied humanness and commonalities among them. Third, we investigate who dehumanizes, notably the personality, ideological, and other individual differences that increase the propensity to see others as less than human. Fourth, we explore when people dehumanize, focusing on transient situational and motivational factors that promote dehumanizing perceptions. Fifth, we examine the consequences of dehumanization, emphasizing its implications for prosocial and antisocial behavior and for moral judgment. Finally, we ask what can be done to reduce dehumanization. We conclude with a discussion of limitations of current scholarship and directions for future research.", "title": "" }, { "docid": "3394eb51b71e5def4e4637963da347ab", "text": "In this paper we present a model of e-learning suitable for teacher training sessions. The main purpose of our work is to define the components of the educational system which influences the successful adoption of e-learning in the field of education. We also present the factors of the readiness of e-learning mentioned in the literature available and classifies them into the 3 major categories that constitute the components of every organization and consequently that of education. Finally, we present an implementation model of e-learning through the use of virtual private networks, which lends an added value to the realization of e-learning.", "title": "" }, { "docid": "b34216c34f32336db67f76f1c94c255b", "text": "Exploration is still one of the crucial problems in reinforcement learning, especially for agents acting in safety-critical situations. We propose a new directed exploration method, based on a notion of state controlability. Intuitively, if an agent wants to stay safe, it should seek out states where the effects of its actions are easier to predict; we call such states more controllable. Our main contribution is a new notion of controlability, computed directly from temporaldifference errors. Unlike other existing approaches of this type, our method scales linearly with the number of state features, and is directly applicable to function approximation. Our method converges to correct values in the policy evaluation setting. We also demonstrate significantly faster learning when this exploration strategy is used in large control problems.", "title": "" }, { "docid": "76e6c05e41c4e6d3c70c8fedec5c323b", "text": "Commercial light field cameras provide spatial and angular information, but their limited resolution becomes an important problem in practical use. In this letter, we present a novel method for light field image super-resolution (SR) to simultaneously up-sample both the spatial and angular resolutions of a light field image via a deep convolutional neural network. 
We first augment the spatial resolution of each subaperture image by a spatial SR network, then novel views between super-resolved subaperture images are generated by three different angular SR networks according to the novel view locations. We improve both the efficiency of training and the quality of angular SR results by using weight sharing. In addition, we provide a new light field image dataset for training and validating the network. We train our whole network end-to-end, and show state-of-the-art performances on quantitative and qualitative evaluations.", "title": "" }, { "docid": "ae536a72dfba1e7eff57989c3f94ae3e", "text": "Policymakers are often interested in estimating how policy interventions affect the outcomes of those most in need of help. This concern has motivated the practice of disaggregating experimental results by groups constructed on the basis of an index of baseline characteristics that predicts the values of individual outcomes without the treatment. This paper shows that substantial biases may arise in practice if the index is estimated by regressing the outcome variable on baseline characteristics for the full sample of experimental controls. We propose alternative methods that correct this bias and show that they behave well in realistic scenarios.", "title": "" }, { "docid": "7e2bbd260e58d84a4be8b721cdf51244", "text": "Obesity is characterised by altered gut microbiota, low-grade inflammation and increased endocannabinoid (eCB) system tone; however, a clear connection between gut microbiota and eCB signalling has yet to be confirmed. Here, we report that gut microbiota modulate the intestinal eCB system tone, which in turn regulates gut permeability and plasma lipopolysaccharide (LPS) levels. The impact of the increased plasma LPS levels and eCB system tone found in obesity on adipose tissue metabolism (e.g. differentiation and lipogenesis) remains unknown. By interfering with the eCB system using CB(1) agonist and antagonist in lean and obese mouse models, we found that the eCB system controls gut permeability and adipogenesis. We also show that LPS acts as a master switch to control adipose tissue metabolism both in vivo and ex vivo by blocking cannabinoid-driven adipogenesis. These data indicate that gut microbiota determine adipose tissue physiology through LPS-eCB system regulatory loops and may have critical functions in adipose tissue plasticity during obesity.", "title": "" }, { "docid": "dfa62c69b1ab26e7e160100b69794674", "text": "Canonical correlation analysis (CCA) is a well established technique for identifying linear relationships among two variable sets. Kernel CCA (KCCA) is the most notable nonlinear extension but it lacks interpretability and robustness against irrelevant features. The aim of this article is to introduce two nonlinear CCA extensions that rely on the recently proposed Hilbert-Schmidt independence criterion and the centered kernel target alignment. These extensions determine linear projections that provide maximally dependent projected data pairs. The paper demonstrates that the use of linear projections allows removing irrelevant features, whilst extracting combinations of strongly associated features. This is exemplified through a simulation and the analysis of recorded data that are available in the literature.", "title": "" }, { "docid": "f84016570e5f9c7de7a452e88e0edb14", "text": "Requirements of enterprise applications have become much more demanding. 
They require the computation of complex reports on transactional data while thousands of users may read or update records of the same data. The goal of the SAP HANA database is the integration of transactional and analytical workload within the same database management system. To achieve this, a columnar engine exploits modern hardware (multiple CPU cores, large main memory, and caches), compression of database content, maximum parallelization in the database kernel, and database extensions required by enterprise applications, e.g., specialized data structures for hierarchies or support for domain specific languages. In this paper we highlight the architectural concepts employed in the SAP HANA database. We also report on insights gathered with the SAP HANA database in real-world enterprise application scenarios.", "title": "" }, { "docid": "a3f6f2e6415267bb5b9ac92c3c77e872", "text": "In recent times, the use of separable convolutions in deep convolutional neural network architectures has been explored. Several researchers, most notably and have used separable convolutions in their deep architectures and have demonstrated state of the art or close to state of the art performance. However, the underlying mechanism of action of separable convolutions is still not fully understood. Although, their mathematical definition is well understood as a depth-wise convolution followed by a point-wise convolution, “deeper” interpretations (such as the “extreme Inception”) hypothesis have failed to provide a thorough explanation of their efficacy. In this paper, we propose a hybrid interpretation that we believe is a better model for explaining the efficacy of separable convolutions.", "title": "" }, { "docid": "184da4d4589a3a9dc1f339042e6bc674", "text": "Ocular dominance plasticity has long served as a successful model for examining how cortical circuits are shaped by experience. In this paradigm, altered retinal activity caused by unilateral eye-lid closure leads to dramatic shifts in the binocular response properties of neurons in the visual cortex. Much of the recent progress in identifying the cellular and molecular mechanisms underlying ocular dominance plasticity has been achieved by using the mouse as a model system. In this species, monocular deprivation initiated in adulthood also causes robust ocular dominance shifts. Research on ocular dominance plasticity in the mouse is starting to provide insight into which factors mediate and influence cortical plasticity in juvenile and adult animals.", "title": "" }, { "docid": "f87fea9cd76d1545c34f8e813347146e", "text": "In fault detection and isolation, diagnostic test results are commonly used to compute a set of diagnoses, where each diagnosis points at a set of components which might behave abnormally. In distributed systems consisting of multiple control units, the test results in each unit can be used to compute local diagnoses while all test results in the complete system give the global diagnoses. It is an advantage for both repair and fault-tolerant control to have access to the global diagnoses in each unit since these diagnoses represent all test results in all units. However, when the diagnoses, for example, are to be used to repair a unit, only the components that are used by the unit are of interest. The reason for this is that it is only these components that could have caused the abnormal behavior. 
However, the global diagnoses might include components from the complete system and therefore often include components that are superfluous for the unit. Motivated by this observation, a new type of diagnosis is proposed, namely, the condensed diagnosis. Each unit has a unique set of condensed diagnoses which represents the global diagnoses. The benefit of the condensed diagnoses is that they only include components used by the unit while still representing the global diagnoses. The proposed method is applied to an automotive vehicle, and the results from the application study show the benefit of using condensed diagnoses compared to global diagnoses.", "title": "" }, { "docid": "d566e25ed5ff6e479887a350572cadad", "text": "Lorentz reciprocity is a fundamental characteristic of the vast majority of electronic and photonic structures. However, non-reciprocal components such as isolators, circulators and gyrators enable new applications ranging from radio frequencies to optical frequencies, including full-duplex wireless communication and on-chip all-optical information processing. Such components today dominantly rely on the phenomenon of Faraday rotation in magneto-optic materials. However, they are typically bulky, expensive and not suitable for insertion in a conventional integrated circuit. Here we demonstrate magnetic-free linear passive non-reciprocity based on the concept of staggered commutation. Commutation is a form of parametric modulation with very high modulation ratio. We observe that staggered commutation enables time-reversal symmetry breaking within very small dimensions (λ/1,250 × λ/1,250 in our device), resulting in a miniature radio-frequency circulator that exhibits reduced implementation complexity, very low loss, strong non-reciprocity, significantly enhanced linearity and real-time reconfigurability, and is integrated in a conventional complementary metal-oxide-semiconductor integrated circuit for the first time.", "title": "" }, { "docid": "8eafcf061e2b9cda4cd02de9bf9a31d1", "text": "Building upon recent Deep Neural Network architectures, current approaches lying in the intersection of Computer Vision and Natural Language Processing have achieved unprecedented breakthroughs in tasks like automatic captioning or image retrieval. Most of these learning methods, though, rely on large training sets of images associated with human annotations that specifically describe the visual content. In this paper we propose to go a step further and explore the more complex cases where textual descriptions are loosely related to the images. We focus on the particular domain of news articles in which the textual content often expresses connotative and ambiguous relations that are only suggested but not directly inferred from images. We introduce an adaptive CNN architecture that shares most of the structure for multiple tasks including source detection, article illustration and geolocation of articles. Deep Canonical Correlation Analysis is deployed for article illustration, and a new loss function based on Great Circle Distance is proposed for geolocation. Furthermore, we present BreakingNews, a novel dataset with approximately 100K news articles including images, text and captions, and enriched with heterogeneous meta-data (such as GPS coordinates and user comments). 
We show this dataset to be appropriate to explore all aforementioned problems, for which we provide a baseline performance using various Deep Learning architectures, and different representations of the textual and visual features. We report very promising results and bring to light several limitations of current state-of-the-art in this kind of domain, which we hope will help spur progress in the field.", "title": "" }, { "docid": "7d3449a6ea821d214f7d961d4c85c6a4", "text": "Collisions between automated moving equipment and human workers in job sites are one of the main sources of fatalities and accidents during the execution of construction projects. In this paper, we present a methodology to identify and assess project plans in terms of hazards before their execution. Our methodology has the following steps: 1) several potential plans are extracted from an initial activity graph; 2) plans are translated from a high-level activity graph to a discrete-event simulation model; 3) trajectories and safety policies are generated that avoid static and moving obstacles using existing motion planning algorithms; 4) safety scores and risk-based heatmaps are calculated based on the trajectories of moving equipment; and 5) managerial implications are provided to select an acceptable plan with the aid of a sensitivity analysis of different factors (cost, resources, and deadlines) that affect the safety of a plan. Finally, we present illustrative case study examples to demonstrate the usefulness of our model.Note to Practitioners—Currently, construction project planning does not explicitly consider safety due to a lack of automated tools that can identify a plan’s safety level before its execution. This paper proposes an automated construction safety assessment tool which is able to evaluate the alternate construction plans and help to choose considering safety, cost, and deadlines. Our methodology uses discrete-event modeling along with motion planning to simulate the motions of workers and equipment, which account for most of the hazards in construction sites. Our method is capable of generating safe motion trajectories and coordination policies for both humans and machines to minimize the number of collisions. We also provide safety heatmaps as a spatiotemporal visual display of construction site to identify risky zones inside the environment throughout the entire timeline of the project. Additionally, a detailed sensitivity analysis helps to choose among plans in terms of safety, cost, and deadlines.", "title": "" }, { "docid": "c75b309fc89e75cb7b6fa415175aa192", "text": "Tweets have become an increasingly popular source of fresh information. We investigate the task of Nominal Semantic Role Labeling (NSRL) for tweets, which aims to identify predicate-argument structures defined by nominals in tweets. Studies of this task can help fine-grained information extraction and retrieval from tweets. There are two main challenges in this task: 1) The lack of information in a single tweet, rooted in the short and noisy nature of tweets; and 2) recovery of implicit arguments. We propose jointly conducting NSRL on multiple similar tweets using a graphical model, leveraging the redundancy in tweets to tackle these challenges. 
Extensive evaluations on a human annotated data set demonstrate that our method outperforms two baselines with an absolute gain of 2.7% in F", "title": "" },
    { "docid": "1512f35cd69a456a72f981577cfb068b", "text": "Recurrence and progression to higher grade lesions are key biological events and characteristic behaviors in the evolution process of glioma. Malignant astrocytic tumors such as glioblastoma (GBM) are the most lethal intracranial tumors. However, the clinical practicability and significance of molecular parameters for the diagnostic and prognostic prediction of astrocytic tumors is still limited. In this study, we detected ATRX, IDH1-R132H and Ki-67 by immunohistochemistry and observed the association of IDH1-R132H with ATRX and Ki-67 expression. There was a strong association between ATRX loss and IDH1-R132H (p<0.0001). However, high Ki-67 expression was restricted to the tumors that were IDH1-R132H negative (p=0.0129). Patients with IDH1-R132H positive or ATRX loss astrocytic tumors had a longer progression-free survival (p<0.0001, p=0.0044, respectively). High Ki-67 expression was associated with shorter PFS in patients with astrocytic tumors (p=0.002). Then we characterized three prognostic subgroups of astrocytic tumors (referred to as A1, A2 and A3). The new model demonstrated a remarkable separation of the progression interval in the three molecular subgroups, and the distribution of patients' age in the A1-A2-A3 model was also significantly different. This model will aid in predicting the overall survival and progression time of patients with astrocytic tumors.", "title": "" } ]
scidocsrr
ab9d3b2d479121643c7f690057cbb60a
Sentiment Analysis in Social Media Texts
[ { "docid": "52a5f4c15c1992602b8fe21270582cc6", "text": "This paper proposes a new algorithm for training support vector machines: Sequential Minimal Optimization, or SMO. Training a support vector machine requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possible QP problems. These small QP problems are solved analytically, which avoids using a time-consuming numerical QP optimization as an inner loop. The amount of memory required for SMO is linear in the training set size, which allows SMO to handle very large training sets. Because matrix computation is avoided, SMO scales somewhere between linear and quadratic in the training set size for various test problems, while the standard chunking SVM algorithm scales somewhere between linear and cubic in the training set size. SMO’s computation time is dominated by SVM evaluation, hence SMO is fastest for linear SVMs and sparse data sets. On realworld sparse data sets, SMO can be more than 1000 times faster than the chunking algorithm.", "title": "" }, { "docid": "4ef6adf0021e85d9bf94079d776d686d", "text": "Recent years have brought a significant growth in the volume of research in sentiment analysis, mostly on highly subjective text types (movie or product reviews). The main difference these texts have with news articles is that their target is clearly defined and unique across the text. Following different annotation efforts and the analysis of the issues encountered, we realised that news opinion mining is different from that of other text types. We identified three subtasks that need to be addressed: definition of the target; separation of the good and bad news content from the good and bad sentiment expressed on the target; and analysis of clearly marked opinion that is expressed explicitly, not needing interpretation or the use of world knowledge. Furthermore, we distinguish three different possible views on newspaper articles – author, reader and text, which have to be addressed differently at the time of analysing sentiment. Given these definitions, we present work on mining opinions about entities in English language news, in which (a) we test the relative suitability of various sentiment dictionaries and (b) we attempt to separate positive or negative opinion from good or bad news. In the experiments described here, we tested whether or not subject domain-defining vocabulary should be ignored. Results showed that this idea is more appropriate in the context of news opinion mining and that the approaches taking this into consideration produce a better performance.", "title": "" } ]
[ { "docid": "0b117f379a32b0ba4383c71a692405c8", "text": "Today’s educational policies are largely devoted to fostering the development and implementation of computer applications in education. This paper analyses the skills and competences needed for the knowledgebased society and reveals the role and impact of using computer applications to the teaching and learning processes. Also, the aim of this paper is to reveal the outcomes of a study conducted in order to determine the impact of using computer applications in teaching and learning Management and to propose new opportunities for the process improvement. The findings of this study related to the teachers’ and students’ perceptions about using computer applications for teaching and learning could open further researches on computer applications in education and their educational and economic implications.", "title": "" }, { "docid": "656baf66e6dd638d9f48ea621593bac3", "text": "Recent evidence suggests that a particular gut microbial community may favour occurrence of the metabolic diseases. Recently, we reported that high-fat (HF) feeding was associated with higher endotoxaemia and lower Bifidobacterium species (spp.) caecal content in mice. We therefore tested whether restoration of the quantity of caecal Bifidobacterium spp. could modulate metabolic endotoxaemia, the inflammatory tone and the development of diabetes. Since bifidobacteria have been reported to reduce intestinal endotoxin levels and improve mucosal barrier function, we specifically increased the gut bifidobacterial content of HF-diet-fed mice through the use of a prebiotic (oligofructose [OFS]). Compared with normal chow-fed control mice, HF feeding significantly reduced intestinal Gram-negative and Gram-positive bacteria including levels of bifidobacteria, a dominant member of the intestinal microbiota, which is seen as physiologically positive. As expected, HF-OFS-fed mice had totally restored quantities of bifidobacteria. HF-feeding significantly increased endotoxaemia, which was normalised to control levels in HF-OFS-treated mice. Multiple-correlation analyses showed that endotoxaemia significantly and negatively correlated with Bifidobacterium spp., but no relationship was seen between endotoxaemia and any other bacterial group. Finally, in HF-OFS-treated-mice, Bifidobacterium spp. significantly and positively correlated with improved glucose tolerance, glucose-induced insulin secretion and normalised inflammatory tone (decreased endotoxaemia, plasma and adipose tissue proinflammatory cytokines). Together, these findings suggest that the gut microbiota contribute towards the pathophysiological regulation of endotoxaemia and set the tone of inflammation for occurrence of diabetes and/or obesity. Thus, it would be useful to develop specific strategies for modifying gut microbiota in favour of bifidobacteria to prevent the deleterious effect of HF-diet-induced metabolic diseases.", "title": "" }, { "docid": "b5fea029d64084089de8e17ae9debffc", "text": "While there has been increasing interest in the task of describing video with natural language, current computer vision algorithms are still severely limited in terms of the variability and complexity of the videos and their associated language that they can recognize. This is in part due to the simplicity of current benchmarks, which mostly focus on specific fine-grained domains with limited videos and simple descriptions. 
While researchers have provided several benchmark datasets for image captioning, we are not aware of any large-scale video description dataset with comprehensive categories yet diverse video content. In this paper we present MSR-VTT (standing for \"MSRVideo to Text\") which is a new large-scale video benchmark for video understanding, especially the emerging task of translating video to text. This is achieved by collecting 257 popular queries from a commercial video search engine, with 118 videos for each query. In its current version, MSR-VTT provides 10K web video clips with 41.2 hours and 200K clip-sentence pairs in total, covering the most comprehensive categories and diverse visual content, and representing the largest dataset in terms of sentence and vocabulary. Each clip is annotated with about 20 natural sentences by 1,327 AMT workers. We present a detailed analysis of MSR-VTT in comparison to a complete set of existing datasets, together with a summarization of different state-of-the-art video-to-text approaches. We also provide an extensive evaluation of these approaches on this dataset, showing that the hybrid Recurrent Neural Networkbased approach, which combines single-frame and motion representations with soft-attention pooling strategy, yields the best generalization capability on MSR-VTT.", "title": "" }, { "docid": "c089e788b5cfda6c4a7f518af668bc3a", "text": "The selection of hyper-parameters is critical in Deep Learning. Because of the long training time of complex models and the availability of compute resources in the cloud, “one-shot” optimization schemes – where the sets of hyper-parameters are selected in advance (e.g. on a grid or in a random manner) and the training is executed in parallel – are commonly used. [1] show that grid search is sub-optimal, especially when only a few critical parameters matter, and suggest to use random search instead. Yet, random search can be “unlucky” and produce sets of values that leave some part of the domain unexplored. Quasi-random methods, such as Low Discrepancy Sequences (LDS) avoid these issues. We show that such methods have theoretical properties that make them appealing for performing hyperparameter search, and demonstrate that, when applied to the selection of hyperparameters of complex Deep Learning models (such as state-of-the-art LSTM language models and image classification models), they yield suitable hyperparameters values with much fewer runs than random search. We propose a particularly simple LDS method which can be used as a drop-in replacement for grid/random search in any Deep Learning pipeline, both as a fully one-shot hyperparameter search or as an initializer in iterative batch optimization.", "title": "" }, { "docid": "d1afaada6bf5927d9676cee61d3a1d49", "text": "t-Closeness is a privacy model recently defined for data anonymization. A data set is said to satisfy t-closeness if, for each group of records sharing a combination of key attributes, the distance between the distribution of a confidential attribute in the group and the distribution of the attribute in the entire data set is no more than a threshold t. Here, we define a privacy measure in terms of information theory, similar to t-closeness. 
Then, we use the tools of that theory to show that our privacy measure can be achieved by the postrandomization method (PRAM) for masking in the discrete case, and by a form of noise addition in the general case.", "title": "" }, { "docid": "af6f5ef41a3737975893f95796558900", "text": "In this work, we propose a multi-task convolutional neural network learning approach that can simultaneously perform iris localization and presentation attack detection (PAD). The proposed multi-task PAD (MT-PAD) is inspired by an object detection method which directly regresses the parameters of the iris bounding box and computes the probability of presentation attack from the input ocular image. Experiments involving both intra-sensor and cross-sensor scenarios suggest that the proposed method can achieve state-of-the-art results on publicly available datasets. To the best of our knowledge, this is the first work that performs iris detection and iris presentation attack detection simultaneously.", "title": "" }, { "docid": "e7586aea8381245cfa07239158d115af", "text": "The interpolation, prediction, and feature analysis of fine-gained air quality are three important topics in the area of urban air computing. The solutions to these topics can provide extremely useful information to support air pollution control, and consequently generate great societal and technical impacts. Most of the existing work solves the three problems separately by different models. In this paper, we propose a general and effective approach to solve the three problems in one model called the Deep Air Learning (DAL). The main idea of DAL lies in embedding feature selection and semi-supervised learning in different layers of the deep learning network. The proposed approach utilizes the information pertaining to the unlabeled spatio-temporal data to improve the performance of the interpolation and the prediction, and performs feature selection and association analysis to reveal the main relevant features to the variation of the air quality. We evaluate our approach with extensive experiments based on real data sources obtained in Beijing, China. Experiments show that DAL is superior to the peer models from the recent literature when solving the topics of interpolation, prediction, and feature analysis of fine-gained air quality.", "title": "" }, { "docid": "7f75e0b789e7b2bbaa47c7fa06efb852", "text": "A significant increase in the capability for controlling motion dynamics in key frame animation is achieved through skeleton control. This technique allows an animator to develop a complex motion sequence by animating a stick figure representation of an image. This control sequence is then used to drive an image sequence through the same movement. The simplicity of the stick figure image encourages a high level of interaction during the design stage. Its compatibility with the basic key frame animation technique permits skeleton control to be applied selectively to only those components of a composite image sequence that require enhancement.", "title": "" }, { "docid": "e8a2ef4ded8ba4fa2e36588015c2c61a", "text": "The interdisciplinary character of Bio-Inspired Design (BID) has resulted in a plethora of approaches and methods that propose different types of design processes. Although sustainable, creative and complex system design processes are not mutually incompatible they do focus on different aspects of design. 
This research defines areas of focus for the development of computational tools to support biomimetics, technical problem solving through abstraction, transfer and application of knowledge from biological models. An overview of analysed literature is provided as well as a qualitative analysis of the main themes found in BID literature. The result is a set of recommendations for further research on Computer-Aided Biomimetics (CAB).", "title": "" }, { "docid": "d4ac52a52e780184359289ecb41e321e", "text": "Interleaving is an increasingly popular technique for evaluating information retrieval systems based on implicit user feedback. While a number of isolated studies have analyzed how this technique agrees with conventional offline evaluation approaches and other online techniques, a complete picture of its efficiency and effectiveness is still lacking. In this paper we extend and combine the body of empirical evidence regarding interleaving, and provide a comprehensive analysis of interleaving using data from two major commercial search engines and a retrieval system for scientific literature. In particular, we analyze the agreement of interleaving with manual relevance judgments and observational implicit feedback measures, estimate the statistical efficiency of interleaving, and explore the relative performance of different interleaving variants. We also show how to learn improved credit-assignment functions for clicks that further increase the sensitivity of interleaving.", "title": "" }, { "docid": "2ec973e31082953bd743dc659f417645", "text": "Object detection, including objectness detection (OD), salient object detection (SOD), and category-specific object detection (COD), is one of the most fundamental yet challenging problems in the computer vision community. Over the last several decades, great efforts have been made by researchers to tackle this problem, due to its broad range of applications for other computer vision tasks such as activity or event recognition, content-based image retrieval and scene understanding, etc. While numerous methods have been presented in recent years, a comprehensive review for the proposed high-quality object detection techniques, especially for those based on advanced deep-learning techniques, is still lacking. To this end, this article delves into the recent progress in this research field, including 1) definitions, motivations, and tasks of each subdirection; 2) modern techniques and essential research trends; 3) benchmark data sets and evaluation metrics; and 4) comparisons and analysis of the experimental results. More importantly, we will reveal the underlying relationship among OD, SOD, and COD and discuss in detail some open questions as well as point out several unsolved challenges and promising future works.", "title": "" }, { "docid": "9c38fcfcbfeaf0072e723bd7e1e7d17d", "text": "BACKGROUND\nAllicin (diallylthiosulfinate) is the major volatile- and antimicrobial substance produced by garlic cells upon wounding. We tested the hypothesis that allicin affects membrane function and investigated 1) betanine pigment leakage from beetroot (Beta vulgaris) tissue, 2) the semipermeability of the vacuolar membrane of Rhoeo discolor cells, 3) the electrophysiology of plasmalemma and tonoplast of Chara corallina and 4) electrical conductivity of artificial lipid bilayers.\n\n\nMETHODS\nGarlic juice and chemically synthesized allicin were used and betanine loss into the medium was monitored spectrophotometrically. 
Rhoeo cells were studied microscopically and Chara- and artificial membranes were patch clamped.\n\n\nRESULTS\nBeet cell membranes were approximately 200-fold more sensitive to allicin on a mol-for-mol basis than to dimethyl sulfoxide (DMSO) and approximately 400-fold more sensitive to allicin than to ethanol. Allicin-treated Rhoeo discolor cells lost the ability to plasmolyse in an osmoticum, confirming that their membranes had lost semipermeability after allicin treatment. Furthermore, allicin and garlic juice diluted in artificial pond water caused an immediate strong depolarization, and a decrease in membrane resistance at the plasmalemma of Chara, and caused pore formation in the tonoplast and artificial lipid bilayers.\n\n\nCONCLUSIONS\nAllicin increases the permeability of membranes.\n\n\nGENERAL SIGNIFICANCE\nSince garlic is a common foodstuff the physiological effects of its constituents are important. Allicin's ability to permeabilize cell membranes may contribute to its antimicrobial activity independently of its activity as a thiol reagent.", "title": "" }, { "docid": "f2fa4fa43c21e8c65c752d6ad1d39d06", "text": "Singing voice synthesis techniques have been proposed based on a hidden Markov model (HMM). In these approaches, the spectrum, excitation, and duration of singing voices are simultaneously modeled with context-dependent HMMs and waveforms are generated from the HMMs themselves. However, the quality of the synthesized singing voices still has not reached that of natural singing voices. Deep neural networks (DNNs) have largely improved on conventional approaches in various research areas including speech recognition, image recognition, speech synthesis, etc. The DNN-based text-to-speech (TTS) synthesis can synthesize high quality speech. In the DNN-based TTS system, a DNN is trained to represent the mapping function from contextual features to acoustic features, which are modeled by decision tree-clustered context dependent HMMs in the HMM-based TTS system. In this paper, we propose singing voice synthesis based on a DNN and evaluate its effectiveness. The relationship between the musical score and its acoustic features is modeled in frames by a DNN. For the sparseness of pitch context in a database, a musical-note-level pitch normalization and linear-interpolation techniques are used to prepare the excitation features. Subjective experimental results show that the DNN-based system outperformed the HMM-based system in terms of naturalness.", "title": "" }, { "docid": "dbc463f080610e2ec1cf1841772d1d92", "text": "Malware is one of the greatest and most rapidly growing threats to the digital world. Traditional signature-based detection is no longer adequate to detect new variants and highly targeted malware. Furthermore, dynamic detection is often circumvented with anti-VM and/or anti-debugger techniques. Recently heuristic approaches have been explored to enhance detection accuracy while maintaining the generality of a model to detect unknown malware samples. In this paper, we investigate three feature types extracted from memory images - registry activity, imported libraries, and API function calls. After evaluating the importance of the different features, different machine learning techniques are implemented to compare performances of malware detection using the three feature types, respectively. 
The highest accuracy achieved was 96%, and was reached using a support vector machine model, fitted on data extracted from registry activity.", "title": "" }, { "docid": "23d42976a9651203e0d4dd1c332234ae", "text": "BACKGROUND\nStatistics play a critical role in biological and clinical research. However, most reports of scientific results in the published literature make it difficult for the reader to reproduce the statistical analyses performed in achieving those results because they provide inadequate documentation of the statistical tests and algorithms applied. The Ontology of Biological and Clinical Statistics (OBCS) is put forward here as a step towards solving this problem.\n\n\nRESULTS\nThe terms in OBCS including 'data collection', 'data transformation in statistics', 'data visualization', 'statistical data analysis', and 'drawing a conclusion based on data', cover the major types of statistical processes used in basic biological research and clinical outcome studies. OBCS is aligned with the Basic Formal Ontology (BFO) and extends the Ontology of Biomedical Investigations (OBI), an OBO (Open Biological and Biomedical Ontologies) Foundry ontology supported by over 20 research communities. Currently, OBCS comprehends 878 terms, representing 20 BFO classes, 403 OBI classes, 229 OBCS specific classes, and 122 classes imported from ten other OBO ontologies. We discuss two examples illustrating how the ontology is being applied. In the first (biological) use case, we describe how OBCS was applied to represent the high throughput microarray data analysis of immunological transcriptional profiles in human subjects vaccinated with an influenza vaccine. In the second (clinical outcomes) use case, we applied OBCS to represent the processing of electronic health care data to determine the associations between hospital staffing levels and patient mortality. Our case studies were designed to show how OBCS can be used for the consistent representation of statistical analysis pipelines under two different research paradigms. Other ongoing projects using OBCS for statistical data processing are also discussed. The OBCS source code and documentation are available at: https://github.com/obcs/obcs .\n\n\nCONCLUSIONS\nThe Ontology of Biological and Clinical Statistics (OBCS) is a community-based open source ontology in the domain of biological and clinical statistics. OBCS is a timely ontology that represents statistics-related terms and their relations in a rigorous fashion, facilitates standard data analysis and integration, and supports reproducible biological and clinical research.", "title": "" }, { "docid": "9c5a32c49d3e9eff842f155f99facd08", "text": "Urdu is morphologically rich language with different nature of its characters. Urdu text tokenization and sentence boundary disambiguation is difficult as compared to the language like English. Major hurdle for tokenization is improper use of space between words, where as absence of case discrimination makes the sentence boundary detection a difficult task. In this paper some issues regarding both of these language processing tasks have been identified.", "title": "" }, { "docid": "51fc49d6196702f87e7dae215fa93108", "text": "Automatic classification of cancer lesions in tissues observed using gastroenterology imaging is a non-trivial pattern recognition task involving filtering, segmentation, feature extraction and classification. 
In this paper we measure the impact of a variety of segmentation algorithms (mean shift, normalized cuts, level-sets) on the automatic classification performance of gastric tissue into three classes: cancerous, pre-cancerous and normal. Classification uses a combination of color (hue-saturation histograms) and texture (local binary patterns) features, applied to two distinct imaging modalities: chromoendoscopy and narrow-band imaging. Results show that mean-shift obtains an interesting performance for both scenarios producing low classification degradations (6%), full image classification is highly inaccurate reinforcing the importance of segmentation research for Gastroenterology, and confirm that Patch Index is an interesting measure of the classification potential of small to medium segmented regions.", "title": "" }, { "docid": "db6e3742a0413ad5f44647ab1826b796", "text": "Endometrial stromal sarcoma is a rare tumor and has unique histopathologic features. Most tumors of this kind occur in the uterus; thus, the vagina is an extremely rare site. A 34-year-old woman presented with endometrial stromal sarcoma arising in the vagina. No correlative endometriosis was found. Because of the uncommon location, this tumor was differentiated from other more common neoplasms of the vagina, particularly embryonal rhabdomyosarcoma and other smooth muscle tumors. Although the pathogenesis of endometrial stromal tumors remains controversial, the most common theory of its origin is heterotopic Müllerian tissue such as endometriosis tissue. Primitive cells of the pelvis and retroperitoneum are an alternative possible origin for the tumor if endometriosis is not present. According to the literature, the tumor has a fairly good prognosis compared with other vaginal sarcomas. Surgery combined with adjuvant radiotherapy appears to be an adequate treatment.", "title": "" }, { "docid": "51743d233ec269cfa7e010d2109e10a6", "text": "Stress is a part of every life to varying degrees, but individuals differ in their stress vulnerability. Stress is usefully viewed from a biological perspective; accordingly, it involves activation of neurobiological systems that preserve viability through change or allostasis. Although they are necessary for survival, frequent neurobiological stress responses increase the risk of physical and mental health problems, perhaps particularly when experienced during periods of rapid brain development. Recently, advances in noninvasive measurement techniques have resulted in a burgeoning of human developmental stress research. Here we review the anatomy and physiology of stress responding, discuss the relevant animal literature, and briefly outline what is currently known about the psychobiology of stress in human development, the critical role of social regulation of stress neurobiology, and the importance of individual differences as a lens through which to approach questions about stress experiences during development and child outcomes.", "title": "" }, { "docid": "6ef244a7eb6a5df025e282e1cc5f90aa", "text": "Public infrastructure-as-a-service clouds, such as Amazon EC2 and Microsoft Azure allow arbitrary clients to run virtual machines (VMs) on shared physical infrastructure. This practice of multi-tenancy brings economies of scale, but also introduces the threat of malicious VMs abusing the scheduling of shared resources. Recent works have shown how to mount crossVM side-channel attacks to steal cryptographic secrets. 
The straightforward solution is hard isolation that dedicates hardware to each VM. However, this comes at the cost of reduced efficiency. We investigate the principle of soft isolation: reduce the risk of sharing through better scheduling. With experimental measurements, we show that a minimum run time (MRT) guarantee for VM virtual CPUs that limits the frequency of preemptions can effectively prevent existing Prime+Probe cache-based side-channel attacks. Through experimental measurements, we find that the performance impact of MRT guarantees can be very low, particularly in multi-core settings. Finally, we integrate a simple per-core CPU state cleansing mechanism, a form of hard isolation, into Xen. It provides further protection against side-channel attacks at little cost when used in conjunction with an MRT guarantee.", "title": "" } ]
scidocsrr
414da789ccfd24d93314bce839acafaa
Predicting player churn in Destiny: A Hidden Markov models approach to predicting player departure in a major online game
[ { "docid": "7dfb6a3a619f7062452aa97aaa134c45", "text": "Most companies favour the creation and nurturing of long-term relationships with customers because retaining customers is more profitable than acquiring new ones. Churn prediction is a predictive analytics technique to identify churning customers ahead of their departure and enable customer relationship managers to take action to keep them. This work evaluates the development of an expert system for churn prediction and prevention using a Hidden Markov model (HMM). A HMM is implemented on unique data from a mobile application and its predictive performance is compared to other algorithms that are commonly used for churn prediction: Logistic Regression, Neural Network and Support Vector Machine. Predictive performance of the HMM is not outperformed by the other algorithms. HMM has substantial advantages for use in expert systems though due to low storage and computational requirements and output of highly relevant customer motivational states. Generic session data of the mobile app is used to train and test the models which makes the system very easy to deploy and the findings applicable to the whole ecosystem of mobile apps distributed in Apple's App and Google's Play Store.", "title": "" }, { "docid": "74959e138f7defce9bf7df2198b46a90", "text": "In the game industry, especially for free to play games, player retention and purchases are important issues. There have been several approaches investigated towards predicting them by players' behaviours during game sessions. However, most current methods are only available for specific games because the data representations utilised are usually game specific. This work intends to use frequency of game events as data representations to predict both players' disengagement from game and the decisions of their first purchases. This method is able to provide better generality because events exist in every game and no knowledge of any event but their frequency is needed. In addition, this event frequency based method will also be compared with a recent work by Runge et al. [1] in terms of disengagement prediction.", "title": "" } ]
[ { "docid": "625c5c89b9f0001a3eed1ec6fb498c23", "text": "About a 100 years ago, the Drosophila white mutant marked the birth of Drosophila genetics. The white gene turned out to encode the first well studied ABC transporter in arthropods. The ABC gene family is now recognized as one of the largest transporter families in all kingdoms of life. The majority of ABC proteins function as primary-active transporters that bind and hydrolyze ATP while transporting a large diversity of substrates across lipid membranes. Although extremely well studied in vertebrates for their role in drug resistance, less is known about the role of this family in the transport of endogenous and exogenous substances in arthropods. The ABC families of five insect species, a crustacean and a chelicerate have been annotated in some detail. We conducted a thorough phylogenetic analysis of the seven arthropod and human ABC protein subfamilies, to infer orthologous relationships that might suggest conserved function. Most orthologous relationships were found in the ABCB half transporter, ABCD, ABCE and ABCF subfamilies, but specific expansions within species and lineages are frequently observed and discussed. We next surveyed the role of ABC transporters in the transport of xenobiotics/plant allelochemicals and their involvement in insecticide resistance. The involvement of ABC transporters in xenobiotic resistance in arthropods is historically not well documented, but an increasing number of studies using unbiased differential gene expression analysis now points to their importance. We give an overview of methods that can be used to link ABC transporters to resistance. ABC proteins have also recently been implicated in the mode of action and resistance to Bt toxins in Lepidoptera. Given the enormous interest in Bt toxicology in transgenic crops, such findings will provide an impetus to further reveal the role of ABC transporters in arthropods. 2014 The Authors. Published by Elsevier Ltd. Open access under CC BY-NC-ND license.", "title": "" }, { "docid": "fb11b937a3c07fd4b76cda1ed1eadc07", "text": "Depth information plays an important role in a variety of applications, including manufacturing, medical imaging, computer vision, graphics, and virtual/augmented reality (VR/AR). Depth sensing has thus attracted sustained attention from both academia and industry communities for decades. Mainstream depth cameras can be divided into three categories: stereo, time of flight (ToF), and structured light. Stereo cameras require no active illumination and can be used outdoors, but they are fragile for homogeneous surfaces. Recently, off-the-shelf light field cameras have demonstrated improved depth estimation capability with a multiview stereo configuration. ToF cameras operate at a high frame rate and fit time-critical scenarios well, but they are susceptible to noise and limited to low resolution [3]. Structured light cameras can produce high-resolution, high-accuracy depth, provided that a number of patterns are sequentially used. Due to its promising and reliable performance, the structured light approach has been widely adopted for three-dimensional (3-D) scanning purposes. 
However, achieving real-time depth with structured light either requires highspeed (and thus expensive) hardware or sacrifices depth resolution and accuracy by using a single pattern instead.", "title": "" }, { "docid": "1819af3b3d96c182b7ea8a0e89ba5bbe", "text": "The fingerprint is one of the oldest and most widely used biometric modality for person identification. Existing automatic fingerprint matching systems perform well when the same sensor is used for both enrollment and verification (regular matching). However, their performance significantly deteriorates when different sensors are used (cross-matching, fingerprint sensor interoperability problem). We propose an automatic fingerprint verification method to solve this problem. It was observed that the discriminative characteristics among fingerprints captured with sensors of different technology and interaction types are ridge orientations, minutiae, and local multi-scale ridge structures around minutiae. To encode this information, we propose two minutiae-based descriptors: histograms of gradients obtained using a bank of Gabor filters and binary gradient pattern descriptors, which encode multi-scale local ridge patterns around minutiae. In addition, an orientation descriptor is proposed, which compensates for the spurious and missing minutiae problem. The scores from the three descriptors are fused using a weighted sum rule, which scales each score according to its verification performance. Extensive experiments were conducted using two public domain benchmark databases (FingerPass and Multi-Sensor Optical and Latent Fingerprint) to show the effectiveness of the proposed system. The results showed that the proposed system significantly outperforms the state-of-the-art methods based on minutia cylinder-code (MCC), MCC with scale, VeriFinger—a commercial SDK, and a thin-plate spline model.", "title": "" }, { "docid": "58164220c13b39eb5d2ca48139d45401", "text": "There is general agreement that structural similarity — a match in relational structure — is crucial in analogical processing. However, theories differ in their definitions of structural similarity: in particular, in whether there must be conceptual similarity between the relations in the two domains or whether parallel graph structure is sufficient. In two studies, we demonstrate, first, that people draw analogical correspondences based on matches in conceptual relations, rather than on purely structural graph matches; and, second, that people draw analogical inferences between passages that have matching conceptual relations, but not between passages with purely structural graph matches.", "title": "" }, { "docid": "a0eae0ebbec4dc6ee339b25286a8492a", "text": "We present a visual recognition system for fine-grained visual categorization. The system is composed of a human and a machine working together and combines the complementary strengths of computer vision algorithms and (non-expert) human users. The human users provide two heterogeneous forms of information object part clicks and answers to multiple choice questions. The machine intelligently selects the most informative question to pose to the user in order to identify the object class as quickly as possible. By leveraging computer vision and analyzing the user responses, the overall amount of human effort required, measured in seconds, is minimized. 
Our formalism shows how to incorporate many different types of computer vision algorithms into a human-in-the-loop framework, including standard multiclass methods, part-based methods, and localized multiclass and attribute methods. We explore our ideas by building a field guide for bird identification. The experimental results demonstrate the strength of combining ignorant humans with poor-sighted machines the hybrid system achieves quick and accurate bird identification on a dataset containing 200 bird species.", "title": "" }, { "docid": "7ea3d3002506e0ea6f91f4bdab09c2d5", "text": "We propose a novel and robust computational framework for automatic detection of deformed 2D wallpaper patterns in real-world images. The theory of 2D crystallographic groups provides a sound and natural correspondence between the underlying lattice of a deformed wallpaper pattern and a degree-4 graphical model. We start the discovery process with unsupervised clustering of interest points and voting for consistent lattice unit proposals. The proposed lattice basis vectors and pattern element contribute to the pairwise compatibility and joint compatibility (observation model) functions in a Markov random field (MRF). Thus, we formulate the 2D lattice detection as a spatial, multitarget tracking problem, solved within an MRF framework using a novel and efficient mean-shift belief propagation (MSBP) method. Iterative detection and growth of the deformed lattice are interleaved with regularized thin-plate spline (TPS) warping, which rectifies the current deformed lattice into a regular one to ensure stability of the MRF model in the next round of lattice recovery. We provide quantitative comparisons of our proposed method with existing algorithms on a diverse set of 261 real-world photos to demonstrate significant advances in accuracy and speed over the state of the art in automatic discovery of regularity in real images.", "title": "" }, { "docid": "ef2e7ca89c1b52b4a462a2d38b60fa02", "text": "Candidate phylum OD1 bacteria (also referred to as Parcubacteria) have been identified in a broad range of anoxic environments through community survey analysis. Although none of these species have been isolated in the laboratory, several genome sequences have been reconstructed from metagenomic sequence data and single-cell sequencing. The organisms have small (generally <1 Mb) genomes with severely reduced metabolic capabilities. We have reconstructed 8 partial to near-complete OD1 genomes from oxic groundwater samples, and compared them against existing genomic data. The conserved core gene set comprises 202 genes, or ~28% of the genomic complement. \"Housekeeping\" genes and genes for biosynthesis of peptidoglycan and Type IV pilus production are conserved. Gene sets for biosynthesis of cofactors, amino acids, nucleotides, and fatty acids are absent entirely or greatly reduced. The only aspects of energy metabolism conserved are the non-oxidative branch of the pentose-phosphate shunt and central glycolysis. These organisms also lack some activities conserved in almost all other known bacterial genomes, including signal recognition particle, pseudouridine synthase A, and FAD synthase. Pan-genome analysis indicates a broad genotypic diversity and perhaps a highly fluid gene complement, indicating historical adaptation to a wide range of growth environments and a high degree of specialization. The genomes were examined for signatures suggesting either a free-living, streamlined lifestyle, or a symbiotic lifestyle. 
The lack of biosynthetic capabilities and DNA repair, along with the presence of potential attachment and adhesion proteins suggest that the Parcubacteria are ectosymbionts or parasites of other organisms. The wide diversity of genes that potentially mediate cell-cell contact suggests a broad range of partner/prey organisms across the phylum.", "title": "" }, { "docid": "316dfc9683a98e39a08481622acccf1a", "text": "A wearable probe-fed microstrip antenna manufactured from conductive textile fabric designed for multiple Industrial-Scientific-Medical (ISM) band communications is presented in this paper. The proposed antenna operating at 2.450 GHz, 4.725 Hz and 5.800 GHz consists of a patch and ground plane made of silver fabric mounted on a substrate of flexible low-permittivity foam. For verification, a reference prototype is manufactured from copper. The measurement of both antennas demonstrates the expected resonances, with some unexpected loss especially in the higher frequency range. Simulation results for the antenna in various bending condition indicate the robustness of the design with deviations of resonant frequencies in an acceptable range.", "title": "" }, { "docid": "1871c42e7656c7cef2a7fb042e2f5582", "text": "The emergence and ubiquity of online social networks have enriched web data with evolving interactions and communities both at mega-scale and in real-time. This data offers an unprecedented opportunity for studying the interaction between society and disease outbreaks. The challenge we describe in this data paper is how to extract and leverage epidemic outbreak insights from massive amounts of social media data and how this exercise can benefit medical professionals, patients, and policymakers alike. We attempt to prepare the research community for this challenge with four datasets. Publishing the four datasets will commoditize the data infrastructure to allow a higher and more efficient focal point for the research community.", "title": "" }, { "docid": "1ab272c668743c0873081160571aa462", "text": "Monodisperse hollow and core-shell calcium alginate microcapsules are successfully prepared via internal gelation in microfluidic-generated double emulsions. Microfluidic emulsification is introduced to generate monodisperse oil-in-water-in-oil (O/W/O) double emulsion templates, which contain Na-alginate, CaCO3 nanoparticles, and photoacid generator in the middle aqueous phase, for synthesizing Ca-alginate microcapsules. The internal gelation of the aqueous middle layer of O/W/O double emulsions is induced by crosslinking alginate polymers with Ca(2+) ions that are released from CaCO3 nanoparticles upon UV exposure of the photoacid generator. The as-prepared hollow and core-shell calcium alginate microcapsules are highly monodisperse and spherical in water. Model proteins Bovine serum albumin (BSA) molecules can be encapsulated into the Ca-alginate microcapsules after the capsule preparation, which demonstrates an alternative route for loading active drugs or chemicals into carriers to avoid the inactivation during the carrier preparation. The proposed technique in this study provides an efficient approach for synthesis of monodisperse hollow or core-shell calcium alginate microcapsules with large cavity or encapsulated lipophilic drugs, chemicals, and nutrients.", "title": "" }, { "docid": "402bf66ab180944e8f3068bef64fbc77", "text": "EvolView is a web application for visualizing, annotating and managing phylogenetic trees. 
First, EvolView is a phylogenetic tree viewer and customization tool; it visualizes trees in various formats, customizes them through built-in functions that can link information from external datasets, and exports the customized results to publication-ready figures. Second, EvolView is a tree and dataset management tool: users can easily organize related trees into distinct projects, add new datasets to trees and edit and manage existing trees and datasets. To make EvolView easy to use, it is equipped with an intuitive user interface. With a free account, users can save data and manipulations on the EvolView server. EvolView is freely available at: http://www.evolgenius.info/evolview.html.", "title": "" }, { "docid": "a34e04069b232309b39994d21bb0f89a", "text": "In the near future, i.e., beyond 4G, some of the prime objectives or demands that need to be addressed are increased capacity, improved data rate, decreased latency, and better quality of service. To meet these demands, drastic improvements need to be made in cellular network architecture. This paper presents the results of a detailed survey on the fifth generation (5G) cellular network architecture and some of the key emerging technologies that are helpful in improving the architecture and meeting the demands of users. In this detailed survey, the prime focus is on the 5G cellular network architecture, massive multiple input multiple output technology, and device-to-device communication (D2D). Along with this, some of the emerging technologies that are addressed in this paper include interference management, spectrum sharing with cognitive radio, ultra-dense networks, multi-radio access technology association, full duplex radios, millimeter wave solutions for 5G cellular networks, and cloud technologies for 5G radio access networks and software defined networks. In this paper, a general probable 5G cellular network architecture is proposed, which shows that D2D, small cell access points, network cloud, and the Internet of Things can be a part of 5G cellular network architecture. A detailed survey is included regarding current research projects being conducted in different countries by research groups and institutions that are working on 5G technologies.", "title": "" }, { "docid": "43977abf063f974689065fe29945297a", "text": "In this short paper we propose several objective and subjective metrics and present a comparison between two “commodity” VR systems: HTC Vive and Oculus Rift. Objective assessment focuses on frame rate, impact of ambiance light, and impact of sensors' line of sight obstruction. Subjective study aims at evaluating and comparing the pick-and-place task performance in a virtual world. We collected user ratings of overall quality, perceived ease of use, and perceived intuitiveness, with results indicating that HTC Vive slightly outperforms the Oculus Rift for the pick-and-place task under test.", "title": "" }, { "docid": "93d80e2015de513a689a41f33d74c45d", "text": "A horizontally polarized omnidirectional antenna with enhanced impedance bandwidth is presented in this letter. The proposed antenna consists of a feeding network, four printed dipole elements with etched slots, parasitic strips, and director elements. Four identically curved and printed dipole elements are placed in a square array and fed by a feeding network with uniform magnitude and phase; thus, the proposed antenna can achieve an omnidirectional radiation. 
To enhance the impedance bandwidth, parasitic strips and etched slots are introduced to produce additional lower and upper resonant frequencies, respectively. By utilizing four director elements, the gain variation in the horizontal plane can be improved, especially for the upper frequency band. With the structure, a reduced size of <inline-formula> <tex-math notation=\"LaTeX\">$0.63\\,\\lambda _{L} \\times 0.63\\,\\lambda _{L} \\times 0.01\\,\\lambda _{L}$</tex-math> </inline-formula> (<inline-formula><tex-math notation=\"LaTeX\">$\\lambda _{L}$</tex-math></inline-formula> is the free-space wavelength at the lowest frequency) is obtained. The proposed antenna is designed and fabricated. Measurement results reveal that the proposed antenna can provide an impedance bandwidth of 84.2% (1.58–3.88 GHz). Additionally, the gain variation in the horizontal plane is less than 1.5 dB over the frequency band 1.58–3.50 GHz, and increased to 2.2 dB at 3.80 GHz. Within the impedance bandwidth, the cross-polarization level is less than –23 dB in the horizontal plane.", "title": "" }, { "docid": "31bb74eb5b217909d46782430375c5be", "text": "Recent studies of upper limb movements have provided insights into the computations, mechanisms, and taxonomy of human sensorimotor learning. Motor tasks differ with respect to how they weight different learning processes. These include adaptation, an internal-model based process that reduces sensory-prediction errors in order to return performance to pre-perturbation levels, use-dependent plasticity, and operant reinforcement. Visuomotor rotation and force-field tasks impose systematic errors and thereby emphasize adaptation. In skill learning tasks, which for the most part do not involve a perturbation, improved performance is manifest as reduced motor variability and probably depends less on adaptation and more on success-based exploration. Explicit awareness and declarative memory contribute, to varying degrees, to motor learning. The modularity of motor learning processes maps, at least to some extent, onto distinct brain structures.", "title": "" }, { "docid": "c22d64723df5233bfa5e41b8eb10e1d5", "text": "State-of-the-art millimeter wave (MMW) multiple-input, multiple-output (MIMO) frequency-modulated continuous-wave (FMCW) radars allow high precision direction of arrival (DOA) estimation with an optimized antenna aperture size [1]. Typically, these systems operate using a single polarization. Fully polarimetric radars on the other hand are used to obtain the polarimetric scattering matrix (S-matrix) and extract polari-metric scattering information that otherwise remains concealed [2]. Combining both approaches by assembly of a dual-polarized waveguide antenna and a 77 GHz MIMO FMCW radar system results in the fully polarimetric MIMO radar system presented in this paper. By applying a MIMO-adapted version of the isolated antenna calibration technique (IACT) from [3], the radar system is calibrated and laboratory measurements of different canonical objects such as spheres, plates, dihedrals and trihedrals are performed. A statistical evaluation of these measurement results demonstrates the usability of the approach and shows that basic polarimetric scattering phenomena are reliably identified.", "title": "" }, { "docid": "0a3598013927cb5728362f5f6e0c321d", "text": "Some postfire annuals with dormant seeds use heat or chemical cues from charred wood to synchronize their germination with the postfire environment. 
We report that wood smoke and polar extracts of wood smoke, but not the ash of burned wood, contain potent cue(s) that stimulate germination in the postfire annual plant,Nicotiana attenuata. We examined the responses of seeds from six populations of plants from southwest Utah to extracts of smoke and found the proportion of viable seeds that germinated in the presence of smoke cues to vary between populations but to be consistent between generations. With the most dormant genotypes, we examine three mechanisms by which smoke-derived chemical cues may stimulate germination (chemical scarification of the seed coat and nutritive- and signal-mediated stimulation of germination) and report that the response is consistent with the signal-mediated mechanism. The germination cue(s) found in smoke are produced by the burning of hay, hardwood branches, leaves, and, to a lesser degree, cellulose. Moreover, the cues are found in the common food condiment, “liquid smoke,” and we find no significant differences between brands. With a bioassay-driven fractionation of liquid smoke, we identified 71 compounds in active fractions by GC-MS and AA spectrometry. However, when these compounds were tested in pure form or in combinations that mimicked the composition of active fractions over a range of concentrations, they failed to stimulate germination to the same degree that smoke fractions did. Moreover, enzymatic oxidation of some of these compounds also failed to stimulate germination. In addition, we tested 43 additional compounds also reported from smoke, 85 compounds that were structurally similar to those reported from smoke and 34 compounds reported to influence germination in other species. Of the 233 compounds tested, 16 proved to inhibit germination at the concentrations tested, and none reproduced the activity of wood smoke. By thermally desorbing smoke produced by cellulose combustions that was trapped on Chromosorb 101, we demonstrate that the cue is desorbed between 125 and 150°C. We estimate that the germination cues are active at concentrations of less than 1 pg/seed and, due to their chromatographic behavior, infer that a number of different chemical structures are active. In separate experiments, we demonstrate that cues remain active for at least 53 days in soil under greenhouse conditions and that the application of aqucous extracts of smoke to soil containing seeds results in dramatic increases in germination of artificial seed banks. Hence, although the chemical nature of the germination cue remains elusive, the stability of the germination cues, their water-solubility, and their activity in low concentrations suggest that these cues could serve as powerful tools for the examination of dormant seed banks and the selective factors thought to be important in the evolution of postfire plant communities.", "title": "" }, { "docid": "ac5f518cbd783060af1cf6700b994469", "text": "Scalable evolutionary computation has. become an intensively studied research topic in recent years. The issue of scalability is predominant in any field of algorithmic design, but it became particularly relevant for the design of competent genetic algorithms once the scalability problems of simple genetic algorithms were understood. Here we present some of the work that has aided in getting a clear insight in the scalability problems of simple genetic algorithms. Particularly, we discuss the important issue of building block mixing. 
We show how the need for mixing places a boundary in the GA parameter space that, together with the boundary from the schema theorem, delimits the region where the GA converges reliably to the optimum in problems of bounded difficulty. This region shrinks rapidly with increasing problem size unless the building blocks are tightly linked in the problem coding structure. In addition, we look at how straightforward extensions of the simple genetic algorithmnamely elitism, niching, and restricted mating are not significantly improving the scalability problems.", "title": "" }, { "docid": "1778e5f82da9e90cbddfa498d68e461e", "text": "Today’s business environment is characterized by fast and unexpected changes, many of which are driven by technological advancement. In such environment, the ability to respond effectively and adapt to the new requirements is not only desirable but essential to survive. Comprehensive and quick understanding of intricacies of market changes facilitates firm’s faster and better response. Two concepts contribute to the success of this scenario; organizational agility and business intelligence (BI). As of today, despite BI’s capabilities to foster organizational agility and consequently improve organizational performance, a clear link between BI and organizational agility has not been established. In this paper we argue that BI solutions have the potential to be facilitators for achieving agility. We aim at showing how BI capabilities can help achieve agility at operational, portfolio, and strategic levels.", "title": "" }, { "docid": "89460f94140b9471b120674ddd904948", "text": "Cross-disciplinary research on collective intelligence considers that groups, like individuals, have a certain level of intelligence. For example, the study by Woolley et al. (2010) indicates that groups which perform well on one type of task will perform well on others. In a pair of empirical studies of groups interacting face-to-face, they found evidence of a collective intelligence factor, a measure of consistent group performance across a series of tasks, which was highly predictive of performance on a subsequent, more complex task. This collective intelligence factor differed from the individual intelligence of group members, and was significantly predicted by members’ social sensitivity – the ability to understand the emotions of others based on visual facial cues (Baron-Cohen et al. 2001).", "title": "" } ]
scidocsrr
ccba46f6feea5bbb3fb3fc700b51ebd0
Credit Scoring Models Using Soft Computing Methods: A Survey
[ { "docid": "5b9baa6587bc70c17da2b0512545c268", "text": "Credit scoring models have been widely studied in the areas of statistics, machine learning, and artificial intelligence (AI). Many novel approaches such as artificial neural networks (ANNs), rough sets, or decision trees have been proposed to increase the accuracy of credit scoring models. Since an improvement in accuracy of a fraction of a percent might translate into significant savings, a more sophisticated model should be proposed to significantly improving the accuracy of the credit scoring mode. In this paper, genetic programming (GP) is used to build credit scoring models. Two numerical examples will be employed here to compare the error rate to other credit scoring models including the ANN, decision trees, rough sets, and logistic regression. On the basis of the results, we can conclude that GP can provide better performance than other models. q 2005 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "537076966f77631a3e915eccc8223d2b", "text": "Finding domain invariant features is critical for successful domain adaptation and transfer learning. However, in the case of unsupervised adaptation, there is a significant risk of overfitting on source training data. Recently, a regularization for domain adaptation was proposed for deep models by (Ganin and Lempitsky, 2015). We build on their work by suggesting a more appropriate regularization for denoising autoencoders. Our model remains unsupervised and can be computed in a closed form. On standard text classification adaptation tasks, our approach yields the state of the art results, with an important reduction of the learning cost.", "title": "" }, { "docid": "7f2857c1bd23c7114d58c290f21bf7bd", "text": "Many contemporary organizations are placing a greater emphasis on their performance management systems as a means of generating higher levels of job performance. We suggest that producing performance increments may be best achieved by orienting the performance management system to promote employee engagement. To this end, we describe a new approach to the performance management process that includes employee engagement and the key drivers of employee engagement at each stage. We present a model of engagement management that incorporates the main ideas of the paper and suggests a new perspective for thinking about how to foster and manage employee engagement to achieve high levels of job", "title": "" }, { "docid": "4c6efebdf08a3c1c4cefc9cdd8950bab", "text": "Four patients are presented with the Goldenhar syndrome (GS) and cranial defects consisting of plagiocephaly, microcephaly, skull defects, or intracranial dermoid cysts. Twelve cases from the literature add hydrocephalus, encephalocele, and arhinencephaly to a growing list of brain anomalies in GS. As a group, these patients emphasize the variability of GS and the increased risk for developmental retardation with multiple, severe, or unusual manifestations. The temporal relation of proposed teratogenic events in GS provides an opportunity to reconstruct biological relationships within the 3-5-week human embryo.", "title": "" }, { "docid": "8ccbf0f95df6d4d3c8eba33befc0f6b7", "text": "Tactile graphics play an essential role in knowledge transfer for blind people. The tactile exploration of these graphics is often challenging because of the cognitive load caused by physiological constraints and their complexity. The coupling of physical tactile graphics with electronic devices offers to support the tactile exploration by auditory feedback. Often, these systems have strict constraints regarding their mobility or the process of coupling both components. Additionally, visually impaired people cannot appropriately benefit from their residual vision. This article presents a concept for 3D printed tactile graphics, which offers to use audio-tactile graphics with usual smartphones or tablet-computers. By using capacitive markers, the coupling of the tactile graphics with the mobile device is simplified. These tactile graphics integrating these markers can be printed in one turn by off-the-shelf 3D printers without any post-processing and allows us to use multiple elevation levels for graphical elements. Based on the developed generic concept on visually augmented audio-tactile graphics, we presented a case study for maps. A prototypical implementation was tested by a user study with visually impaired people. 
All the participants were able to interact with the 3D printed tactile maps using a standard tablet computer. To study the effect of visual augmentation of graphical elements, we conducted another comprehensive user study. We tested multiple types of graphics and obtained evidence that visual augmentation may offer clear advantages for the exploration of tactile graphics. Even participants with a minor residual vision could solve the tasks with visual augmentation more quickly and accurately.", "title": "" }, { "docid": "296e9204869a3a453dd304fc3b4b8c4b", "text": "Today, travelers are provided large amount information which includes Web sites and tourist magazines about introduction of tourist spot. However, it is not easy for users to process the information in a short time. Therefore travelers prefer to receive pertinent information easier and have that information presented in a clear and concise manner. This paper proposes a personalization method for tourist Point of Interest (POI) Recommendation.", "title": "" }, { "docid": "13e84c1160fbffd1d8f91d5274c4d8cc", "text": "This paper presents and demonstrates a class of 3-D integration platforms of substrate-integrated waveguide (SIW). The proposed right angle E-plane corner based on SIW technology enables the implementation of various 3-D architectures of planar circuits with the printed circuit board and other similar processes. This design scheme brings up attractive advantages in terms of cost, flexibility, and integration. Two circuit prototypes with both 0- and 45° vertical rotated arms are demonstrated. The straight version of the prototypes shows 0.5 dB of insertion loss from 30 to 40 GHz, while the rotated version gives 0.7 dB over the same frequency range. With this H-to-E-plane interconnect, a T-junction is studied and designed. Simulated results show 20-dB return loss over 19.25% of bandwidth. Measured results suggest an excellent performance within the experimental frequency range of 32-37.4 GHz, with 10-dB return loss and less than ±4° phase imbalance. An optimized wideband magic-T structure is demonstrated and fabricated. Both simulated and measured results show a very promising performance with very good isolation and power equality. With two 45° vertical rotated arm bends, two antennas are used to build up a dual polarization system. An isolation of 20 dB is shown over 32-40 GHz and the radiation patterns of the antenna are also given.", "title": "" }, { "docid": "309e14c07a3a340f7da15abeb527231d", "text": "The random forest algorithm, proposed by L. Breiman in 2001, has been extremely successful as a general-purpose classification and regression method. The approach, which combines several randomized decision trees and aggregates their predictions by averaging, has shown excellent performance in settings where the number of variables is much larger than the number of observations. Moreover, it is versatile enough to be applied to large-scale problems, is easily adapted to various ad-hoc learning tasks, and returns measures of variable importance. The present article reviews the most recent theoretical and methodological developments for random forests. Emphasis is placed on the mathematical forces driving the algorithm, with special attention given to the selection of parameters, the resampling mechanism, and variable importance measures. 
This review is intended to provide non-experts easy access to the main ideas.", "title": "" }, { "docid": "7f4701d8c9f651c3a551a91d19fd28d9", "text": "Road extraction from aerial images has been a hot research topic in the field of remote sensing image analysis. In this letter, a semantic segmentation neural network, which combines the strengths of residual learning and U-Net, is proposed for road area extraction. The network is built with residual units and has similar architecture to that of U-Net. The benefits of this model are twofold: first, residual units ease training of deep networks. Second, the rich skip connections within the network could facilitate information propagation, allowing us to design networks with fewer parameters, however, better performance. We test our network on a public road data set and compare it with U-Net and other two state-of-the-art deep-learning-based road extraction methods. The proposed approach outperforms all the comparing methods, which demonstrates its superiority over recently developed state of the arts.", "title": "" }, { "docid": "66b680500240631b9a4b682b33a5bafa", "text": "Multichannel customer management is “the design, deployment, and evaluation of channels to enhance customer value through effective customer acquisition, retention, and development” (Neslin, Scott A., D. Grewal, R. Leghorn, V. Shankar, M. L. Teerling, J. S. Thomas, P. C. Verhoef (2006), Challenges and Opportunities in Multichannel Management. Journal of Service Research 9(2) 95–113). Channels typically include the store, the Web, catalog, sales force, third party agency, call center and the like. In recent years, multichannel marketing has grown tremendously and is anticipated to grow even further. While we have developed a good understanding of certain issues such as the relative value of a multichannel customer over a single channel customer, several research and managerial questions still remain. We offer an overview of these emerging issues, present our future outlook, and suggest important avenues for future research. © 2009 Direct Marketing Educational Foundation, Inc. Published by Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "a4099a526548c6d00a91ea21b9f2291d", "text": "The robust principal component analysis (robust PCA) problem has been considered in many machine learning applications, where the goal is to decompose the data matrix to a low rank part plus a sparse residual. While current approaches are developed by only considering the low rank plus sparse structure, in many applications, side information of row and/or column entities may also be given, and it is still unclear to what extent could such information help robust PCA. Thus, in this paper, we study the problem of robust PCA with side information, where both prior structure and features of entities are exploited for recovery. We propose a convex problem to incorporate side information in robust PCA and show that the low rank matrix can be exactly recovered via the proposed method under certain conditions. In particular, our guarantee suggests that a substantial amount of low rank matrices, which cannot be recovered by standard robust PCA, become recoverable by our proposed method. The result theoretically justifies the effectiveness of features in robust PCA. 
In addition, we conduct synthetic experiments as well as a real application on noisy image classification to show that our method also improves the performance in practice by exploiting side information.", "title": "" }, { "docid": "40c5f333d037f1e9a26e186d823b336e", "text": "We present a simple, prepackaged solution to generating paraphrases of English sentences. We use the Paraphrase Database (PPDB) for monolingual sentence rewriting and provide machine translation language packs: prepackaged, tuned models that can be downloaded and used to generate paraphrases on a standard Unix environment. The language packs can be treated as a black box or customized to specific tasks. In this demonstration, we will explain how to use the included interactive webbased tool to generate sentential paraphrases.", "title": "" }, { "docid": "c2b1bb55522213987573b22fa407c937", "text": "We present a system for producing 3D animations using physical objects (i.e., puppets) as input. Puppeteers can load 3D models of familiar rigid objects, including toys, into our system and use them as puppets for an animation. During a performance, the puppeteer physically manipulates these puppets in front of a Kinect depth sensor. Our system uses a combination of image-feature matching and 3D shape matching to identify and track the physical puppets. It then renders the corresponding 3D models into a virtual set. Our system operates in real time so that the puppeteer can immediately see the resulting animation and make adjustments on the fly. It also provides 6D virtual camera \\\\rev{and lighting} controls, which the puppeteer can adjust before, during, or after a performance. Finally our system supports layered animations to help puppeteers produce animations in which several characters move at the same time. We demonstrate the accessibility of our system with a variety of animations created by puppeteers with no prior animation experience.", "title": "" }, { "docid": "0a4392285df7ddb92458ffa390f36867", "text": "A good model of object shape is essential in applications such as segmentation, detection, inpainting and graphics. For example, when performing segmentation, local constraints on the shapes can help where object boundaries are noisy or unclear, and global constraints can resolve ambiguities where background clutter looks similar to parts of the objects. In general, the stronger the model of shape, the more performance is improved. In this paper, we use a type of deep Boltzmann machine (Salakhutdinov and Hinton, International Conference on Artificial Intelligence and Statistics, 2009) that we call a Shape Boltzmann Machine (SBM) for the task of modeling foreground/background (binary) and parts-based (categorical) shape images. We show that the SBM characterizes a strong model of shape, in that samples from the model look realistic and it can generalize to generate samples that differ from training examples. We find that the SBM learns distributions that are qualitatively and quantitatively better than existing models for this task.", "title": "" }, { "docid": "f465475eb7bb52d455e3ed77b4808d26", "text": "Background Long-term dieting has been reported to reduce resting energy expenditure (REE) leading to weight regain once the diet has been curtailed. Diets are also difficult to follow for a significant length of time. 
The purpose of this preliminary proof of concept study was to examine the effects of short-term intermittent dieting during exercise training on REE and weight loss in overweight women.", "title": "" }, { "docid": "3c13399d0c869e58830a7efb8f6832a8", "text": "The use of supply frequencies above 50-60 Hz allows for an increase in the power density applied to the ozonizer electrode surface and an increase in ozone production for a given surface area, while decreasing the necessary peak voltage. Parallel-resonant converters are well suited for supplying the high capacitive load of ozonizers. Therefore, in this paper the current-fed parallel-resonant push-pull inverter is proposed as a good option to implement high-voltage high-frequency power supplies for ozone generators. The proposed converter is analyzed and some important characteristics are obtained. The design and implementation of the complete power supply are also shown. The UC3872 integrated circuit is proposed in order to operate the converter at resonance, allowing us to maintain a good response disregarding the changes in electric parameters of the transformer-ozonizer pair. Experimental results for a 50-W prototype are also provided.", "title": "" }, { "docid": "b76d5cfc22d0c39649ca093111864926", "text": "Runtime verification is the process of observing a sequence of events generated by a running system and comparing it to some formal specification for potential violations. We show how the use of a runtime monitor can greatly speed up the testing phase of a video game under development by automating the detection of bugs when the game is being played. We take advantage of the fact that a video game, contrarily to generic software, follows a special structure that contains a “game loop.” This game loop can be used to centralize the instrumentation and generate events based on the game's internal state. We report on experiments made on a sample of six real-world video games of various genres and sizes by successfully instrumenting and efficiently monitoring various temporal properties over their execution, including actual bugs reported in the games' bug tracking database in the course of their development.", "title": "" }, { "docid": "d34d8dd7ba59741bb5e28bba3e870ac4", "text": "Among those who have recently lost a job, social networks in general and online ones in particular may be useful to cope with stress and find new employment. This study focuses on the psychological and practical consequences of Facebook use following job loss. By pairing longitudinal surveys of Facebook users with logs of their online behavior, we examine how communication with different kinds of ties predicts improvements in stress, social support, bridging social capital, and whether they find new jobs. Losing a job is associated with increases in stress, while talking with strong ties is generally associated with improvements in stress and social support. Weak ties do not provide these benefits. Bridging social capital comes from both strong and weak ties. Surprisingly, individuals who have lost a job feel greater stress after talking with strong ties. Contrary to the \"strength of weak ties\" hypothesis, communication with strong ties is more predictive of finding employment within three months.", "title": "" }, { "docid": "337a738d386fa66725fe9be620365d5f", "text": "Change in a software is crucial to incorporate defect correction and continuous evolution of requirements and technology. 
Thus, development of quality models to predict the change proneness attribute of a software is important to effectively utilize and plan the finite resources during maintenance and testing phase of a software. In the current scenario, a variety of techniques like the statistical techniques, the Machine Learning (ML) techniques and the Search-based techniques (SBT) are available to develop models to predict software quality attributes. In this work, we assess the performance of ten machine learning and search-based techniques using data collected from three open source software. We first develop a change prediction model using one data set and then we perform inter-project validation using two other data sets in order to obtain unbiased and generalized results. The results of the study indicate comparable performance of SBT with other employed statistical and ML techniques. This study also supports inter project validation as we successfully applied the model created using the training data of one project on other similar projects and yield good results.", "title": "" }, { "docid": "c6a649a1eed332be8fc39bfa238f4214", "text": "The Internet of things (IoT), which integrates a variety of devices into networks to provide advanced and intelligent services, has to protect user privacy and address attacks such as spoofing attacks, denial of service (DoS) attacks, jamming, and eavesdropping. We investigate the attack model for IoT systems and review the IoT security solutions based on machine-learning (ML) techniques including supervised learning, unsupervised learning, and reinforcement learning (RL). ML-based IoT authentication, access control, secure offloading, and malware detection schemes to protect data privacy are the focus of this article. We also discuss the challenges that need to be addressed to implement these ML-based security schemes in practical IoT systems.", "title": "" }, { "docid": "9975e61afd0bf521c3ffbf29d0f39533", "text": "Computer security depends largely on passwords to authenticate human users. However, users have difficulty remembering passwords over time if they choose a secure password, i.e. a password that is long and random. Therefore, they tend to choose short and insecure passwords. Graphical passwords, which consist of clicking on images rather than typing alphanumeric strings, may help to overcome the problem of creating secure and memorable passwords. In this paper we describe PassPoints, a new and more secure graphical password system. We report an empirical study comparing the use of PassPoints to alphanumeric passwords. Participants created and practiced either an alphanumeric or graphical password. The participants subsequently carried out three longitudinal trials to input their password over the course of 6 weeks. The results show that the graphical password users created a valid password with fewer difficulties than the alphanumeric users. However, the graphical users took longer and made more invalid password inputs than the alphanumeric users while practicing their passwords. In the longitudinal trials the two groups performed similarly on memory of their password, but the graphical group took more time to input a password. r 2005 Elsevier Ltd. All rights reserved.", "title": "" } ]
scidocsrr
79de92bde0c38515918923ff0f2451aa
An improved GA and a novel PSO-GA-based hybrid algorithm
[ { "docid": "d8780989fc125b69beb456986819d624", "text": "The particle swarm optimization algorithm is analyzed using standard results from the dynamic system theory. Graphical parameter selection guidelines are derived. The exploration–exploitation tradeoff is discussed and illustrated. Examples of performance on benchmark functions superior to previously published results are given.  2002 Elsevier Science B.V. All rights reserved.", "title": "" } ]
[ { "docid": "ea1e84dfb1889826b0356dcd85182ec4", "text": "With the support of the wearable devices, healthcare services started a new phase in serving patients need. The new technology adds more facilities and luxury to the healthcare services, Also changes patients' lifestyles from the traditional way of monitoring to the remote home monitoring. Such new approach faces many challenges related to security as sensitive data get transferred through different type of channels. They are four main dimensions in terms of security scope such as trusted sensing, computation, communication, privacy and digital forensics. In this paper we will try to focus on the security challenges of the wearable devices and IoT and their advantages in healthcare sectors.", "title": "" }, { "docid": "894164566e284f0e4318d94cc6768871", "text": "This paper investigates the problems of signal reconstruction and blind deconvolution for graph signals that have been generated by an originally sparse input diffused through the network via the application of a graph filter operator. Assuming that the support of the sparse input signal is unknown, and that the diffused signal is observed only at a subset of nodes, we address the related problems of: 1) identifying the input and 2) interpolating the values of the diffused signal at the non-sampled nodes. We first consider the more tractable case where the coefficients of the diffusing graph filter are known and then address the problem of joint input and filter identification. The corresponding blind identification problems are formulated, novel convex relaxations are discussed, and modifications to incorporate a priori information on the sparse inputs are provided.", "title": "" }, { "docid": "c0d4068fa86fd14b0170a2acf1fdd252", "text": "This paper presents a 15-bit digital power amplifier (DPA) with 1.6W saturated output power. The topology of the polar switched-current DPA is discussed together with the architecture of the output transformer which is implemented in BEOL as well as in WLCSP metal layers. The chip is fabricated in a standard 28nm CMOS process and exhibits an EVM of 3.6%, E-UTRA ACLR of 34.1dB, output noise of −145.7dBc/Hz at 45 MHz offset and 22.4% DPA efficiency when generating a 26.8dBm LTE-1.4 output signal at 2.3GHz. The total area of the DPA is 0.5mm2.", "title": "" }, { "docid": "23ca24a7920f98796cf9ac695be3ffae", "text": "As software systems become more complex and configurable, failures due to misconfigurations are becoming a critical problem. Such failures often have serious functionality, security and financial consequences. Further, diagnosis and remediation for such failures require reasoning across the software stack and its operating environment, making it difficult and costly. We present a framework and tool called EnCore to automatically detect software misconfigurations. EnCore takes into account two important factors that are unexploited before: the interaction between the configuration settings and the executing environment, as well as the rich correlations between configuration entries. We embrace the emerging trend of viewing systems as data, and exploit this to extract information about the execution environment in which a configuration setting is used. EnCore learns configuration rules from a given set of sample configurations. With training data enriched with the execution context of configurations, EnCore is able to learn a broad set of configuration anomalies that spans the entire system. 
EnCore is effective in detecting both injected errors and known real-world problems - it finds 37 new misconfigurations in Amazon EC2 public images and 24 new configuration problems in a commercial private cloud. By systematically exploiting environment information and by learning correlation rules across multiple configuration settings, EnCore detects 1.6x to 3.5x more misconfiguration anomalies than previous approaches.", "title": "" }, { "docid": "8da0bdec21267924d16f9a04e6d9a7ef", "text": "Traffic light timing optimization is still an active line of research despite the wealth of scientific literature on the topic, and the problem remains unsolved for any non-toy scenario. One of the key issues with traffic light optimization is the large scale of the input information that is available for the controlling agent, namely all the traffic data that is continually sampled by the traffic detectors that cover the urban network. This issue has in the past forced researchers to focus on agents that work on localized parts of the traffic network, typically on individual intersections, and to coordinate every individual agent in a multi-agent setup. In order to overcome the large scale of the available state information, we propose to rely on the ability of deep Learning approaches to handle large input spaces, in the form of Deep Deterministic Policy Gradient (DDPG) algorithm. We performed several experiments with a range of models, from the very simple one (one intersection) to the more complex one (a big city section).", "title": "" }, { "docid": "fea12b3870cdb978b33e480482124cfd", "text": "The activity of labeling of documents according to their content is known as text categorization. Many experiments have been carried out to enhance text categorization by adding background knowledge to the document using knowledge repositories like Word Net, Open Project Directory (OPD), Wikipedia and Wikitology. In our previous work, we have carried out intensive experiments by extracting knowledge from Wikitology and evaluating the experiment on Support Vector Machine with 10- fold cross-validations. The results clearly indicate Wikitology is far better than other knowledge bases. In this paper we are comparing Support Vector Machine (SVM) and Naïve Bayes (NB) classifiers under text enrichment through Wikitology. We validated results with 10-fold cross validation and shown that NB gives an improvement of +28.78%, on the other hand SVM gives an improvement of +636% when compared with baseline results. Naïve Bayes classifier is better choice when external enriching is used through any external knowledge base.", "title": "" }, { "docid": "ecabde376c5611240e35d3eb574b1979", "text": "For high precision Synthetic Aperture Radar (SAR) processing, the determination of the Doppler centroid is indispensable. The Doppler frequency estimated from azimuth spectra, however, suffers from the fact that the data are sampled with the pulse repetition frequency (PRF) and an ambiguity about the correct PRF band remains. A new algorithm to resolve this ambiguity is proposed. It uses the fact that the Doppler centroid depends linearly on the transmitted radar frequency for a given antenna squint angle. This dependence is not subject to PRF ambiguities. It can be measured by Fourier transforming the SAR data in the range direction and estimating the Doppler centroid at each range frequency. The achievable accuracy is derived theoretically and verified with Seasat data of different scene content. 
The algorithm works best with low contrast scenes, where the conventional look correlation technique fails. It needs no iterative processing of the SAR data and causes only low computational load.", "title": "" }, { "docid": "645e69205aea3887d954f825306a1052", "text": "Continuous outlier detection in data streams has important applications in fraud detection, network security, and public health. The arrival and departure of data objects in a streaming manner impose new challenges for outlier detection algorithms, especially in time and space efficiency. In the past decade, several studies have been performed to address the problem of distance-based outlier detection in data streams (DODDS), which adopts an unsupervised definition and does not have any distributional assumptions on data values. Our work is motivated by the lack of comparative evaluation among the state-of-the-art algorithms using the same datasets on the same platform. We systematically evaluate the most recent algorithms for DODDS under various stream settings and outlier rates. Our extensive results show that in most settings, the MCOD algorithm offers the superior performance among all the algorithms, including the most recent algorithm Thresh LEAP.", "title": "" }, { "docid": "254f2ef4608ea3c959e049073ad063f8", "text": "Recently, the long-term evolution (LTE) is considered as one of the most promising 4th generation (4G) mobile standards to increase the capacity and speed of mobile handset networks [1]. In order to realize the LTE wireless communication system, the diversity and multiple-input multiple-output (MIMO) systems have been introduced [2]. In a MIMO mobile user terminal such as handset or USB dongle, at least two uncorrelated antennas should be placed within an extremely restricted space. This task becomes especially difficult when a MIMO planar antenna is designed for LTE band 13 (the corresponding wavelength is 390 mm). Due to the limited space available for antenna elements, the antennas are strongly coupled with each other and have narrow bandwidth.", "title": "" }, { "docid": "c4b6df3abf37409d6a6a19646334bffb", "text": "Classification in imbalanced domains is a recent challenge in data mining. We refer to imbalanced classification when data presents many examples from one class and few from the other class, and the less representative class is the one which has more interest from the point of view of the learning task. One of the most used techniques to tackle this problem consists in preprocessing the data previously to the learning process. This preprocessing could be done through under-sampling; removing examples, mainly belonging to the majority class; and over-sampling, by means of replicating or generating new minority examples. In this paper, we propose an under-sampling procedure guided by evolutionary algorithms to perform a training set selection for enhancing the decision trees obtained by the C4.5 algorithm and the rule sets obtained by PART rule induction algorithm. The proposal has been compared with other under-sampling and over-sampling techniques and the results indicate that the new approach is very competitive in terms of accuracy when comparing with over-sampling and it outperforms standard under-sampling. Moreover, the obtained models are smaller in terms of number of leaves or rules generated and they can considered more interpretable. The results have been contrasted through non-parametric statistical tests over multiple data sets. Crown Copyright 2009 Published by Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "1027ce2c8e3a231fe8ab3f469a857f82", "text": "There are two major challenges for a high-performance remote-sensing database. First, it must provide low-latency retrieval of very large volumes of spatio-temporal data. This requires effective declustering and placement of a multidimensional dataset onto a large disk farm. Second, the order of magnitude reduction in data-size due to postprocessing makes it imperative, from a performance perspective, that the postprocessing be done on the machine that holds the data. This requires careful coordination of computation and data retrieval. This paper describes the design, implementation and evaluation of Titan, a parallel shared-nothing database designed for handling remotesensing data. The computational platform for Titan is a 16-processor IBM SP-2 with four fast disks attached to each processor. Titan is currently operational and contains about 24 GB of AVHRR data from the NOAA-7 satellite. The experimental results show that Titan provides good performance for global queries and interactive response times for local queries.", "title": "" }, { "docid": "638336dba1dd589b0f708a9426483827", "text": "Girard's linear logic can be used to model programming languages in which each bound variable name has exactly one \"occurrence\"---i.e., no variable can have implicit \"fan-out\"; multiple uses require explicit duplication. Among other nice properties, \"linear\" languages need no garbage collector, yet have no dangling reference problems. We show a natural equivalence between a \"linear\" programming language and a stack machine in which the top items can undergo arbitrary permutations. Such permutation stack machines can be considered combinator abstractions of Moore's Forth programming language.", "title": "" }, { "docid": "dcee2be83eba32476268e1e4383b570d", "text": "Recent advances in the field of nanotechnology have led to the synthesis and characterization of an assortment of quasi-one-dimensional (Q1D) structures, such as nanowires, nanoneedles, nanobelts and nanotubes. These fascinating materials exhibit novel physical properties owing to their unique geometry with high aspect ratio. They are the potential building blocks for a wide range of nanoscale electronics, optoelectronics, magnetoelectronics, and sensing devices. Many techniques have been developed to grow these nanostructures with various compositions. Parallel to the success with group IV and groups III–V compounds semiconductor nanostructures, semiconducting metal oxide materials with typically wide band gaps are attracting increasing attention. This article provides a comprehensive review of the state-of-the-art research activities that focus on the Q1D metal oxide systems and their physical property characterizations. It begins with the synthetic mechanisms and methods that have been exploited to form these structures. A range of remarkable characteristics are then presented, organized into sections covering a number of metal oxides, such as ZnO, In2O3, SnO2, Ga2O3, and TiO2, etc., describing their electrical, optical, magnetic, mechanical and chemical sensing properties. These studies constitute the basis for developing versatile applications based on metal oxide Q1D systems, and the current progress in device development will be highlighted. # 2006 Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "33ba3582dc7873a7e14949775a9b26c1", "text": "Few conservation projects consider climate impacts or have a process for developing adaptation strategies. To advance climate adaptation for biodiversity conservation, we tested a step-by-step approach to developing adaptation strategies with 20 projects from diverse geographies. Project teams assessed likely climate impacts using historical climate data, future climate predictions, expert input, and scientific literature. They then developed adaptation strategies that considered ecosystems and species of concern, project goals, climate impacts, and indicators of progress. Project teams identified 176 likely climate impacts and developed adaptation strategies to address 42 of these impacts. The most common impacts were to habitat quantity or quality, and to hydrologic regimes. Nearly half of expected impacts were temperature-mediated. Twelve projects indicated that the project focus, either focal ecosystems and species or project boundaries, need to change as a result of considering climate impacts. More than half of the adaptation strategies were resistance strategies aimed at preserving the status quo. The rest aimed to make ecosystems and species more resilient in the face of expected changes. All projects altered strategies in some way, either by adding new actions, or by adjusting existing actions. Habitat restoration and enactment of policies and regulations were the most frequently prescribed, though every adaptation strategy required a unique combination of actions. While the effectiveness of these adaptation strategies remains to be evaluated, the application of consistent guidance has yielded important early lessons about how, when, and how often conservation projects may need to be modified to adapt to climate change.", "title": "" }, { "docid": "082630a33c0cc0de0e60a549fc57d8e8", "text": "Agricultural monitoring, especially in developing countries, can help prevent famine and support humanitarian efforts. A central challenge is yield estimation, i.e., predicting crop yields before harvest. We introduce a scalable, accurate, and inexpensive method to predict crop yields using publicly available remote sensing data. Our approach improves existing techniques in three ways. First, we forego hand-crafted features traditionally used in the remote sensing community and propose an approach based on modern representation learning ideas. We also introduce a novel dimensionality reduction technique that allows us to train a Convolutional Neural Network or Long-short Term Memory network and automatically learn useful features even when labeled training data are scarce. Finally, we incorporate a Gaussian Process component to explicitly model the spatio-temporal structure of the data and further improve accuracy. We evaluate our approach on county-level soybean yield prediction in the U.S. and show that it outperforms competing techniques.", "title": "" }, { "docid": "a0ee42eabf32de3b0307e9fbdfbaf857", "text": "To leverage modern hardware platforms to their fullest, more and more database systems embrace compilation of query plans to native code. In the research community, there is an ongoing debate about the best way to architect such query compilers. This is perceived to be a difficult task, requiring techniques fundamentally different from traditional interpreted query execution. 
\n We aim to contribute to this discussion by drawing attention to an old but underappreciated idea known as Futamura projections, which fundamentally link interpreters and compilers. Guided by this idea, we demonstrate that efficient query compilation can actually be very simple, using techniques that are no more difficult than writing a query interpreter in a high-level language. Moreover, we demonstrate how intricate compilation patterns that were previously used to justify multiple compiler passes can be realized in one single, straightforward, generation pass. Key examples are injection of specialized index structures, data representation changes such as string dictionaries, and various kinds of code motion to reduce the amount of work on the critical path.\n We present LB2: a high-level query compiler developed in this style that performs on par with, and sometimes beats, the best compiled query engines on the standard TPC-H benchmark.", "title": "" }, { "docid": "861f76c061b9eb52ed5033bdeb9a3ce5", "text": "2007S. Robson Walton Chair in Accounting, University of Arkansas 2007-2014; 2015-2016 Accounting Department Chair, University of Arkansas 2014Distinguished Professor, University of Arkansas 2005-2014 Professor, University of Arkansas 2005-2008 Ralph L. McQueen Chair in Accounting, University of Arkansas 2002-2005 Associate Professor, University of Kansas 1997-2002 Assistant Professor, University of Kansas", "title": "" }, { "docid": "243b03a37b5950f69ab5df937268592b", "text": "Now-a-days synthesis and characterization of silver nanoparticles (AgNPs) through biological entity is quite interesting to employ AgNPs for various biomedical applications in general and treatment of cancer in particular. This paper presents the green synthesis of AgNPs using leaf extract of Podophyllum hexandrum Royle and optimized with various parameters such as pH, temperature, reaction time, volume of extract and metal ion concentration for synthesis of AgNPs. TEM, XRD and FTIR were adopted for characterization. The synthesized nanoparticles were found to be spherical shaped with average size of 14 nm. Effects of AgNPs were analyzed against human cervical carcinoma cells by MTT Assay, quantification of ROS, RT-PCR and western blotting techniques. The overall result indicates that AgNPs can selectively inhibit the cellular mechanism of HeLa by DNA damage and caspase mediated cell death. This biological procedure for synthesis of AgNPs and selective inhibition of cancerous cells gives an alternative avenue to treat human cancer effectively.", "title": "" }, { "docid": "c2957e7378650911a09b3c605951ff38", "text": "Vehicular networking is at the corner from early research to final deployment. This phase requires more field testing and real-world experimentation. Most Field Operational Tests (FOTs) are based on proprietary commercial hardware that only allows for marginal modifications of the protocol stack. Furthermore, the roll-out of updated implementations for new or changing protocol standards often takes a prohibitively long time. We developed one of the first complete Open Source experimental and prototyping platform for vehicular networking solutions. Our system supports most of the ETSI ITS-G5 features and runs on standard Linux. 
New protocol features and updates could now easily be done by and shared with the vehicular networking R&D community.", "title": "" }, { "docid": "84f7b499cd608de1ee7443fcd7194f19", "text": "In this paper, we present a new computationally efficient numerical scheme for the minimizing flow approach for optimal mass transport (OMT) with applications to non-rigid 3D image registration. The approach utilizes all of the gray-scale data in both images, and the optimal mapping from image A to image B is the inverse of the optimal mapping from B to A. Further, no landmarks need to be specified, and the minimizer of the distance functional involved is unique. Our implementation also employs multigrid, and parallel methodologies on a consumer graphics processing unit (GPU) for fast computation. Although computing the optimal map has been shown to be computationally expensive in the past, we show that our approach is orders of magnitude faster then previous work and is capable of finding transport maps with optimality measures (mean curl) previously unattainable by other works (which directly influences the accuracy of registration). We give results where the algorithm was used to compute non-rigid registrations of 3D synthetic data as well as intra-patient pre-operative and post-operative 3D brain MRI datasets.", "title": "" } ]
scidocsrr
729d8f4ffe692fc53091e27534b97394
Effective Pattern Discovery for Text Mining
[ { "docid": "c698f7d6b487cc7c87d7ff215d7f12b2", "text": "This paper reports a controlled study with statistical signi cance tests on ve text categorization methods: the Support Vector Machines (SVM), a k-Nearest Neighbor (kNN) classi er, a neural network (NNet) approach, the Linear Leastsquares Fit (LLSF) mapping and a Naive Bayes (NB) classier. We focus on the robustness of these methods in dealing with a skewed category distribution, and their performance as function of the training-set category frequency. Our results show that SVM, kNN and LLSF signi cantly outperform NNet and NB when the number of positive training instances per category are small (less than ten), and that all the methods perform comparably when the categories are su ciently common (over 300 instances).", "title": "" }, { "docid": "ac25761de97d9aec895d1b8a92a44be3", "text": "Most research in text classification to date has used a “bag of words” representation in which each feature corresponds to a single word. This paper examines some alternative ways to represent text based on syntactic and semantic relationships between words (phrases, synonyms and hypernyms). We describe the new representations and try to justify our hypothesis that they could improve the performance of a rule-based learner. The representations are evaluated using the RIPPER learning algorithm on the Reuters-21578 and DigiTrad test corpora. On their own the new representations are not found to produce significant performance improvements. We also try combining classifiers based on different representations using a majority voting technique, and this improves performance on both test collections. In our opinion, more sophisticated Natural Language Processing techniques need to be developed before better text representations can be produced for classification.", "title": "" } ]
[ { "docid": "6c6eb7e817e210808018506953af1031", "text": "BACKGROUND\nNurses constitute the largest human resource element and have a great impact on quality of care and patient outcomes in health care organizations. The objective of this study was to examine the relationship between rewards and nurse motivation on public hospitals administrated by Addis Ababa health bureau.\n\n\nMETHODS\nA cross-sectional survey was conducted from June to December 2010 in 5 public hospitals in Addis Ababa. Among 794 nurses, 259 were selected as sample. Data was collected using self-administered questionnaire. After the data was collected, it was analysed using SPSS version 16.0 statistical software. The results were analysed in terms of descriptive statistics followed by inferential statistics on the variables.\n\n\nRESULTS\nA total of 230 questionnaires were returned from 259 questionnaires distributed to respondents. Results of the study revealed that nurses are not motivated and there is a statistical significant relationship between rewards and the nurse work motivation and a payment is the most important and more influential variable. Furthermore, there is significant difference in nurse work motivation based on age, educational qualification and work experience while there is no significant difference in nurse work motivation based on gender.\n\n\nCONCLUSION\nThe study shows that nurses are less motivated by rewards they received while rewards have significant and positive contribution for nurse motivation. Therefore, both hospital administrators' and Addis Ababa health bureau should revise the existing nurse motivation strategy.", "title": "" }, { "docid": "dd14f9eb9a9e0e4e0d24527cf80d04f4", "text": "The growing popularity of microblogging websites has transformed these into rich resources for sentiment mining. Even though opinion mining has more than a decade of research to boost about, it is mostly confined to the exploration of formal text patterns like online reviews, news articles etc. Exploration of the challenges offered by informal and crisp microblogging have taken roots but there is scope for a large way ahead. The proposed work aims at developing a hybrid model for sentiment classification that explores the tweet specific features and uses domain independent and domain specific lexicons to offer a domain oriented approach and hence analyze and extract the consumer sentiment towards popular smart phone brands over the past few years. The experiments have proved that the results improve by around 2 points on an average over the unigram baseline.", "title": "" }, { "docid": "2ce4d585edd54cede6172f74cf9ab8bb", "text": "Enterprise resource planning (ERP) systems have been widely implemented by numerous firms throughout the industrial world. While success stories of ERP implementation abound due to its potential in resolving the problem of fragmented information, a substantial number of these implementations fail to meet the goals of the organization. Some are abandoned altogether and others contribute to the failure of an organization. This article seeks to identify the critical factors of ERP implementation and uses statistical analysis to further delineate the patterns of adoption of the various concepts. A cross-sectional mail survey was mailed to business executives who have experience in the implementation of ERP systems. The results of this study provide empirical evidence that the theoretical constructs of ERP implementation are followed at varying levels. 
It offers some fresh insights into the current practice of ERP implementation. In addition, this study fills the need for ERP implementation constructs that can be utilized for further study of this important topic.", "title": "" }, { "docid": "123b35d403447a29eaf509fa707eddaa", "text": "Technology is the vital criteria to boosting the quality of life for everyone from new-borns to senior citizens. Thus, any technology to enhance the quality of life society has a value that is priceless. Nowadays Smart Wearable Technology (SWTs) innovation has been coming up to different sectors and is gaining momentum to be implemented in everyday objects. The successful adoption of SWTs by consumers will allow the production of new generations of innovative and high value-added products. The study attempts to predict the dynamics that play a role in the process through which consumers accept wearable technology. The research build an integrated model based on UTAUT2 and some external variables in order to investigate the direct and moderating effects of human expectation and behaviour on the awareness and adoption of smart products such as watch and wristband fitness. Survey will be chosen in order to test our model based on consumers. In addition, our study focus on different rate of adoption and expectation differences between early adopters and early majority in order to explore those differences and propose techniques to successfully cross the chasm between these two groups according to “Chasm theory”. For this aim and due to lack of prior research, Semi-structured focus groups will be used to obtain qualitative data for our research. Originality/value: To date, a few research exists addressing the adoption of smart wearable technologies. Therefore, the examination of consumers behaviour towards SWTs may provide orientations into the future that are useful for managers who can monitor how consumers make choices, how manufacturers should design successful market strategies, and how regulators can proscribe manipulative behaviour in this industry.", "title": "" }, { "docid": "15f51cbbb75d236a5669f613855312e0", "text": "The recent work of Gatys et al., who characterized the style of an image by the statistics of convolutional neural network filters, ignited a renewed interest in the texture generation and image stylization problems. While their image generation technique uses a slow optimization process, recently several authors have proposed to learn generator neural networks that can produce similar outputs in one quick forward pass. While generator networks are promising, they are still inferior in visual quality and diversity compared to generation-by-optimization. In this work, we advance them in two significant ways. First, we introduce an instance normalization module to replace batch normalization with significant improvements to the quality of image stylization. Second, we improve diversity by introducing a new learning formulation that encourages generators to sample unbiasedly from the Julesz texture ensemble, which is the equivalence class of all images characterized by certain filter responses. 
Together, these two improvements take feed forward texture synthesis and image stylization much closer to the quality of generation-via-optimization, while retaining the speed advantage.", "title": "" }, { "docid": "b9d78f22647d00aab0a79aa0c5dacdcf", "text": "Traditional GANs use a deterministic generator function (typically a neural network) to transform a random noise input z to a sample x that the discriminator seeks to distinguish. We propose a new GAN called Bayesian Conditional Generative Adversarial Networks (BC-GANs) that use a random generator function to transform a deterministic input y′ to a sample x. Our BC-GANs extend traditional GANs to a Bayesian framework, and naturally handle unsupervised learning, supervised learning, and semi-supervised learning problems. Experiments show that the proposed BC-GANs outperforms the state-of-the-arts.", "title": "" }, { "docid": "0a09f894029a0b8730918c14906dca9e", "text": "In the last few years, machine learning has become a very popular tool for analyzing financial text data, with many promising results in stock price forecasting from financial news, a development with implications for the E cient Markets Hypothesis (EMH) that underpins much economic theory. In this work, we explore recurrent neural networks with character-level language model pre-training for both intraday and interday stock market forecasting. In terms of predicting directional changes in the Standard & Poor’s 500 index, both for individual companies and the overall index, we show that this technique is competitive with other state-of-the-art approaches.", "title": "" }, { "docid": "115ed03ccee62fafc1606e6f6fdba1ce", "text": "High voltage SF6 circuit breaker must meet the breaking requirement for large short-circuit current, and ensure absence of breakdown after breaking small current. A 126kV high voltage SF6 circuit breaker was used as the research object in this paper. Based on the calculation results of non-equilibrium arc plasma material parameters, the distribution of pressure, temperature and density were calculated during the breaking progress. The electric field distribution was calculated in the course of flow movement, considering the influence of space charge on dielectric voltage. The change rule of the dielectric recovery progress was given based on the stream theory. The dynamic breakdown test circuit was built to measure the values of breakdown voltage under different open distance. The simulation results and experimental data are analyzed and the results show that: 1) Dielectric recovery speed (175kV/ms) is significantly faster than the voltage recovery rate (37.7kV/ms) during the arc extinguishing process. 2) The shorter the small current arcing time, the smaller the breakdown margin, so it is necessary to keep the arcing time longer than 0.5ms to ensure a large breakdown margin. 3) The calculated results are in good agreement with the experimental results. Since the breakdown voltage is less than the TRV in some test points, restrike may occur within 0.5ms after breaking, so arc extinguishment should be avoid in this time range.", "title": "" }, { "docid": "464065569c6540ac0c4fde8a1f72105d", "text": "Semantic role labeling (SRL) is a method for the semantic analysis of texts that adds a level of semantic abstraction on top of syntactic analysis, for instance adding semantic role labels like Agent on top of syntactic functions like Subject . 
SRL has been shown to benefit various natural language processing applications such as question answering, information extraction, and summarization. Automatic SRL systems are typically based on a predefined model of semantic predicate argument structure incorporated in lexical knowledge bases like PropBank or FrameNet. They are trained using supervised or semi-supervised machine learning methods using training data labeled with predicate (word sense) and role labels. Even state-of-the-art systems based on deep learning still rely on a labeled training set. However, despite the success in an experimental setting, the real-world application of SRL methods is still prohibited by severe coverage problems (lexicon coverage problem) and lack of domain-relevant training data for training supervised systems (domain adaptation problem). These issues apply to English, but are even more severe for other languages, for which only small resources exist. The goal of this thesis is to develop knowledge-based methods to improve lexicon coverage and training data coverage for SRL. We use linked lexical knowledge bases to extend the lexicon coverage and as a basis for automatic training data generation across languages and domains. Links between lexical resources have already been previously used to address this problem, but the linkings have not been explored and applied at a large scale and the resulting generated training data only contained predicate (word sense) labels, but no role labels. To create predicate and role labels, corpus-based methods have been used. These rely on the existence of labeled training data as sources for label transfer to unlabeled corpora. For certain languages, like German or Spanish, several lexical knowledge bases, but only small amounts of labeled training data exist. For such languages, knowledge-based methods promise greater improvements. In our experiments, we target FrameNet, a lexical-semantic resource with a strong focus on semantic abstraction and generalization, but the methods developed in this thesis can be extended to other models of predicate argument structure, like VerbNet and PropBank. This", "title": "" }, { "docid": "69624e1501b897bf1a9f9a5a84132da3", "text": "360° videos and Head-Mounted Displays (HMDs) are getting increasingly popular. However, streaming 360° videos to HMDs is challenging. This is because only video content in viewers’ Field-of-Views (FoVs) is rendered, and thus sending complete 360° videos wastes resources, including network bandwidth, storage space, and processing power. Optimizing the 360° video streaming to HMDs is, however, highly data and viewer dependent, and thus dictates real datasets. However, to our best knowledge, such datasets are not available in the literature. In this paper, we present our datasets of both content data (such as image saliency maps and motion maps derived from 360° videos) and sensor data (such as viewer head positions and orientations derived from HMD sensors). We put extra efforts to align the content and sensor data using the timestamps in the raw log files. The resulting datasets can be used by researchers, engineers, and hobbyists to either optimize existing 360° video streaming applications (like rate-distortion optimization) and novel applications (like crowd-driven camera movements). We believe that our dataset will stimulate more research activities along this exciting new research direction. ACM Reference format: Wen-Chih Lo, Ching-Ling Fan, Jean Lee, Chun-Ying Huang, Kuan-Ta Chen, and Cheng-Hsin Hsu.
2017. 360° Video Viewing Dataset in Head-Mounted Virtual Reality. In Proceedings of MMSys’17, Taipei, Taiwan, June 20-23, 2017, 6 pages. DOI: http://dx.doi.org/10.1145/3083187.3083219 CCS Concept • Information systems→Multimedia streaming", "title": "" }, { "docid": "74ce3b76d697d59df0c5d3f84719abb8", "text": "Existing Byzantine fault tolerance (BFT) protocols face significant challenges in the consortium blockchain scenario. On the one hand, we can make little assumptions about the reliability and security of the underlying Internet. On the other hand, the applications on consortium blockchains demand a system as scalable as the Bitcoin but providing much higher performance, as well as provable safety. We present a new BFT protocol, Gosig, that combines crypto-based secret leader selection and multi-round voting in the protocol layer with implementation layer optimizations such as gossip-based message propagation. In particular, Gosig guarantees safety even in a network fully controlled by adversaries, while providing provable liveness with easy-to-achieve network connectivity assumption. On a wide area testbed consisting of 140 Amazon EC2 servers spanning 14 cities on five continents, we show that Gosig can achieve over 4,000 transactions per second with less than 1 minute transaction confirmation time.", "title": "" }, { "docid": "e911045eb1c6469fdaa38102901f104f", "text": "Adversarial attacks to image classification systems present challenges to convolutional networks and opportunities for understanding them. This study suggests that adversarial perturbations on images lead to noise in the features constructed by these networks. Motivated by this observation, we develop new network architectures that increase adversarial robustness by performing feature denoising. Specifically, our networks contain blocks that denoise the features using non-local means or other filters; the entire networks are trained end-to-end. When combined with adversarial training, our feature denoising networks substantially improve the state-of-the-art in adversarial robustness in both white-box and black-box attack settings. On ImageNet, under 10-iteration PGD white-box attacks where prior art has 27.9% accuracy, our method achieves 55.7%; even under extreme 2000-iteration PGD white-box attacks, our method secures 42.6% accuracy. A network based on our method was ranked first in Competition on Adversarial Attacks and Defenses (CAAD) 2018 — it achieved 50.6% classification accuracy on a secret, ImageNet-like test dataset against 48 unknown attackers, surpassing the runner-up approach by ∼10%. Code and models will be made publicly available.", "title": "" }, { "docid": "7b7f5a18bb7629c48c9fbe9475aa0f0c", "text": "These are the notes for my quarter-long course on basic stability theory at UCLA (MATH 285D, Winter 2015). The presentation highlights some relations to set theory and cardinal arithmetic reflecting my impression about the tastes of the audience. We develop the general theory of local stability instead of specializing to the finite rank case, and touch on some generalizations of stability such as NIP and simplicity. The material in this notes is based on [Pil02, Pil96], [vdD05], [TZ12], [Cas11a, Cas07], [Sim15], [Poi01] and [Che12]. I would also like to thank the following people for their comments and suggestions: Tyler Arant, Madeline Barnicle, Allen Gehret, Omer Ben Neria, Anton Bobkov, Jesse Han, Pietro Kreitlon Carolino, Andrew Marks, Alex Mennen, Assaf Shani, John Susice, Spencer Unger.
Comments and corrections are very welcome (chernikov@math.ucla.edu, http://www.math.ucla.edu/~chernikov/).", "title": "" }, { "docid": "8b91d7299926329623e528b52880a17f", "text": "The main objective of this paper is to enhance the university's monitoring system taking into account factors such as reliability, time saving, and easy control. The proposed system consists of a mobile RFID solution in a logical context. The system prototype and its small scale application was a complete success. However, the more practical phase will not be immediately ready because a large setup is required and a part of the existing system has to be completely disabled. Some software modifications in the RFID system can be easily done in order for the system to be ready for a new application. In this paper, advantages and disadvantages of the proposed RFID system will be presented.", "title": "" }, { "docid": "d65aa05f6eb97907fe436ff50628a916", "text": "The process of stool transfer from healthy donors to the sick, known as faecal microbiota transplantation (FMT), has an ancient history. However, only recently researchers started investigating its applications in an evidence-based manner. Current knowledge of the microbiome, the concept of dysbiosis and results of preliminary research suggest that there is an association between gastrointestinal bacterial disruption and certain disorders. Researchers have studied the effects of FMT on various gastrointestinal and non-gastrointestinal diseases, but have been unable to precisely pinpoint specific bacterial strains responsible for the observed clinical improvement or futility of the process. The strongest available data support the efficacy of FMT in the treatment of recurrent Clostridium difficile infection with cure rates reported as high as 90% in clinical trials. The use of FMT in other conditions including inflammatory bowel disease, functional gastrointestinal disorders, obesity and metabolic syndrome is still controversial. Results from clinical studies are conflicting, which reflects the gap in our knowledge of the microbiome composition and function, and highlights the need for a more defined and personalised microbial isolation and transfer.", "title": "" }, { "docid": "5ea7ad08d686ab5fbfebc9717b39895d", "text": "Most deep reinforcement and imitation learning methods are data-driven and do not utilize the underlying problem structure. While these methods have achieved great success on many challenging tasks, several key problems such as generalization, data efficiency, compositionality etc. remain open. Utilizing problem structure in the form of architecture design, priors, structured losses, domain knowledge etc. may be a viable strategy to solve some of these problems. In this thesis, we present two approaches towards integrating problem structure with deep reinforcement and imitation learning methods. In the first part of the thesis, we consider reinforcement learning problems where parameters of the model vary with its phase while the agent attempts to learn through its interactions with the environment. We propose phase-parameterized policies and value function approximators which explicitly enforce a phase structure to the policy or value space to better model such environments. We apply our phase-parameterized reinforcement learning approach to both feed-forward and recurrent deep networks in the context of trajectory optimization and locomotion problems. 
Our experiments show that our proposed approach has superior modeling performance and leads to improved sample complexity when compared with traditional function approximators in cyclic and linear phase environments. In the second part of the thesis, we present a framework that incorporates structure in imitation learning by modelling the imitation of complex tasks or activities as a composition of easier subtasks. We propose a new algorithm based on the Generative Adversarial Imitation Learning (GAIL) framework which automatically learns sub-task policies from unsegmented demonstrations. Our approach leverages the idea of directed or causal information to segment demonstrations of complex tasks into simpler sub-tasks and learn sub-task policies that can then be composed together to perform complicated activities. We thus call our approach Directed-Information GAIL. We experiment with both discrete and continuous state-action environments and show that our proposed approach is able to find meaningful sub-tasks from unsegmented trajectories which are then be combined to perform more complicated tasks.", "title": "" }, { "docid": "0686319ad678ff3e645b423f090c74de", "text": "We consider the challenging problem of entity typing over an extremely fine grained set of types, wherein a single mention or entity can have many simultaneous and often hierarchically-structured types. Despite the importance of the problem, there is a relative lack of resources in the form of fine-grained, deep type hierarchies aligned to existing knowledge bases. In response, we introduce TypeNet, a dataset of entity types consisting of over 1941 types organized in a hierarchy, obtained by manually annotating a mapping from 1081 Freebase types to WordNet. We also experiment with several models comparable to state-of-the-art systems and explore techniques to incorporate a structure loss on the hierarchy with the standard mention typing loss, as a first step towards future research on this dataset.", "title": "" }, { "docid": "da87c8385ac485fe5d2903e27803c801", "text": "It's not surprisingly when entering this site to get the book. One of the popular books now is the polygon mesh processing. You may be confused because you can't find the book in the book store around your city. Commonly, the popular book will be sold quickly. And when you have found the store to buy the book, it will be so hurt when you run out of it. This is why, searching for this popular book in this website will give you benefit. You will not run out of this book.", "title": "" }, { "docid": "357b798f0429a29bb3210cfc3f031c3a", "text": "The Facial Action Coding System (FACS) is a widely used protocol for recognizing and labelling facial expression by describing the movement of muscles of the face. FACS is used to objectively measure the frequency and intensity of facial expressions without assigning any emotional meaning to those muscle movements. Instead FACS breaks down facial expressions into their smallest discriminable movements called Action Units. Each Action Unit creates a distinct change in facial appearance, such as an eyebrow lift or nose wrinkle. FACS coders can identify the Action Units which are present on the face when viewing still images or videos. Psychological research has used FACS to examine a variety of research questions including social-emotional development, neuropsychiatric disorders, and deception. 
In the course of this report we provide an overview of FACS and the Action Units, its reliability as a measure, and how it has been applied in some key areas of psychological research.", "title": "" }, { "docid": "115b89c782465a740e5e7aa2cae52669", "text": "Japan discards approximately 18 million tonnes of food annually, an amount that accounts for 40% of national food production. In recent years, a number of measures have been adopted at the institutional level to tackle this issue, showing increasing commitment of the government and other organizations. Along with the aim of environmental sustainability, food waste recycling, food loss prevention and consumer awareness raising in Japan are clearly pursuing another common objective. Although food loss and waste problems have been publicly acknowledged only very recently, strong implications arise from the economic and cultural history of the Japanese food system. Specific national concerns over food security have accompanied the formulation of current national strategies whose underlying causes and objectives add a unique facet to Japan’s efforts with respect to those of other developed countries’. Fighting Food Loss and Food Waste in Japan", "title": "" } ]
scidocsrr
3c813c21dbb065c9da5562d21be5b73b
Toxic Behaviors in Esports Games: Player Perceptions and Coping Strategies
[ { "docid": "ac46286c7d635ccdcd41358666026c12", "text": "This paper represents our first endeavor to explore how to better understand the complex nature, scope, and practices of eSports. Our goal is to explore diverse perspectives on what defines eSports as a starting point for further research. Specifically, we critically reviewed existing definitions/understandings of eSports in different disciplines. We then interviewed 26 eSports players and qualitatively analyzed their own perceptions of eSports. We contribute to further exploring definitions and theories of eSports for CHI researchers who have considered online gaming a serious and important area of research, and highlight opportunities for new avenues of inquiry for researchers who are interested in designing technologies for this unique genre.", "title": "" }, { "docid": "3d7fabdd5f56c683de20640abccafc44", "text": "The capacity to exercise control over the nature and quality of one's life is the essence of humanness. Human agency is characterized by a number of core features that operate through phenomenal and functional consciousness. These include the temporal extension of agency through intentionality and forethought, self-regulation by self-reactive influence, and self-reflectiveness about one's capabilities, quality of functioning, and the meaning and purpose of one's life pursuits. Personal agency operates within a broad network of sociostructural influences. In these agentic transactions, people are producers as well as products of social systems. Social cognitive theory distinguishes among three modes of agency: direct personal agency, proxy agency that relies on others to act on one's behest to secure desired outcomes, and collective agency exercised through socially coordinative and interdependent effort. Growing transnational embeddedness and interdependence are placing a premium on collective efficacy to exercise control over personal destinies and national life.", "title": "" } ]
[ { "docid": "244745da710e8c401173fe39359c7c49", "text": "BACKGROUND\nIntegrating information from the different senses markedly enhances the detection and identification of external stimuli. Compared with unimodal inputs, semantically and/or spatially congruent multisensory cues speed discrimination and improve reaction times. Discordant inputs have the opposite effect, reducing performance and slowing responses. These behavioural features of crossmodal processing appear to have parallels in the response properties of multisensory cells in the superior colliculi and cerebral cortex of non-human mammals. Although spatially concordant multisensory inputs can produce a dramatic, often multiplicative, increase in cellular activity, spatially disparate cues tend to induce a profound response depression.\n\n\nRESULTS\nUsing functional magnetic resonance imaging (fMRI), we investigated whether similar indices of crossmodal integration are detectable in human cerebral cortex, and for the synthesis of complex inputs relating to stimulus identity. Ten human subjects were exposed to varying epochs of semantically congruent and incongruent audio-visual speech and to each modality in isolation. Brain activations to matched and mismatched audio-visual inputs were contrasted with the combined response to both unimodal conditions. This strategy identified an area of heteromodal cortex in the left superior temporal sulcus that exhibited significant supra-additive response enhancement to matched audio-visual inputs and a corresponding sub-additive response to mismatched inputs.\n\n\nCONCLUSIONS\nThe data provide fMRI evidence of crossmodal binding by convergence in the human heteromodal cortex. They further suggest that response enhancement and depression may be a general property of multisensory integration operating at different levels of the neuroaxis and irrespective of the purpose for which sensory inputs are combined.", "title": "" }, { "docid": "9f5b61ad41dceff67ab328791ed64630", "text": "In this paper we present a resource-adaptive framework for real-time vision-aided inertial navigation. Specifically, we focus on the problem of visual-inertial odometry (VIO), in which the objective is to track the motion of a mobile platform in an unknown environment. Our primary interest is navigation using miniature devices with limited computational resources, similar for example to a mobile phone. Our proposed estimation framework consists of two main components: (i) a hybrid EKF estimator that integrates two algorithms with complementary computational characteristics, namely a sliding-window EKF and EKF-based SLAM, and (ii) an adaptive image-processing module that adjusts the number of detected image features based oadaptive image-processing module that adjusts the number of detected image features based on the availability of resources. By combining the hybrid EKF estimator, which optimally utilizes the feature measurements, with the adaptive image-processing algorithm, the proposed estimation architecture fully utilizes the system's computational resources. We present experimental results showing that the proposed estimation framework isn the availability of resources. By combining the hybrid EKF estimator, which optimally utilizes the feature measurements, with the adaptive image-processing algorithm, the proposed estimation architecture fully utilizes the system's computational resources. 
We present experimental results showing that the proposed estimation framework is capable of real-time processing of image and inertial data on the processor of a mobile phone.", "title": "" }, { "docid": "6779d20fd95ff4525404bdd4d3c7df4b", "text": "A new method is presented for adaptive document image binarization, where the page is considered as a collection of subcomponents such as text, background and picture. The problems caused by noise, illumination and many source type-related degradations are addressed. Two new algorithms are applied to determine a local threshold for each pixel. The performance evaluation of the algorithm utilizes test images with ground-truth, evaluation metrics for binarization of textual and synthetic images, and a weight-based ranking procedure for the \"nal result presentation. The proposed algorithms were tested with images including di!erent types of document components and degradations. The results were compared with a number of known techniques in the literature. The benchmarking results show that the method adapts and performs well in each case qualitatively and quantitatively. ( 1999 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "1dc7b9dc4f135625e2680dcde8c9e506", "text": "This paper empirically analyzes di erent e ects of advertising in a nondurable, experience good market. A dynamic learning model of consumer behavior is presented in which we allow both \\informative\" e ects of advertising and \\prestige\" or \\image\" e ects of advertising. This learning model is estimated using consumer level panel data tracking grocery purchases and advertising exposures over time. Empirical results suggest that in this data, advertising's primary e ect was that of informing consumers. The estimates are used to quantify the value of this information to consumers and evaluate welfare implications of an alternative advertising regulatory regime. JEL Classi cations: D12, M37, D83 ' Economics Dept., Boston University, Boston, MA 02115 (ackerber@bu.edu). This paper is a revised version of the second and third chapters of my doctoral dissertation at Yale University. Many thanks to my advisors: Steve Berry and Ariel Pakes, as well as Lanier Benkard, Russell Cooper, Gautam Gowrisankaran, Sam Kortum, Mike Riordan, John Rust, Roni Shachar, and many seminar participants, including most recently those at the NBER 1997Winter IO meetings, for advice and comments. I thank the Yale School of Management for gratefully providing the data used in this study. Financial support from the Cowles Foundation in the form of the Arvid Anderson Dissertation Fellowship is acknowledged and appreciated. All remaining errors in this paper are my own.", "title": "" }, { "docid": "f26680bb9306ca413d0fd36efa406107", "text": "Frequency-domain concepts and terminology are commonly used to describe antennas. These are very satisfactory for a CW or narrowband application. However, their validity is questionable for an instantaneous wideband excitation. Time-domain and/or wideband analyses can provide more insight and more effective terminology. Two approaches for this time-domain analysis have been described. The more complete one uses the transfer function, a function which describes the amplitude and phase of the response over the entire frequency spectrum. 
While this is useful for evaluating the overall response of a system, it may not be practical when trying to characterize an antenna's performance, and trying to compare it with that of other antennas. A more convenient and descriptive approach uses time-domain parameters, such as efficiency, energy pattern, receiving area, etc., with the constraint that the reference or excitation signal is known. The utility of both approaches, for describing the time-domain performance, was demonstrated for antennas which are both small and large, in comparison to the length of the reference signal. The approaches have also been used for other antennas, such as arrays, where they also could be applied to measure the effects of mutual impedance, for a wide-bandwidth signal. The time-domain ground-plane antenna range, on which these measurements were made, is suitable for symmetric antennas. However, the approach can be readily adapted to asymmetric antennas, without a ground plane, by using suitable reference antennas.", "title": "" }, { "docid": "c8b57dc6e3ef7c6b8712733ec6177275", "text": "A student information system provides a simple interface for the easy collation and maintenance of all manner of student information. The creation and management of accurate, up-to-date information regarding students' academic careers is critical for students and for the faculties and administration of Sebha University in Libya and for any other educational institution. A student information system deals with all kinds of data from enrollment to graduation, including program of study, attendance record, payment of fees and examination results to name but a few. All these data need to be made available through a secure, online interface embedded in a university's website. To lay the groundwork for such a system, first we need to build the student database to be integrated with the system. Therefore we proposed and implemented an online web-based system, which we named the student data system (SDS), to collect and correct all student data at Sebha University. The output of the system was evaluated by using a similarity (Euclidean distance) algorithm. The results showed that the new data collected by the SDS can fill the gaps and correct the errors in the old manual data records.", "title": "" }, { "docid": "7b7e41ced300aeff7916509c04c4fd6a", "text": "We present and evaluate various content-based recommendation models that make use of user and item profiles defined in terms of weighted lists of social tags. The studied approaches are adaptations of the Vector Space and Okapi BM25 information retrieval models. We empirically compare the recommenders using two datasets obtained from Delicious and Last.fm social systems, in order to analyse the performance of the approaches in scenarios with different domains and tagging behaviours.", "title": "" }, { "docid": "3763da6b72ee0a010f3803a901c9eeb2", "text": "As NAND flash memory manufacturers scale down to smaller process technology nodes and store more bits per cell, reliability and endurance of flash memory reduce. Wear-leveling and error correction coding can improve both reliability and endurance, but finding effective algorithms requires a strong understanding of flash memory error patterns. To enable such understanding, we have designed and implemented a framework for fast and accurate characterization of flash memory throughout its lifetime. This paper examines the complex flash errors that occur at 30-40nm flash technologies.
We demonstrate distinct error patterns, such as cycle-dependency, location-dependency and value-dependency, for various types of flash operations. We analyze the discovered error patterns and explain why they exist from a circuit and device standpoint. Our hope is that the understanding developed from this characterization serves as a building block for new error tolerance algorithms for flash memory.", "title": "" }, { "docid": "aa73df5eadafff7533994c05a8d3c415", "text": "In this paper, we report on the outcomes of the European project EduWear. The aim of the project was to develop a construction kit with smart textiles and to examine its impact on young people. The construction kit, including a suitable programming environment and a workshop concept, was adopted by children in a number of workshops.\n The evaluation of the workshops showed that designing, creating, and programming wearables with a smart textile construction kit allows for creating personal meaningful projects which relate strongly to aspects of young people's life worlds. Through their construction activities, participants became more self-confident in dealing with technology and were able to draw relations between their own creations and technologies present in their environment. We argue that incorporating such constructionist processes into an appropriate workshop concept is essential for triggering thought processes about the character of digital media beyond the construction process itself.", "title": "" }, { "docid": "f119b0ee9a237ab1e9acdae19664df0f", "text": "Recent editorials in this journal have defended the right of eminent biologist James Watson to raise the unpopular hypothesis that people of sub-Saharan African descent score lower, on average, than people of European or East Asian descent on tests of general intelligence. As those editorials imply, the scientific evidence is substantial in showing a genetic contribution to these differences. The unjustified ill treatment meted out to Watson therefore requires setting the record straight about the current state of the evidence on intelligence, race, and genetics. In this paper, we summarize our own previous reviews based on 10 categories of evidence: The worldwide distribution of test scores; the g factor of mental ability; heritability differences; brain size differences; trans-racial adoption studies; racial admixture studies; regression-to-the-mean effects; related life-history traits; human origins research; and the poverty of predictions from culture-only explanations. The preponderance of evidence demonstrates that in intelligence, brain size, and other life-history variables, East Asians average a higher IQ and larger brain than Europeans who average a higher IQ and larger brain than Africans. Further, these group differences are 50–80% heritable. These are facts, not opinions and science must be governed by data. There is no place for the ‘‘moralistic fallacy’’ that reality must conform to our social, political, or ethical desires. !c 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "7bd3f6b7b2f79f08534b70c16be91c02", "text": "This paper describes a dual-loop delay-locked loop (DLL) which overcomes the problem of a limited delay range by using multiple voltage-controlled delay lines (VCDLs). A reference loop generates quadrature clocks, which are then delayed with controllable amounts by four VCDLs and multiplexed to generate the output clock in a main loop. 
This architecture enables the DLL to emulate the infinite-length VCDL with multiple finite-length VCDLs. The DLL incorporates a replica biasing circuit for low-jitter characteristics and a duty cycle corrector immune to prevalent process mismatches. A test chip has been fabricated using a 0.25m CMOS process. At 400 MHz, the peak-to-peak jitter with a quiet 2.5-V supply is 54 ps, and the supply-noise sensitivity is 0.32 ps/mV.", "title": "" }, { "docid": "b0727e320a1c532bd3ede4fd892d8d01", "text": "Semantic technologies could facilitate realizing features like interoperability and reasoning for Internet of Things (IoT). However, the dynamic and heterogeneous nature of IoT data, constrained resources, and real-time requirements set challenges for applying these technologies. In this paper, we study approaches for delivering semantic data from IoT nodes to distributed reasoning engines and reasoning over such data. We perform experiments to evaluate the scalability of these approaches and also study how reasoning is affected by different data aggregation strategies.", "title": "" }, { "docid": "5a61c356940eef5eb18c53a71befbe5b", "text": "Recently, plant construction throughout the world, including nuclear power plant construction, has grown significantly. The scale of Korea’s nuclear power plant construction in particular, has increased gradually since it won a contract for a nuclear power plant construction project in the United Arab Emirates in 2009. However, time and monetary resources have been lost in some nuclear power plant construction sites due to lack of risk management ability. The need to prevent losses at nuclear power plant construction sites has become more urgent because it demands professional skills and large-scale resources. Therefore, in this study, the Analytic Hierarchy Process (AHP) and Fuzzy Analytic Hierarchy Process (FAHP) were applied in order to make comparisons between decision-making methods, to assess the potential risks at nuclear power plant construction sites. To suggest the appropriate choice between two decision-making methods, a survey was carried out. From the results, the importance and the priority of 24 risk factors, classified by process, cost, safety, and quality, were analyzed. The FAHP was identified as a suitable method for risk assessment of nuclear power plant construction, compared with risk assessment using the AHP. These risk factors will be able to serve as baseline data for risk management in nuclear power plant construction projects.", "title": "" }, { "docid": "d5ddc141311afb6050a58be88303b577", "text": "Given the ability to directly manipulate image pixels in the digital input space, an adversary can easily generate imperceptible perturbations to fool a Deep Neural Network (DNN) image classifier, as demonstrated in prior work. In this work, we propose ShapeShifter, an attack that tackles the more challenging problem of crafting physical adversarial perturbations to fool image-based object detectors like Faster R-CNN. Attacking an object detector is more difficult than attacking an image classifier, as it needs to mislead the classification results in multiple bounding boxes with different scales. Extending the digital attack to the physical world adds another layer of difficulty, because it requires the perturbation to be robust enough to survive real-world distortions due to different viewing distances and angles, lighting conditions, and camera limitations. 
We show that the Expectation over Transformation technique, which was originally proposed to enhance the robustness of adversarial perturbations in image classification, can be successfully adapted to the object detection setting. ShapeShifter can generate adversarially perturbed stop signs that are consistently mis-detected by Faster RCNN as other objects, posing a potential threat to autonomous vehicles and other safety-critical computer vision systems.", "title": "" }, { "docid": "609cc8dd7323e817ddfc5314070a68bf", "text": "We present EVO, an event-based visual odometry algorithm. Our algorithm successfully leverages the outstanding properties of event cameras to track fast camera motions while recovering a semidense three-dimensional (3-D) map of the environment. The implementation runs in real time on a standard CPU and outputs up to several hundred pose estimates per second. Due to the nature of event cameras, our algorithm is unaffected by motion blur and operates very well in challenging, high dynamic range conditions with strong illumination changes. To achieve this, we combine a novel, event-based tracking approach based on image-to-model alignment with a recent event-based 3-D reconstruction algorithm in a parallel fashion. Additionally, we show that the output of our pipeline can be used to reconstruct intensity images from the binary event stream, though our algorithm does not require such intensity information. We believe that this work makes significant progress in simultaneous localization and mapping by unlocking the potential of event cameras. This allows us to tackle challenging scenarios that are currently inaccessible to standard cameras.", "title": "" }, { "docid": "7eca894697ee372abe6f67a069dcd910", "text": "Government agencies and consulting companies in charge of pavement management face the challenge of maintaining pavements in serviceable conditions throughout their life from the functional and structural standpoints. For this, the assessment and prediction of the pavement conditions are crucial. This study proposes a neuro-fuzzy model to predict the performance of flexible pavements using the parameters routinely collected by agencies to characterize the condition of an existing pavement. These parameters are generally obtained by performing falling weight deflectometer tests and monitoring the development of distresses on the pavement surface. The proposed hybrid model for predicting pavement performance was characterized by multilayer, feedforward neural networks that led the reasoning process of the IF-THEN fuzzy rules. The results of the neuro-fuzzy model were superior to those of the linear regression model in terms of accuracy in the approximation. The proposed neuro-fuzzy model showed good generalization capability, and the evaluation of the model performance produced satisfactory results, demonstrating the efficiency and potential of these new mathematical modeling techniques.", "title": "" }, { "docid": "60bdd255a19784ed2d19550222e61b69", "text": "Haptic feedback on touch-sensitive displays provides significant benefits in terms of reducing error rates, increasing interaction speed and minimizing visual distraction. This particularly holds true for multitasking situations such as the interaction with mobile devices or touch-based in-vehicle systems. In this paper, we explore how the interaction with tactile touchscreens can be modeled and enriched using a 2+1 state transition model. The model expands an approach presented by Buxton. 
We present HapTouch -- a force-sensitive touchscreen device with haptic feedback that allows the user to explore and manipulate interactive elements using the sense of touch. We describe the results of a preliminary quantitative study to investigate the effects of tactile feedback on the driver's visual attention, driving performance and operating error rate. In particular, we focus on how active tactile feedback allows the accurate interaction with small on-screen elements during driving. Our results show significantly reduced error rates and input time when haptic feedback is given.", "title": "" }, { "docid": "255ff39001f9bbcd7b1e6fe96f588371", "text": "We derive inner and outer bounds on the capacity region for a class of three-user partially connected interference channels. We focus on the impact of topology, interference alignment, and interplay between interference and noise. The representative channels we consider are the ones that have clear interference alignment gain. For these channels, Z-channel type outer bounds are tight to within a constant gap from capacity. We present near-optimal achievable schemes based on rate-splitting, lattice alignment, and successive decoding.", "title": "" }, { "docid": "85b77b88c2a06603267b770dbad8ec73", "text": "Many errors in coreference resolution come from semantic mismatches due to inadequate world knowledge. Errors in named-entity linking (NEL), on the other hand, are often caused by superficial modeling of entity context. This paper demonstrates that these two tasks are complementary. We introduce NECO, a new model for named entity linking and coreference resolution, which solves both problems jointly, reducing the errors made on each. NECO extends the Stanford deterministic coreference system by automatically linking mentions to Wikipedia and introducing new NEL-informed mention-merging sieves. Linking improves mention-detection and enables new semantic attributes to be incorporated from Freebase, while coreference provides better context modeling by propagating named-entity links within mention clusters. Experiments show consistent improvements across a number of datasets and experimental conditions, including over 11% reduction in MUC coreference error and nearly 21% reduction in F1 NEL error on ACE 2004 newswire data.", "title": "" }, { "docid": "a9b366b2b127b093b547f8a10ac05ca5", "text": "Each user session in an e-commerce system can be modeled as a sequence of web pages, indicating how the user interacts with the system and makes his/her purchase. A typical recommendation approach, e.g., Collaborative Filtering, generates its results at the beginning of each session, listing the most likely purchased items. However, such approach fails to exploit current viewing history of the user and hence, is unable to provide a real-time customized recommendation service. In this paper, we build a deep recurrent neural network to address the problem. The network tracks how users browse the website using multiple hidden layers. Each hidden layer models how the combinations of webpages are accessed and in what order. To reduce the processing cost, the network only records a finite number of states, while the old states collapse into a single history state. Our model refreshes the recommendation result each time when user opens a new web page. As user's session continues, the recommendation result is gradually refined. 
Furthermore, we integrate the recurrent neural network with a feedforward network which represents the user-item correlations to increase the prediction accuracy. Our approach has been applied to Kaola (http://www.kaola.com), an e-commerce website powered by the NetEase technologies. It shows a significant improvement over previous recommendation service.", "title": "" } ]
scidocsrr
b8157f13c56e9fe513b5ba5231606b61
Stereotype Threat Effects on Black and White Athletic Performance
[ { "docid": "f5bc721d2b63912307c4ad04fb78dd2c", "text": "When women perform math, unlike men, they risk being judged by the negative stereotype that women have weaker math ability. We call this predicament st reotype threat and hypothesize that the apprehension it causes may disrupt women’s math performance. In Study 1 we demonstrated that the pattern observed in the literature that women underperform on difficult (but not easy) math tests was observed among a highly selected sample of men and women. In Study 2 we demonstrated that this difference in performance could be eliminated when we lowered stereotype threat by describing the test as not producing gender differences. However, when the test was described as producing gender differences and stereotype threat was high, women performed substantially worse than equally qualified men did. A third experiment replicated this finding with a less highly selected population and explored the mediation of the effect. The implication that stereotype threat may underlie gender differences in advanced math performance, even", "title": "" } ]
[ { "docid": "b4a425c86bdd1814d7de6318ba305c58", "text": "There is a natural correlation between the visual and auditive elements of a video. In this work we leverage this connection to learn general and effective models for both audio and video analysis from self-supervised temporal synchronization. We demonstrate that a calibrated curriculum learning scheme, a careful choice of negative examples, and the use of a contrastive loss are critical ingredients to obtain powerful multi-sensory representations from models optimized to discern temporal synchronization of audio-video pairs. Without further finetuning, the resulting audio features achieve performance superior or comparable to the state-of-the-art on established audio classification benchmarks (DCASE2014 and ESC-50). At the same time, our visual subnet provides a very effective initialization to improve the accuracy of video-based action recognition models: compared to learning from scratch, our self-supervised pretraining yields a remarkable gain of +19.9% in action recognition accuracy on UCF101 and a boost of +17.7% on HMDB51.", "title": "" }, { "docid": "273153d0cf32162acb48ed989fa6d713", "text": "This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.", "title": "" }, { "docid": "6075b9f909a5df033d1222685d30b1dc", "text": "Recent advances in high-throughput cDNA sequencing (RNA-seq) can reveal new genes and splice variants and quantify expression genome-wide in a single assay. The volume and complexity of data from RNA-seq experiments necessitate scalable, fast and mathematically principled analysis software. TopHat and Cufflinks are free, open-source software tools for gene discovery and comprehensive expression analysis of high-throughput mRNA sequencing (RNA-seq) data. Together, they allow biologists to identify new genes and new splice variants of known ones, as well as compare gene and transcript expression under two or more conditions. This protocol describes in detail how to use TopHat and Cufflinks to perform such analyses. It also covers several accessory tools and utilities that aid in managing data, including CummeRbund, a tool for visualizing RNA-seq analysis results. Although the procedure assumes basic informatics skills, these tools assume little to no background with RNA-seq analysis and are meant for novices and experts alike. The protocol begins with raw sequencing reads and produces a transcriptome assembly, lists of differentially expressed and regulated genes and transcripts, and publication-quality visualizations of analysis results. 
The protocol's execution time depends on the volume of transcriptome sequencing data and available computing resources but takes less than 1 d of computer time for typical experiments and ∼1 h of hands-on time.", "title": "" }, { "docid": "b56b90d98b4b1b136e283111e9acf732", "text": "Mobile phones are widely used nowadays and during the last years developed from simple phones to small computers with an increasing number of features. These result in a wide variety of data stored on the devices which could be a high security risk in case of unauthorized access. A comprehensive user survey was conducted to get information about what data is really stored on the mobile devices, how it is currently protected and if biometric authentication methods could improve the current state. This paper states the results from about 550 users of mobile devices. The analysis revealed a very low securtiy level of the devices. This is partly due to a low security awareness of their owners and partly due to the low acceptance of the offered authentication method based on PIN. Further results like the experiences with mobile thefts and the willingness to use biometric authentication methods as alternative to PIN authentication are also stated.", "title": "" }, { "docid": "a65166fb5584bf634d841353c442b665", "text": "Although business process management ( ̳BPM‘) is a popular concept, it has not yet been properly theoretically grounded. This leads to problems in identifying both generic and case specific critical success factors of BPM programs. The paper proposes an underlying theoretical framework with the utilization of three theories: contingency, dynamic capabilities and task technology fit. The main premise is that primarily the fit between the business environment and business processes is needed. Then both continuous improvement and the proper fit between business process tasks and information systems must exist. The underlying theory is used to identify critical success factors on a case study from the banking sector.", "title": "" }, { "docid": "a80c83fd7bdf2a8550c80c32b98352ec", "text": "In this paper, we propose an online learning algorithm for optimal execution in the limit order book of a financial asset. Given a certain number of shares to sell and an allocated time window to complete the transaction, the proposed algorithm dynamically learns the optimal number of shares to sell via market orders at prespecified time slots within the allocated time interval. We model this problem as a Markov Decision Process (MDP), which is then solved by dynamic programming. First, we prove that the optimal policy has a specific form, which requires either selling no shares or the maximum allowed amount of shares at each time slot. Then, we consider the learning problem, in which the state transition probabilities are unknown and need to be learned on the fly. We propose a learning algorithm that exploits the form of the optimal policy when choosing the amount to trade. Interestingly, this algorithm achieves bounded regret with respect to the optimal policy computed based on the complete knowledge of the market dynamics. 
Our numerical results on several finance datasets show that the proposed algorithm performs significantly better than the traditional Q-learning algorithm by exploiting the structure of the problem.", "title": "" }, { "docid": "12a8d007ca4dce21675ddead705c7b62", "text": "This paper presents an ethnographic account of the implementation of Lean service redesign methodologies in one UK NHS hospital operating department. It is suggested that this popular management 'technology', with its emphasis on creating value streams and reducing waste, has the potential to transform the social organisation of healthcare work. The paper locates Lean healthcare within wider debates related to the standardisation of clinical practice, the re-configuration of occupational boundaries and the stratification of clinical communities. Drawing on the 'technologies-in-practice' perspective the study is attentive to the interaction of both the intent to transform work and the response of clinicians to this intent as an ongoing and situated social practice. In developing this analysis this article explores three dimensions of social practice to consider the way Lean is interpreted and articulated (rhetoric), enacted in social practice (ritual), and experienced in the context of prevailing lines of power (resistance). Through these interlinked analytical lenses the paper suggests the interaction of Lean and clinical practice remains contingent and open to negotiation. In particular, Lean follows in a line of service improvements that bring to the fore tensions between clinicians and service leaders around the social organisation of healthcare work. The paper concludes that Lean might not be the easy remedy for making both efficiency and effectiveness improvements in healthcare.", "title": "" }, { "docid": "ac1b28346ae9df1dd3b455d113551caf", "text": "The new IEEE 802.11 standard, IEEE 802.11ax, has the challenging goal of serving more Uplink (UL) traffic and users as compared with his predecessor IEEE 802.11ac, enabling consistent and reliable streams of data (average throughput) per station. In this paper we explore several new IEEE 802.11ax UL scheduling mechanisms and compare between the maximum throughputs of unidirectional UDP Multi Users (MU) triadic. The evaluation is conducted based on Multiple-Input-Multiple-Output (MIMO) and Orthogonal Frequency Division Multiple Access (OFDMA) transmission multiplexing format in IEEE 802.11ax vs. the CSMA/CA MAC in IEEE 802.11ac in the Single User (SU) and MU modes for 1, 4, 8, 16, 32 and 64 stations scenario in reliable and unreliable channels. The comparison is conducted as a function of the Modulation and Coding Schemes (MCS) in use. In IEEE 802.11ax we consider two new flavors of acknowledgment operation settings, where the maximum acknowledgment windows are 64 or 256 respectively. In SU scenario the throughputs of IEEE 802.11ax are larger than those of IEEE 802.11ac by 64% and 85% in reliable and unreliable channels respectively. In MU-MIMO scenario the throughputs of IEEE 802.11ax are larger than those of IEEE 802.11ac by 263% and 270% in reliable and unreliable channels respectively. Also, as the number of stations increases, the advantage of IEEE 802.11ax in terms of the access delay also increases.", "title": "" }, { "docid": "8c2c54207fa24358552bc30548bec5bc", "text": "This paper proposes an edge bundling approach applied on parallel coordinates to improve the visualization of cluster information directly from the overview. 
Lines belonging to a cluster are bundled into a single curve between axes, where the horizontal and vertical positioning of the bundling intersection (known as bundling control points) to encode pertinent information about the cluster in a given dimension, such as variance, standard deviation, mean, median, and so on. The hypothesis is that adding this information to the overview improves the visualization overview at the same that it does not prejudice the understanding in other aspects. We have performed tests with participants to compare our approach with classic parallel coordinates and other consolidated bundling technique. The results showed most of the initially proposed hypotheses to be confirmed at the end of the study, as the tasks were performed successfully in the majority of tasks maintaining a low response time in average, as well as having more aesthetic pleasing according to participants' opinion.", "title": "" }, { "docid": "ee0c8eafd5804b215b34a443d95259d4", "text": "Fog computing has emerged as a promising technology that can bring the cloud applications closer to the physical IoT devices at the network edge. While it is widely known what cloud computing is, and how data centers can build the cloud infrastructure and how applications can make use of this infrastructure, there is no common picture on what fog computing and a fog node, as its main building block, really is. One of the first attempts to define a fog node was made by Cisco, qualifying a fog computing system as a “mini-cloud,” located at the edge of the network and implemented through a variety of edge devices, interconnected by a variety, mostly wireless, communication technologies. Thus, a fog node would be the infrastructure implementing the said mini-cloud. Other proposals have their own definition of what a fog node is, usually in relation to a specific edge device, a specific use case or an application. In this paper, we first survey the state of the art in technologies for fog computing nodes as building blocks of fog computing, paying special attention to the contributions that analyze the role edge devices play in the fog node definition. We summarize and compare the concepts, lessons learned from their implementation, and show how a conceptual framework is emerging towards a unifying fog node definition. We focus on core functionalities of a fog node as well as in the accompanying opportunities and challenges towards their practical realization in the near future.", "title": "" }, { "docid": "3c2b68ac95f1a9300585b73ca4b83122", "text": "The success of various applications including robotics, digital content creation, and visualization demand a structured and abstract representation of the 3D world from limited sensor data. Inspired by the nature of human perception of 3D shapes as a collection of simple parts, we explore such an abstract shape representation based on primitives. Given a single depth image of an object, we present 3DPRNN, a generative recurrent neural network that synthesizes multiple plausible shapes composed of a set of primitives. Our generative model encodes symmetry characteristics of common man-made objects, preserves long-range structural coherence, and describes objects of varying complexity with a compact representation. We also propose a method based on Gaussian Fields to generate a large scale dataset of primitive-based shape representations to train our network. 
We evaluate our approach on a wide range of examples and show that it outperforms nearest-neighbor based shape retrieval methods and is on-par with voxelbased generative models while using a significantly reduced parameter space.", "title": "" }, { "docid": "fea6f052c032c09408f967950098947e", "text": "The identification of signals of very recent positive selection provides information about the adaptation of modern humans to local conditions. We report here on a genome-wide scan for signals of very recent positive selection in favor of variants that have not yet reached fixation. We describe a new analytical method for scanning single nucleotide polymorphism (SNP) data for signals of recent selection, and apply this to data from the International HapMap Project. In all three continental groups we find widespread signals of recent positive selection. Most signals are region-specific, though a significant excess are shared across groups. Contrary to some earlier low resolution studies that suggested a paucity of recent selection in sub-Saharan Africans, we find that by some measures our strongest signals of selection are from the Yoruba population. Finally, since these signals indicate the existence of genetic variants that have substantially different fitnesses, they must indicate loci that are the source of significant phenotypic variation. Though the relevant phenotypes are generally not known, such loci should be of particular interest in mapping studies of complex traits. For this purpose we have developed a set of SNPs that can be used to tag the strongest approximately 250 signals of recent selection in each population.", "title": "" }, { "docid": "00f2bb2dd3840379c2442c018407b1c8", "text": "BACKGROUND\nFacebook is a social networking site (SNS) for communication, entertainment and information exchange. Recent research has shown that excessive use of Facebook can result in addictive behavior in some individuals.\n\n\nAIM\nTo assess the patterns of Facebook use in post-graduate students of Yenepoya University and evaluate its association with loneliness.\n\n\nMETHODS\nA cross-sectional study was done to evaluate 100 post-graduate students of Yenepoya University using Bergen Facebook Addiction Scale (BFAS) and University of California and Los Angeles (UCLA) loneliness scale version 3. Descriptive statistics were applied. Pearson's bivariate correlation was done to see the relationship between severity of Facebook addiction and the experience of loneliness.\n\n\nRESULTS\nMore than one-fourth (26%) of the study participants had Facebook addiction and 33% had a possibility of Facebook addiction. There was a significant positive correlation between severity of Facebook addiction and extent of experience of loneliness ( r = .239, p = .017).\n\n\nCONCLUSION\nWith the rapid growth of popularity and user-base of Facebook, a significant portion of the individuals are susceptible to develop addictive behaviors related to Facebook use. Loneliness is a factor which influences addiction to Facebook.", "title": "" }, { "docid": "cdb87a9db48b78e193d9229282bd3b67", "text": "While large-scale automatic grading of student programs for correctness is widespread, less effort has focused on automating feedback for good programming style:} the tasteful use of language features and idioms to produce code that is not only correct, but also concise, elegant, and revealing of design intent. 
We hypothesize that with a large enough (MOOC-sized) corpus of submissions to a given programming problem, we can observe a range of stylistic mastery from naïve to expert, and many points in between, and that we can exploit this continuum to automatically provide hints to learners for improving their code style based on the key stylistic differences between a given learner's submission and a submission that is stylistically slightly better. We are developing a methodology for analyzing and doing feature engineering on differences between submissions, and for learning from instructor-provided feedback as to which hints are most relevant. We describe the techniques used to do this in our prototype, which will be deployed in a residential software engineering course as an alpha test prior to deploying in a MOOC later this year.", "title": "" }, { "docid": "7a8c7f369c060003ed99bb4ff784b687", "text": "An important goal in visual recognition is to devise image representations that are invariant to particular transformations. In this paper, we address this goal with a new type of convolutional neural network (CNN) whose invariance is encoded by a reproducing kernel. Unlike traditional approaches where neural networks are learned either to represent data or for solving a classification task, our network learns to approximate the kernel feature map on training data. Such an approach enjoys several benefits over classical ones. First, by teaching CNNs to be invariant, we obtain simple network architectures that achieve a similar accuracy to more complex ones, while being easy to train and robust to overfitting. Second, we bridge a gap between the neural network literature and kernels, which are natural tools to model invariance. We evaluate our methodology on visual recognition tasks where CNNs have proven to perform well, e.g., digit recognition with the MNIST dataset, and the more challenging CIFAR-10 and STL-10 datasets, where our accuracy is competitive with the state of the art.", "title": "" }, { "docid": "6520be1becd7e446b24ecb2fae6b1d50", "text": "Neural networks in their modern deep learning incarnation have achieved state of the art performance on a wide variety of tasks and domains. A core intuition behind these methods is that they learn layers of features which interpolate between two domains in a series of related parts. The first part of this thesis introduces the building blocks of neural networks for computer vision. It starts with linear models then proceeds to deep multilayer perceptrons and convolutional neural networks, presenting the core details of each. However, the introduction also focuses on intuition by visualizing concrete examples of the parts of a modern network. The second part of this thesis investigates regularization of neural networks. Methods like dropout and others have been proposed to favor certain (empirically better) solutions over others. However, big deep neural networks still overfit very easily. This section proposes a new regularizer called DeCov, which leads to significantly reduced overfitting (difference between train and val performance) and greater generalization, sometimes better than dropout and other times not. The regularizer is based on the cross-covariance of hidden representations and takes advantage of the intuition that different features should try to represent different things, an intuition others have explored with similar losses. 
Experiments across a range of datasets and network architectures demonstrate reduced overfitting due to DeCov while almost always maintaining or increasing generalization performance and often improving performance over dropout.", "title": "" }, { "docid": "879282128be8b423114401f6ec8baf8a", "text": "Yelp is one of the largest online searching and reviewing systems for kinds of businesses, including restaurants, shopping, home services et al. Analyzing the real world data from Yelp is valuable in acquiring the interests of users, which helps to improve the design of the next generation system. This paper targets the evaluation of Yelp dataset, which is provided in the Yelp data challenge. A bunch of interesting results are found. For instance, to reach any one in the Yelp social network, one only needs 4.5 hops on average, which verifies the classical six degree separation theory; Elite user mechanism is especially effective in maintaining the healthy of the whole network; Users who write less than 100 business reviews dominate. Those insights are expected to be considered by Yelp to make intelligent business decisions in the future.", "title": "" }, { "docid": "61ad35eaee012d8c1bddcaeee082fa22", "text": "For realistic simulation it is necessary to thoroughly define and describe light-source characteristics¿especially the light-source geometry and the luminous intensity distribution.", "title": "" }, { "docid": "b6bf6c87040bc4996315fee62acb911b", "text": "The influence of the sleep patterns of 2,259 students, aged 11 to 14 years, on trajectories of depressive symptoms, self-esteem, and grades was longitudinally examined using latent growth cross-domain models. Consistent with previous research, sleep decreased over time. Students who obtained less sleep in sixth grade exhibited lower initial self-esteem and grades and higher initial levels of depressive symptoms. Similarly, students who obtained less sleep over time reported heightened levels of depressive symptoms and decreased self-esteem. Sex of the student played a strong role as a predictor of hours of sleep, self-esteem, and grades. This study underscores the role of sleep in predicting adolescents' psychosocial outcomes and highlights the importance of using idiographic methodologies in the study of developmental processes.", "title": "" }, { "docid": "80563d90bfdccd97d9da0f7276468a43", "text": "An essential aspect of knowing language is knowing the words of that language. This knowledge is usually thought to reside in the mental lexicon, a kind of dictionary that contains information regarding a word's meaning, pronunciation, syntactic characteristics, and so on. In this article, a very different view is presented. In this view, words are understood as stimuli that operate directly on mental states. The phonological, syntactic and semantic properties of a word are revealed by the effects it has on those states.", "title": "" } ]
scidocsrr
e644d6bd9fe2152c7bfef76f6728b2c6
Examining playfulness in adults: Testing its correlates with personality, positive psychological functioning, goal aspirations, and multi-methodically assessed ingenuity
[ { "docid": "059aed9f2250d422d76f3e24fd62bed8", "text": "Single case studies led to the discovery and phenomenological description of Gelotophobia and its definition as the pathological fear of appearing to social partners as a ridiculous object (Titze 1995, 1996, 1997). The aim of the present study is to empirically examine the core assumptions about the fear of being laughed at in a sample comprising a total of 863 clinical and non-clinical participants. Discriminant function analysis yielded that gelotophobes can be separated from other shame-based neurotics, non-shamebased neurotics, and controls. Separation was best for statements specifically describing the gelotophobic symptomatology and less potent for more general questions describing socially avoidant behaviors. Factor analysis demonstrates that while Gelotophobia is composed of a set of correlated elements in homogenous samples, overall the concept is best conceptualized as unidimensional. Predicted and actual group membership converged well in a cross-classification (approximately 69% of correctly classified cases). Overall, it can be concluded that the fear of being laughed at varies tremendously among adults and might hold a key to understanding certain forms", "title": "" }, { "docid": "8c4d4567cf772a76e99aa56032f7e99e", "text": "This paper discusses current perspectives on play and leisure and proposes that if play and leisure are to be accepted as viable occupations, then (a) valid and reliable measures of play must be developed, (b) interventions must be examined for inclusion of the elements of play, and (c) the promotion of play and leisure must be an explicit goal of occupational therapy intervention. Existing tools used by occupational therapists to assess clients' play and leisure are evaluated for the aspects of play and leisure they address and the aspects they fail to address. An argument is presented for the need for an assessment of playfulness, rather than of play or leisure activities. A preliminary model for the development of such an assessment is proposed.", "title": "" } ]
[ { "docid": "51f63ccb338706e59b81cb3dfd36cfc6", "text": "As the first decentralized cryptocurrency, Bitcoin [1] has ignited much excitement, not only for its novel realization of a central bank-free financial instrument, but also as an alternative approach to classical distributed computing problems, such as reaching agreement distributedly in the presence of misbehaving parties, as well as to numerous other applications-contracts, reputation systems, name services, etc. The soundness and security of these applications, however, hinges on the thorough understanding of the fundamental properties of its underlying blockchain data structure, which parties (\"miners\") maintain and try to extend by generating \"proofs of work\" (POW, aka \"cryptographic puzzle\"). In this talk we follow the approach introduced in [2], formulating such fundamental properties of the blockchain, and then showing how applications such as consensus and a robust public transaction ledger can be built ``on top'' of them. The properties are as follows, assuming the adversary's hashing power (our analysis holds against arbitrary attacks) is strictly less than ½ and high network synchrony:\n Common prefix: The blockchains maintained by the honest parties possess a large common prefix. More specifically, if two honest parties \"prune\" (i.e., cut off) k blocks from the end of their local chains, the probability that the resulting pruned chains will not be mutual prefixes of each other drops exponentially in the that parameter.\n Chain quality: We show a bound on the ratio of blocks in the chain of any honest party contributed by malicious parties. In particular, as the adversary's hashing power approaches ½, we show that blockchains are only guaranteed to have few, but still some, blocks contributed by honest parties.\n Chain growth: We quantify the number of blocks that are added to the blockchain during any given number of rounds during the execution of the protocol. (N.B.: This property, which in [2] was proven and used directly in the form of a lemma, was explicitly introduced in [3]. Identifying it as a separate property enables modular proofs of applications' properties.)\n The above properties hold assuming that all parties-honest and adversarial-\"wake up\" and start computing at the same time, or, alternatively, that they compute on a common random string (the \"genesis\" block) only made available at the exact time when the protocol execution is to begin. In this talk we also consider the question of whether such a trusted setup/behavioral assumption is necessary, answering it in the negative by presenting a Bitcoin-like blockchain protocol that is provably secure without trusted setup, and, further, overcomes such lack in a scalable way-i.e., with running time independent of the number of parties [4].\n A direct consequence of our construction above is that consensus can be solved directly by a blockchain protocol without trusted setup assuming an honest majority (in terms of computational power).", "title": "" }, { "docid": "ad58798807256cff2eff9d3befaf290a", "text": "Centrality indices are an essential concept in network analysis. For those based on shortest-path distances the computation is at least quadratic in the number of nodes, since it usually involves solving the single-source shortest-paths (SSSP) problem from every node. Therefore, exact computation is infeasible for many large networks of interest today. Centrality scores can be estimated, however, from a limited number of SSSP computations. 
We present results from an experimental study of the quality of such estimates under various selection strategies for the source vertices. ∗Research supported in part by DFG under grant Br 2158/2-3", "title": "" }, { "docid": "872a79a47e6a4d83e7440ea5e7126dee", "text": "We propose simple and extremely efficient methods for solving the Basis Pursuit problem min{‖u‖1 : Au = f, u ∈ R^n}, which is used in compressed sensing. Our methods are based on Bregman iterative regularization and they give a very accurate solution after solving only a very small number of instances of the unconstrained problem min_{u ∈ R^n} μ‖u‖1 + (1/2)‖Au − f^k‖^2, for given matrix A and vector f^k. We show analytically that this iterative approach yields exact solutions in a finite number of steps, and present numerical results that demonstrate that as few as two to six iterations are sufficient in most cases. Our approach is especially useful for many compressed sensing applications where matrix-vector operations involving A and A^T can be computed by fast transforms. Utilizing a fast fixed-point continuation solver that is solely based on such operations for solving the above unconstrained sub-problem, we were able to solve huge instances of compressed sensing problems quickly on a standard PC.", "title": "" }, { "docid": "5c056ba2e29e8e33c725c2c9dd12afa8", "text": "The large amount of text data which are continuously produced over time in a variety of large scale applications such as social networks results in massive streams of data. Typically massive text streams are created by very large scale interactions of individuals, or by structured creations of particular kinds of content by dedicated organizations. An example in the latter category would be the massive text streams created by news-wire services. Such text streams provide unprecedented challenges to data mining algorithms from an efficiency perspective. In this paper, we review text stream mining algorithms for a wide variety of problems in data mining such as clustering, classification and topic modeling. A recent challenge arises in the context of social streams, which are generated by large social networks such as Twitter. We also discuss a number of future challenges in this area of research.", "title": "" }, { "docid": "3023637fd498bb183dae72135812c304", "text": "computational method for its solution. A Psychological Description of LSA as a Theory of Learning, Memory, and Knowledge We give a more complete description of LSA as a mathematical model later when we use it to simulate lexical acquisition. However, an overall outline is necessary to understand a roughly equivalent psychological theory we wish to present first. The input to LSA is a matrix consisting of rows representing unitary event types by columns representing contexts in which instances of the event types appear. One example is a matrix of unique word types by many individual paragraphs in which the words are encountered, where a cell contains the number of times that a particular word type, say model, appears in a particular paragraph, say this one. After an initial transformation of the cell entries, this matrix is analyzed by a statistical technique called singular value decomposition (SVD) closely akin to factor analysis, which allows event types and individual contexts to be re-represented as points or vectors in a high dimensional abstract space (Golub, Luk, & Overton, 1981).
The final output is a representation from which one can calculate similarity measures between all pairs consisting of either event types or contexts (e.g., word-word, word-paragraph, or paragraph-paragraph similarities). Psychologically, the data that the model starts with are raw, first-order co-occurrence relations between stimuli and the local contexts or episodes in which they occur. The stimuli or event types may be thought of as unitary chunks of perception or memory. The first-order process by which initial pairwise associations are entered and transformed in LSA resembles classical conditioning in that it depends on contiguity or co-occurrence, but weights the result first nonlinearly with local occurrence frequency, then inversely with a function of the number of different contexts in which the particular component is encountered overall and the extent to which its occurrences are spread evenly over contexts. However, there are possibly important differences in the details as currently implemented; in particular, LSA associations are symmetrical; a context is associated with the individual events it contains by the same cell entry as the events are associated with the context. This would not be a necessary feature of the model; it would be possible to make the initial matrix asymmetrical, with a cell indicating the co-occurrence relation, for example, between a word and closely following words. Indeed, Lund and Burgess (in press; Lund, Burgess, & Atchley, 1995), and Schütze (1992a, 1992b), have explored related models in which such data are the input. The first step of the LSA analysis is to transform each cell entry from the number of times that a word appeared in a particular context to the log of that frequency. This approximates the standard empirical growth functions of simple learning. The fact that this compressive function begins anew with each context also yields a kind of spacing effect; the association of A and B is greater if both appear in two different contexts than if they each appear twice in one context. In a second transformation, all cell entries for a given word are divided by the entropy for that word, −Σ p log p over all its contexts. Roughly speaking, this step accomplishes much the same thing as conditioning rules such as those described by Rescorla & Wagner (1972), in that it makes the primary association better represent the informative relation between the entities rather than the mere fact that they occurred together. Somewhat more formally, the inverse entropy measure estimates the degree to which observing the occurrence of a component specifies what context it is in; the larger the entropy of, say, a word, the less information its observation transmits about the places it has occurred, so the less usage-defined meaning it acquires, and conversely, the less the meaning of a particular context is determined by containing the word. It is interesting to note that automatic information retrieval methods (including LSA when used for the purpose) are greatly improved by transformations of this general form, the present one usually appearing to be the best (Harman, 1986).
It does not seem far-fetched to believe that the necessary transform for good information retrieval, retrieval that brings back text corresponding to what a person has in mind when the person offers one or more query words, corresponds to the functional relations in basic associative processes. Anderson (1990) has drawn attention to the analogy between information retrieval in external systems and those in the human mind. It is not clear which way the relationship goes. Does information retrieval in automatic systems work best when it mimics the circumstances that make people think two things are related, or is there a general logic that tends to make them have similar forms? In automatic information retrieval the logic is usually assumed to be that idealized searchers have in mind exactly the same text as they would like the system to find and draw the words in their queries from that text (see Bookstein & Swanson, 1974). [Footnote 2: Although this exploratory process takes some advantage of chance, there is no reason why any number of dimensions should be much better than any other unless some mechanism like the one proposed is at work. In all cases, the model's remaining parameters were fitted only to its input (training) data and not to the criterion (generalization) test.] Then the system's challenge is to estimate the probability that each text in its store is the one that the searcher was thinking about. This characterization, then, comes full circle to the kind of communicative agreement model we outlined above: The sender issues a word chosen to express a meaning he or she has in mind, and the receiver tries to estimate the probability of each of the sender's possible messages. Gallistel (1990) has argued persuasively for the need to separate local conditioning or associative processes from global representation of knowledge. The LSA model expresses such a separation in a very clear and precise way. The initial matrix after transformation to log frequency divided by entropy represents the product of the local or pairwise processes. The subsequent analysis and dimensionality reduction takes all of the previously acquired local information and turns it into a unified representation of knowledge. Thus, the first processing step of the model, modulo its associational symmetry, is a rough approximation to conditioning or associative processes. However, the model's next steps, the singular value decomposition and dimensionality optimization, are not contained as such in any extant psychological theory of learning, although something of the kind may be hinted at in some modern discussions of conditioning and, on a smaller scale and differently interpreted, is often implicit and sometimes explicit in many neural net and spreading-activation architectures. This step converts the transformed associative data into a condensed representation. The condensed representation can be seen as achieving several things, although they are at heart the result of only one mechanism. First, the re-representation captures indirect, higher-order associations. That is, if a particular stimulus, X, (e.g., a word) has been associated with some other stimulus, Y, by being frequently found in joint context (i.e., contiguity), and Y is associated with Z, then the condensation can cause X and Z to have similar representations. However, the strength of the indirect XZ association depends on much more than a combination of the strengths of XY and YZ.
This is because the relation between X and Z also depends, in a wellspecified manner, on the relation of each of the stimuli, X, Y, and Z, to every other entity in the space. In the past, attempts to predict indirect associations by stepwise chaining rules have not been notably successful (see, e.g., Pollio, 1968; Young, 1968). If associations correspond to distances in space, as supposed by LSA, stepwise chaining rules would not be expected to work well; if X is two units from Y and Y is two units from Z, all we know about the distance from X to Z is that it must be between zero and four. But with data about the distances between X, Y, Z, and other points, the estimate of XZ may be greatly improved by also knowing XY and YZ. An alternative view of LSA's effects is the one given earlier, the induction of a latent higher order similarity structure (thus its name) among representations of a large collection of events. Imagine, for example, that every time a stimulus (e.g., a word) is encountered, the distance between its representation and that of every other stimulus that occurs in close proximity to it is adjusted to be slightly smaller. The adjustment is then allowed to percolate through the whole previously constructed structure of relations, each point pulling on its neighbors until all settle into a compromise configuration (physical objects, weather systems, and Hopfield nets do this too; Hopfield, 1982). It is easy to see that the resulting relation between any two representations depends not only on direct experience with them but with everything else ever experienced. Although the current mathematical implementation of LSA does not work in this incremental way, its effects are much the same. The question, then, is whether such a mechanism, when combined with the statistics of experience, produces a faithful reflection of human knowledge. Finally, to anticipate what is developed later, the computational scheme used by LSA for combining and condensing local information into a common", "title": "" }, { "docid": "02d8c55750904b7f4794139bcfa51693", "text": "BACKGROUND\nMore than one-third of deaths during the first five years of life are attributed to undernutrition, which are mostly preventable through economic development and public health measures. To alleviate this problem, it is necessary to determine the nature, magnitude and determinants of undernutrition. However, there is lack of evidence in agro-pastoralist communities like Bule Hora district. Therefore, this study assessed magnitude and factors associated with undernutrition in children who are 6-59 months of age in agro-pastoral community of Bule Hora District, South Ethiopia.\n\n\nMETHODS\nA community based cross-sectional study design was used to assess the magnitude and factors associated with undernutrition in children between 6-59 months. A structured questionnaire was used to collect data from 796 children paired with their mothers. Anthropometric measurements and determinant factors were collected. SPSS version 16.0 statistical software was used for analysis. Bivariate and multivariate logistic regression analyses were conducted to identify factors associated to nutritional status of the children Statistical association was declared significant if p-value was less than 0.05.\n\n\nRESULTS\nAmong study participants, 47.6%, 29.2% and 13.4% of them were stunted, underweight, and wasted respectively. 
Presence of diarrhea in the past two weeks, male sex, uneducated fathers and > 4 children ever born to a mother were significantly associated with being underweight. Presence of diarrhea in the past two weeks, male sex and pre-lacteal feeding were significantly associated with stunting. Similarly, presence of diarrhea in the past two weeks, age at complementary feed was started and not using family planning methods were associated to wasting.\n\n\nCONCLUSION\nUndernutrition is very common in under-five children of Bule Hora district. Factors associated to nutritional status of children in agro-pastoralist are similar to the agrarian community. Diarrheal morbidity was associated with all forms of Protein energy malnutrition. Family planning utilization decreases the risk of stunting and underweight. Feeding practices (pre-lacteal feeding and complementary feeding practice) were also related to undernutrition. Thus, nutritional intervention program in Bule Hora district in Ethiopia should focus on these factors.", "title": "" }, { "docid": "00b85bd052a196b1f02d00f6ad532ed2", "text": "The book Build Your Own Database Driven Website Using PHP & MySQL by Kevin Yank provides a hands-on look at what's involved in building a database-driven Web site. The author does a good job of patiently teaching the reader how to install and configure PHP 5 and MySQL to organize dynamic Web pages and put together a viable content management system. At just over 350 pages, the book is rather small compared to a lot of others on the topic, but it contains all the essentials. The author employs excellent teaching techniques to set up the foundation stone by stone and then grouts everything solidly together later in the book. This book aims at intermediate and advanced Web designers looking to make the leap to server-side programming. The author assumes his readers are comfortable with simple HTML. He provides an excellent introduction to PHP and MySQL (including installation) and explains how to make them work together. The amount of material he covers guarantees that almost any reader will benefit.", "title": "" }, { "docid": "09c4b35650141dfaf6e945dd6460dcf6", "text": "H2 histamine receptors are localized postsynaptically in the CNS. The aim of this study was to evaluate the effects of acute (1 day) and prolonged (7 day) administration of the H2 histamine receptor antagonist, famotidine, on the anticonvulsant activity of conventional antiepileptic drugs (AEDs; valproate, carbamazepine, diphenylhydantoin and phenobarbital) against maximal electroshock (MES)-induced seizures in mice. In addition, the effects of these drugs alone or in combination with famotidine were studied on motor performance and long-term memory. The influence of H2 receptor antagonist on brain concentrations and free plasma levels of the antiepileptic drugs was also evaluated. After acute or prolonged administration of famotidine (at dose of 10mg/kg) the drug raised the threshold for electroconvulsions. No effect was observed on this parameter at lower doses. Famotidine (5mg/kg), given acutely, significantly enhanced the anticonvulsant activity of valproate, which was expressed by a decrease in ED50. After the 7-day treatment, famotidine (5mg/kg) increased the anticonvulsant activity of diphenylhydantoin against MES. Famotidine (5mg/kg), after acute and prolonged administration, combined with valproate, phenobarbital, diphenylhydantoin and carbamazepine did not alter their free plasma levels. 
In contrast, brain concentrations of valproate were elevated for 1-day treatment with famotidine (5mg/kg). Moreover, famotidine co-applied with AEDs, given prolonged, worsened motor coordination in mice treated with carbamazepine or diphenylhydantoin. In contrast this histamine antagonist, did not impair the performance of mice evaluated in the long-term memory task. The results of this study indicate that famotidine modifies the anticonvulsant activity of some antiepileptic drugs.", "title": "" }, { "docid": "984f7a2023a14efbbd5027abfc12a586", "text": "Name ambiguity stems from the fact that many people or objects share identical names in the real world. Such name ambiguity decreases the performance of document retrieval, Web search, information integration, and may cause confusion in other applications. Due to the same name spellings and lack of information, it is a nontrivial task to distinguish them accurately. In this article, we focus on investigating the problem in digital libraries to distinguish publications written by authors with identical names. We present an effective framework named GHOST (abbreviation for GrapHical framewOrk for name diSambiguaTion), to solve the problem systematically. We devise a novel similarity metric, and utilize only one type of attribute (i.e., coauthorship) in GHOST. Given the similarity matrix, intermediate results are grouped into clusters with a recently introduced powerful clustering algorithm called Affinity Propagation. In addition, as a complementary technique, user feedback can be used to enhance the performance. We evaluated the framework on the real DBLP and PubMed datasets, and the experimental results show that GHOST can achieve both high precision and recall.", "title": "" }, { "docid": "88a8f162017f80c17be58faad16a6539", "text": "Instruction List (IL) is a simple typed assembly language commonly used in embedded control. There is little tool support for IL and, although defined in the IEC 61131-3 standard, there is no formal semantics. In this work we develop a formal operational semantics. Moreover, we present an abstract semantics, which allows approximative program simulation for a (possibly infinte) set of inputs in one simulation run. We also extended this framework to an abstract interpretation based analysis, which is implemented in our tool Homer. All these analyses can be carried out without knowledge of formal methods, which is typically not present in the IL community.", "title": "" }, { "docid": "40bc405aaec0fd8563de84e163091325", "text": "The extremely tight binding between biotin and avidin or streptavidin makes labeling proteins with biotin a useful tool for many applications. BirA is the Escherichia coli biotin ligase that site-specifically biotinylates a lysine side chain within a 15-amino acid acceptor peptide (also known as Avi-tag). As a complementary approach to in vivo biotinylation of Avi-tag-bearing proteins, we developed a protocol for producing recombinant BirA ligase for in vitro biotinylation. The target protein was expressed as both thioredoxin and MBP fusions, and was released from the corresponding fusion by TEV protease. The liberated ligase was separated from its carrier using HisTrap HP column. We obtained 24.7 and 27.6 mg BirA ligase per liter of culture from thioredoxin and MBP fusion constructs, respectively. The recombinant enzyme was shown to be highly active in catalyzing in vitro biotinylation. 
The described protocol provides an effective means for making BirA ligase that can be used for biotinylation of different Avi-tag-bearing substrates.", "title": "" }, { "docid": "86177ff4fbc089fde87d1acd8452d322", "text": "Age of acquisition (AoA) effects have been used to support the notion of a critical period for first language acquisition. In this study, we examine AoA effects in deaf British Sign Language (BSL) users via a grammaticality judgment task. When English reading performance and nonverbal IQ are factored out, results show that accuracy of grammaticality judgement decreases as AoA increases, until around age 8, thus showing the unique effect of AoA on grammatical judgement in early learners. No such effects were found in those who acquired BSL after age 8. These late learners appear to have first language proficiency in English instead, which may have been used to scaffold learning of BSL as a second language later in life.", "title": "" }, { "docid": "d9830ad99cc9339d62f3c3f5ec1d460a", "text": "The notion of value and of value creation has raised interest over the last 30 years for both researchers and practitioners. Although several studies have been conducted in marketing, value remains and elusive and often ill-defined concept. A clear understanding of value and value determinants can increase the awareness in strategic decisions and pricing choices. Objective of this paper is to preliminary discuss the main kinds of entity that an ontology of economic value should deal with.", "title": "" }, { "docid": "bc1ff96ebc41bc3040bb254f1620b190", "text": "The paper presents a new generation of torque-controlled li ghtweight robots (LWR) developed at the Institute of Robotics and Mechatronics of the German Aerospace Center . I order to act in unstructured environments and interact with humans, the robots have design features an d co trol/software functionalities which distinguish them from classical robots, such as: load-to-weight ratio o f 1:1, torque sensing in the joints, active vibration damping, sensitive collision detection, as well as complia nt control on joint and Cartesian level. Due to the partially unknown properties of the environment, robustne s of planning and control with respect to environmental variations is crucial. After briefly describing the main har dware features, the paper focuses on showing how joint torque sensing (as a main feature of the robot) is conse quently used for achieving the above mentioned performance, safety, and robustness properties.", "title": "" }, { "docid": "d0b287d0bd41dedbbfa3357653389e9c", "text": "Credit scoring model have been developed by banks and researchers to improve the process of assessing credit worthiness during the credit evaluation process. The objective of credit scoring models is to assign credit risk to either a ‘‘good risk’’ group that is likely to repay financial obligation or a ‘‘bad risk’’ group who has high possibility of defaulting on the financial obligation. Construction of credit scoring models requires data mining techniques. Using historical data on payments, demographic characteristics and statistical techniques, credit scoring models can help identify the important demographic characteristics related to credit risk and provide a score for each customer. This paper illustrates using data mining to improve assessment of credit worthiness using credit scoring models. 
Due to privacy concerns and unavailability of real financial data from banks this study applies the credit scoring techniques using data of payment history of members from a recreational club. The club has been facing a problem of rising number in defaulters in their monthly club subscription payments. The management would like to have a model which they can deploy to identify potential defaulters. The classification performance of credit scorecard model, logistic regression model and decision tree model were compared. The classification error rates for credit scorecard model, logistic regression and decision tree were 27.9%, 28.8% and 28.1%, respectively. Although no model outperforms the other, scorecards are relatively much easier to deploy in practical applications. 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "8d21369604ad890704d535785c8e3171", "text": "With the integration of advanced computing and communication technologies, smart grid is considered as the next-generation power system, which promises self healing, resilience, sustainability, and efficiency to the energy critical infrastructure. The smart grid innovation brings enormous challenges and initiatives across both industry and academia, in which the security issue emerges to be a critical concern. In this paper, we present a survey of recent security advances in smart grid, by a data driven approach. Compared with existing related works, our survey is centered around the security vulnerabilities and solutions within the entire lifecycle of smart grid data, which are systematically decomposed into four sequential stages: 1) data generation; 2) data acquisition; 3) data storage; and 4) data processing. Moreover, we further review the security analytics in smart grid, which employs data analytics to ensure smart grid security. Finally, an effort to shed light on potential future research concludes this paper.", "title": "" }, { "docid": "7e8f116433e530032d31938703af1cd3", "text": "Background. This systematic review and meta-analysis Tathiane Larissa Lenzi, MSc, PhD; Anelise Fernandes Montagner, MSc, PhD; Fabio Zovico Maxnuck Soares, PhD; Rachel de Oliveira Rocha, MSc, PhD evaluated the effectiveness of professional topical fluoride application (gels or varnishes) on the reversal treatment of incipient enamel carious lesions in primary or permanent", "title": "" }, { "docid": "8324dc0dfcfb845739a22fb9321d5482", "text": "In this paper, we study deep generative models for effective unsupervised learning. We propose VGAN, which works by minimizing a variational lower bound of the negative log likelihood (NLL) of an energy based model (EBM), where the model density p(x) is approximated by a variational distribution q(x) that is easy to sample from. The training of VGAN takes a two step procedure: given p(x), q(x) is updated to maximize the lower bound; p(x) is then updated one step with samples drawn from q(x) to decrease the lower bound. VGAN is inspired by the generative adversarial networks (GANs), where p(x) corresponds to the discriminator and q(x) corresponds to the generator, but with several notable differences. We hence name our model variational GANs (VGANs). VGAN provides a practical solution to training deep EBMs in high dimensional space, by eliminating the need of MCMC sampling. From this view, we are also able to identify causes to the difficulty of training GANs and propose viable solutions. 
scidocsrr
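As an illustrative aside to the LSA passage in the row above: that passage describes a concrete pipeline, take the log of each word-by-context count, divide each word's entries by its entropy over contexts, then apply a truncated singular value decomposition to obtain the condensed representation. The sketch below follows that description literally in NumPy. The function names, the dimension k, and the exact entropy normalization are assumptions made for illustration (practical LSA implementations often use a smoothed log-entropy weighting rather than plain division), not the procedure of any specific system cited in the passages.

```python
import numpy as np

def lsa_embed(counts, k=100):
    """Minimal sketch of the pipeline described in the passage above:
    log of local frequency, inverse-entropy weighting per word, then a
    rank-k truncated SVD giving condensed word and context vectors."""
    counts = np.asarray(counts, dtype=float)      # shape: (n_words, n_contexts)
    X = np.log(counts + 1.0)                      # compressive log transform
    row_sums = np.maximum(counts.sum(axis=1, keepdims=True), 1e-12)
    p = counts / row_sums                         # each word's distribution over contexts
    plogp = p * np.log(np.where(p > 0, p, 1.0))   # treat 0 log 0 as 0
    entropy = -plogp.sum(axis=1)                  # -sum p log p for each word
    X = X / np.maximum(entropy, 1e-12)[:, None]   # divide each row by its entropy
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = min(k, s.size)
    word_vecs = U[:, :k] * s[:k]                  # word representations
    context_vecs = Vt[:k, :].T * s[:k]            # context (e.g., paragraph) representations
    return word_vecs, context_vecs

def cosine(a, b):
    """Similarity measure between any two of the resulting vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

Word-word, word-paragraph, and paragraph-paragraph similarities, as described in the passage, then reduce to cosines between the corresponding rows of word_vecs and context_vecs.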
0a8ffc3e525a9e15863c7e0d84c7a2d0
SPECTRAL BASIS NEURAL NETWORKS FOR REAL-TIME TRAVEL TIME FORECASTING
[ { "docid": "727a97b993098aa1386e5bfb11a99d4b", "text": "Inevitably, reading is one of the requirements to be undergone. To improve the performance and quality, someone needs to have something new every day. It will suggest you to have more inspirations, then. However, the needs of inspirations will make you searching for some sources. Even from the other people experience, internet, and many books. Books and internet are the recommended media to help you improving your quality and performance.", "title": "" }, { "docid": "8b1b0ee79538a1f445636b0798a0c7ca", "text": "Much of the current activity in the area of intelligent vehicle-highway systems (IVHS) focuses on one simple objective: to collect more data. Clearly, improvements in sensor technology and communication systems will allow transportation agencies to more closely monitor the condition of the surface transportation system. However, monitoring alone cannot improve the safety or efficiency of the system. It is imperative that surveillance data be used to manage the system in a proactive rather than a reactive manner. 'Proactive traffic management will require the ability to predict traffic conditions. Previous predictive modeling approaches can be grouped into three categories: (a) historical, data-based algorithms; (b) time-series models; and (c) simulations. A relatively new mathematical model, the neural network, offers an attractive alternative because neural networks can model undefined, complex nonlinear surfaces. In a comparison of a backpropagation neural network model with the more traditional approaches of an historical, data-based algorithm and a time-series model, the backpropagation model· was clearly superior, although all three models did an adequate job of predicting future traffic volumes. The backpropagation model was more responsive to dynamic conditions than the historical, data-based algorithm, and it did not experience the lag and overprediction characteristics of the time-series model. Given these advantages and the backpropagation model's ability to run in a parallel computing environment, it appears that such neural network prediction models hold considerable potential for use in real-time IVHS applications.", "title": "" } ]
[ { "docid": "b01b7d382f534812f07faaaa1442b3f9", "text": "In this paper, we first establish new relationships in matrix forms among discrete Fourier transform (DFT), generalized DFT (GDFT), and various types of discrete cosine transform (DCT) and discrete sine transform (DST) matrices. Two new independent tridiagonal commuting matrices for each of DCT and DST matrices of types I, IV, V, and VIII are then derived from the existing commuting matrices of DFT and GDFT. With these new commuting matrices, the orthonormal sets of Hermite-like eigenvectors for DCT and DST matrices can be determined and the discrete fractional cosine transform (DFRCT) and the discrete fractional sine transform (DFRST) are defined. The relationships among the discrete fractional Fourier transform (DFRFT), fractional GDFT, and various types of DFRCT and DFRST are developed to reduce computations for DFRFT and fractional GDFT.", "title": "" }, { "docid": "d60fb42ca7082289c907c0e2e2c343fc", "text": "As mentioned in the paper, the direct optimization of group assignment variables with reduced gradients yields faster convergence than optimization via softmax reparametrization. Figure 1 shows the distribution plots, which are provided by TensorFlow, of class-to-group assignments using two methods. Despite starting with lower variance, when the distribution of group assignment variables diverged to", "title": "" }, { "docid": "7380419cc9c5eac99e8d46e73df78285", "text": "This paper discusses the classification of books purely based on cover image and title, without prior knowledge or context of author and origin. Several methods were implemented to assess the ability to distinguish books based on only these two characteristics. First we used a color-based distribution approach. Then we implemented transfer learning with convolutional neural networks on the cover image along with natural language processing on the title text. We found that image and text modalities yielded similar accuracy which indicate that we have reached a certain threshold in distinguishing between the genres that we have defined. This was confirmed by the accuracy being quite close to the human oracle accuracy.", "title": "" }, { "docid": "793d41551a918a113f52481ff3df087e", "text": "In this paper, we propose a novel deep captioning framework called Attention-based multimodal recurrent neural network with Visual Concept Transfer Mechanism (A-VCTM). There are three advantages of the proposed A-VCTM. (1) A multimodal layer is used to integrate the visual representation and context representation together, building a bridge that connects context information with visual information directly. (2) An attention mechanism is introduced to lead the model to focus on the regions corresponding to the next word to be generated (3) We propose a visual concept transfer mechanism to generate novel visual concepts and enrich the description sentences. Qualitative and quantitative results on two standard benchmarks, MSCOCO and Flickr30K show the effectiveness and practicability of the proposed A-VCTM framework.", "title": "" }, { "docid": "8c0d117602ecadee24215f5529e527c6", "text": "We present the first open-set language identification experiments using one-class classification models. We first highlight the shortcomings of traditional feature extraction methods and propose a hashing-based feature vectorization approach as a solution. 
Using a dataset of 10 languages from different writing systems, we train a One-Class Support Vector Machine using only a monolingual corpus for each language. Each model is evaluated against a test set of data from all 10 languages and we achieve an average F-score of 0.99, demonstrating the effectiveness of this approach for open-set language identification.", "title": "" }, { "docid": "478aa46b9dafbc111c1ff2cdb03a5a77", "text": "This paper presents results from recent work using structured light laser profile imaging to create high resolution bathymetric maps of underwater archaeological sites. Documenting the texture and structure of submerged sites is a difficult task and many applicable acoustic and photographic mapping techniques have recently emerged. This effort was completed to evaluate laser profile imaging in comparison to stereo imaging and high frequency multibeam mapping. A ROV mounted camera and inclined 532 nm sheet laser were used to create profiles of the bottom that were then merged into maps using platform navigation data. These initial results show very promising resolution in comparison to multibeam and stereo reconstructions, particularly in low contrast scenes. At the test sites shown here there were no significant complications related to scattering or attenuation of the laser sheet by the water. The resulting terrain was gridded at 0.25 cm and shows overall centimeter level definition. The largest source of error was related to the calibration of the laser and camera geometry. Results from three small areas show the highest resolution 3D models of a submerged archaeological site to date and demonstrate that laser imaging will be a viable method for accurate three dimensional site mapping and documentation.", "title": "" }, { "docid": "2876086e4431e8607d5146f14f0c29dc", "text": "Vascular ultrasonography has an important role in the diagnosis and management of venous disease. The venous system, however, is more complex and variable compared to the arterial system due to its frequent anatomical variations. This often becomes quite challenging for sonographers. This paper discusses the anatomy of the long saphenous vein and its anatomical variations accompanied by sonograms and illustrations.", "title": "" }, { "docid": "d362b36e0c971c43856a07b7af9055f3", "text": "s (New York: ACM), pp. 1617 – 20. MASLOW, A.H., 1954,Motivation and personality (New York: Harper). MCDONAGH, D., HEKKERT, P., VAN ERP, J. and GYI, D. (Eds), 2003, Design and Emotion: The Experience of Everyday Things (London: Taylor & Francis). MILLARD, N., HOLE, L. and CROWLE, S., 1999, Smiling through: motivation at the user interface. In Proceedings of the HCI International’99, Volume 2 (pp. 824 – 8) (Mahwah, NJ, London: Lawrence Erlbaum Associates). NORMAN, D., 2004a, Emotional design: Why we love (or hate) everyday things (New York: Basic Books). NORMAN, D., 2004b, Introduction to this special section on beauty, goodness, and usability. Human Computer Interaction, 19, pp. 311 – 18. OVERBEEKE, C.J., DJAJADININGRAT, J.P., HUMMELS, C.C.M. and WENSVEEN, S.A.G., 2002, Beauty in Usability: Forget about ease of use! In Pleasure with products: Beyond usability, W. Green and P. Jordan (Eds), pp. 9 – 18 (London: Taylor & Francis). 96 M. Hassenzahl and N. Tractinsky D ow nl oa de d by [ M as se y U ni ve rs ity L ib ra ry ] at 2 1: 34 2 3 Ju ly 2 01 1 PICARD, R., 1997, Affective computing (Cambridge, MA: MIT Press). PICARD, R. 
and KLEIN, J., 2002, Computers that recognise and respond to user emotion: theoretical and practical implications. Interacting with Computers, 14, pp. 141 – 69. POSTREL, V., 2002, The substance of style (New York: Harper Collins). SELIGMAN, M.E.P. and CSIKSZENTMIHALYI, M., 2000, Positive Psychology: An Introduction. American Psychologist, 55, pp. 5 – 14. SHELDON, K.M., ELLIOT, A.J., KIM, Y. and KASSER, T., 2001, What is satisfying about satisfying events? Testing 10 candidate psychological needs. Journal of Personality and Social Psychology, 80, pp. 325 – 39. SINGH, S.N. and DALAL, N.P., 1999, Web home pages as advertisements. Communications of the ACM, 42, pp. 91 – 8. SUH, E., DIENER, E. and FUJITA, F., 1996, Events and subjective well-being: Only recent events matter. Journal of Personality and Social Psychology,", "title": "" }, { "docid": "47ac4b546fe75f2556a879d6188d4440", "text": "There is great interest in exploiting the opportunity provided by cloud computing platforms for large-scale analytics. Among these platforms, Apache Spark is growing in popularity for machine learning and graph analytics. Developing efficient complex analytics in Spark requires deep understanding of both the algorithm at hand and the Spark API or subsystem APIs (e.g., Spark SQL, GraphX). Our BigDatalog system addresses the problem by providing concise declarative specification of complex queries amenable to efficient evaluation. Towards this goal, we propose compilation and optimization techniques that tackle the important problem of efficiently supporting recursion in Spark. We perform an experimental comparison with other state-of-the-art large-scale Datalog systems and verify the efficacy of our techniques and effectiveness of Spark in supporting Datalog-based analytics.", "title": "" }, { "docid": "587f1510411636090bc192b1b9219b58", "text": "Creativity can be considered one of the key competencies for the twenty-first century. It provides us with the capacity to deal with the opportunities and challenges that are part of our complex and fast-changing world. The question as to what facilitates creative cognition-the ability to come up with creative ideas, problem solutions and products-is as old as the human sciences, and various means to enhance creative cognition have been studied. Despite earlier scientific studies demonstrating a beneficial effect of music on cognition, the effect of music listening on creative cognition has remained largely unexplored. The current study experimentally tests whether listening to specific types of music (four classical music excerpts systematically varying on valance and arousal), as compared to a silence control condition, facilitates divergent and convergent creativity. Creativity was higher for participants who listened to 'happy music' (i.e., classical music high on arousal and positive mood) while performing the divergent creativity task, than for participants who performed the task in silence. No effect of music was found for convergent creativity. In addition to the scientific contribution, the current findings may have important practical implications. 
Music listening can be easily integrated into daily life and may provide an innovative means to facilitate creative cognition in an efficient way in various scientific, educational and organizational settings when creative thinking is needed.", "title": "" }, { "docid": "cdf2235bea299131929700406792452c", "text": "Real-time detection of traffic signs, the task of pinpointing a traffic sign's location in natural images, is a challenging computer vision task of high industrial relevance. Various algorithms have been proposed, and advanced driver assistance systems supporting detection and recognition of traffic signs have reached the market. Despite the many competing approaches, there is no clear consensus on what the state-of-the-art in this field is. This can be accounted to the lack of comprehensive, unbiased comparisons of those methods. We aim at closing this gap by the “German Traffic Sign Detection Benchmark” presented as a competition at IJCNN 2013 (International Joint Conference on Neural Networks). We introduce a real-world benchmark data set for traffic sign detection together with carefully chosen evaluation metrics, baseline results, and a web-interface for comparing approaches. In our evaluation, we separate sign detection from classification, but still measure the performance on relevant categories of signs to allow for benchmarking specialized solutions. The considered baseline algorithms represent some of the most popular detection approaches such as the Viola-Jones detector based on Haar features and a linear classifier relying on HOG descriptors. Further, a recently proposed problem-specific algorithm exploiting shape and color in a model-based Houghlike voting scheme is evaluated. Finally, we present the best-performing algorithms of the IJCNN competition.", "title": "" }, { "docid": "e33d34d0fbc19dbee009134368e40758", "text": "Quantum metrology exploits quantum phenomena to improve the measurement sensitivity. Theoretical analysis shows that quantum measurement can break through the standard quantum limits and reach super sensitivity level. Quantum radar systems based on quantum measurement can fufill not only conventional target detection and recognition tasks but also capable of detecting and identifying the RF stealth platform and weapons systems. The theoretical basis, classification, physical realization of quantum radar is discussed comprehensively in this paper. And the technology state and open questions of quantum radars is reviewed at the end.", "title": "" }, { "docid": "06b4bfebe295e3dceadef1a842b2e898", "text": "Constant changes in the economic environment, where globalization and the development of the knowledge economy act as drivers, are systematically pushing companies towards the challenge of accessing external markets. Web localization constitutes a new field of study and professional intervention. From the translation perspective, localization equates to the website being adjusted to the typological, discursive and genre conventions of the target culture, adapting that website to a different language and culture. This entails much more than simply translating the content of the pages. The content of a webpage is made up of text, images and other multimedia elements, all of which have to be translated and subjected to cultural adaptation. A case study has been carried out to analyze the current presence of localization within Spanish SMEs from the chemical sector. 
Two types of indicator have been established for evaluating the sample: indicators for evaluating company websites (with a Likert scale from 0–4) and indicators for evaluating web localization (0–2 scale). The results show overall website quality is acceptable (2.5 points out of 4). The higher rating has been obtained by the system quality (with 2.9), followed by information quality (2.7 points) and, lastly, service quality (1.9 points). In the web localization evaluation, the contact information aspects obtain 1.4 points, the visual aspect 1.04, and the navigation aspect was the worse considered (0.37). These types of analysis facilitate the establishment of practical recommendations aimed at SMEs in order to increase their international presence through the localization of their websites.", "title": "" }, { "docid": "3cae5c0440536b95cf1d0273071ad046", "text": "Android platform adopts permissions to protect sensitive resources from untrusted apps. However, after permissions are granted by users at install time, apps could use these permissions (sensitive resources) with no further restrictions. Thus, recent years have witnessed the explosion of undesirable behaviors in Android apps. An important part in the defense is the accurate analysis of Android apps. However, traditional syscall-based analysis techniques are not well-suited for Android, because they could not capture critical interactions between the application and the Android system.\n This paper presents VetDroid, a dynamic analysis platform for reconstructing sensitive behaviors in Android apps from a novel permission use perspective. VetDroid features a systematic framework to effectively construct permission use behaviors, i.e., how applications use permissions to access (sensitive) system resources, and how these acquired permission-sensitive resources are further utilized by the application. With permission use behaviors, security analysts can easily examine the internal sensitive behaviors of an app. Using real-world Android malware, we show that VetDroid can clearly reconstruct fine-grained malicious behaviors to ease malware analysis. We further apply VetDroid to 1,249 top free apps in Google Play. VetDroid can assist in finding more information leaks than TaintDroid, a state-of-the-art technique. In addition, we show how we can use VetDroid to analyze fine-grained causes of information leaks that TaintDroid cannot reveal. Finally, we show that VetDroid can help identify subtle vulnerabilities in some (top free) applications otherwise hard to detect.", "title": "" }, { "docid": "48a8cfc2ac8c8c63bbd15aba5a830ef9", "text": "We extend prior research on masquerade detection using UNIX commands issued by users as the audit source. Previous studies using multi-class training requires gathering data from multiple users to train specific profiles of self and non-self for each user. Oneclass training uses data representative of only one user. We apply one-class Naïve Bayes using both the multivariate Bernoulli model and the Multinomial model, and the one-class SVM algorithm. The result shows that oneclass training for this task works as well as multi-class training, with the great practical advantages of collecting much less data and more efficient training. One-class SVM using binary features performs best among the oneclass training algorithms.", "title": "" }, { "docid": "cf506587f2699d88e4a2e0be36ccac41", "text": "A complete list of the titles in this series appears at the end of this volume. 
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. Wiley also publishes its books in a variety of electronic formats. Some content that appears in print, however, may not be available in electronic format.", "title": "" }, { "docid": "89c85642fc2e0b1f10c9a13b19f1d833", "text": "Many current successful Person Re-Identification(ReID) methods train a model with the softmax loss function to classify images of different persons and obtain the feature vectors at the same time. However, the underlying feature embedding space is ignored. In this paper, we use a modified softmax function, termed Sphere Softmax, to solve the classification problem and learn a hypersphere manifold embedding simultaneously. A balanced sampling strategy is also introduced. Finally, we propose a convolutional neural network called SphereReID adopting Sphere Softmax and training a single model end-to-end with a new warming-up learning rate schedule on four challenging datasets including Market-1501, DukeMTMC-reID, CHHK-03, and CUHK-SYSU. Experimental results demonstrate that this single model outperforms the state-of-the-art methods on all four datasets without fine-tuning or reranking. For example, it achieves 94.4% rank-1 accuracy on Market-1501 and 83.9% rank-1 accuracy on DukeMTMC-reID. The code and trained weights of our model will be released.", "title": "" }, { "docid": "fee96195e50e7418b5d63f8e6bd07907", "text": "Optimal power flow (OPF) is considered for microgrids, with the objective of minimizing either the power distribution losses, or, the cost of power drawn from the substation and supplied by distributed generation (DG) units, while effecting voltage regulation. The microgrid is unbalanced, due to unequal loads in each phase and non-equilateral conductor spacings on the distribution lines. Similar to OPF formulations for balanced systems, the considered OPF problem is nonconvex. Nevertheless, a semidefinite programming (SDP) relaxation technique is advocated to obtain a convex problem solvable in polynomial-time complexity. Enticingly, numerical tests demonstrate the ability of the proposed method to attain the globally optimal solution of the original nonconvex OPF. To ensure scalability with respect to the number of nodes, robustness to isolated communication outages, and data privacy and integrity, the proposed SDP is solved in a distributed fashion by resorting to the alternating direction method of multipliers. 
The resulting algorithm entails iterative message-passing among groups of consumers and guarantees faster convergence compared to competing alternatives.", "title": "" }, { "docid": "704d729295cddd358eba5eefdf0bdee4", "text": "Remarkable advances in instrument technology, automation and computer science have greatly simplified many aspects of previously tedious tasks in laboratory diagnostics, creating a greater volume of routine work, and significantly improving the quality of results of laboratory testing. Following the development and successful implementation of high-quality analytical standards, analytical errors are no longer the main factor influencing the reliability and clinical utilization of laboratory diagnostics. Therefore, additional sources of variation in the entire laboratory testing process should become the focus for further and necessary quality improvements. Errors occurring within the extra-analytical phases are still the prevailing source of concern. Accordingly, lack of standardized procedures for sample collection, including patient preparation, specimen acquisition, handling and storage, account for up to 93% of the errors currently encountered within the entire diagnostic process. The profound awareness that complete elimination of laboratory testing errors is unrealistic, especially those relating to extra-analytical phases that are harder to control, highlights the importance of good laboratory practice and compliance with the new accreditation standards, which encompass the adoption of suitable strategies for error prevention, tracking and reduction, including process redesign, the use of extra-analytical specifications and improved communication among caregivers.", "title": "" }, { "docid": "e05b1b6e1ca160b06e36b784df30b312", "text": "The vision of the MDSD is an era of software engineering where modelling completely replaces programming i.e. the systems are entirely generated from high-level models, each one specifying a different view of the same system. The MDSD can be seen as the new generation of visual programming languages which provides methods and tools to streamline the process of software engineering. Productivity of the development process is significantly improved by the MDSD approach and it also increases the quality of the resulting software system. The MDSD is particularly suited for those software applications which require highly specialized technical knowledge due to the involvement of complex technologies and the large number of complex and unmanageable standards. In this paper, an overview of the MDSD is presented; the working styles and the main concepts are illustrated in detail.", "title": "" } ]
scidocsrr
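As an illustrative aside to the travel-time forecasting row above: its passages argue that a simple backpropagation network fed recent traffic observations outperforms historical-average and time-series baselines. The sketch below is a generic one-hidden-layer backpropagation regressor of that kind, not the spectral basis architecture named in the query; the hyperparameters, the assumed feature layout (lagged travel times or volumes), and the function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_mlp(X, y, hidden=16, lr=0.01, epochs=2000):
    """Tiny one-hidden-layer network trained by backpropagation on
    mean-squared error; X holds lagged travel-time/volume features."""
    n, d = X.shape
    W1 = rng.normal(0.0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.1, (hidden, 1)); b2 = np.zeros(1)
    y = y.reshape(-1, 1)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)            # hidden layer
        pred = H @ W2 + b2                  # linear output for regression
        err = (pred - y) / n                # gradient of 0.5 * MSE w.r.t. pred
        gW2 = H.T @ err;  gb2 = err.sum(axis=0)
        dH = (err @ W2.T) * (1.0 - H ** 2)  # backprop through tanh
        gW1 = X.T @ dH;   gb1 = dH.sum(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return W1, b1, W2, b2

def predict(params, X):
    W1, b1, W2, b2 = params
    return (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
```

A prediction for the next interval would come from predict(params, X_new), where each row of X_new stacks the most recent observations for the link of interest.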
f9e8d0990c1a6f0dec888476d13276bc
Music classification using extreme learning machines
[ { "docid": "10d53a05fcfb93231ab100be7eeb6482", "text": "We present a computer audition system that can both annotate novel audio tracks with semantically meaningful words and retrieve relevant tracks from a database of unlabeled audio content given a text-based query. We consider the related tasks of content-based audio annotation and retrieval as one supervised multiclass, multilabel problem in which we model the joint probability of acoustic features and words. We collect a data set of 1700 human-generated annotations that describe 500 Western popular music tracks. For each word in a vocabulary, we use this data to train a Gaussian mixture model (GMM) over an audio feature space. We estimate the parameters of the model using the weighted mixture hierarchies expectation maximization algorithm. This algorithm is more scalable to large data sets and produces better density estimates than standard parameter estimation techniques. The quality of the music annotations produced by our system is comparable with the performance of humans on the same task. Our ldquoquery-by-textrdquo system can retrieve appropriate songs for a large number of musically relevant words. We also show that our audition system is general by learning a model that can annotate and retrieve sound effects.", "title": "" }, { "docid": "b97c9e8238f74539e8a17dcffecdd35f", "text": "This paper presents a novel approach to the task of automatic music genre classification which is based on multiple feature vectors and ensemble of classifiers. Multiple feature vectors are extracted from a single music piece. First, three 30-second music segments, one from the beginning, one from the middle and one from end part of a music piece are selected and feature vectors are extracted from each segment. Individual classifiers are trained to account for each feature vector extracted from each music segment. At the classification, the outputs provided by each individual classifier are combined through simple combination rules such as majority vote, max, sum and product rules, with the aim of improving music genre classification accuracy. Experiments carried out on a large dataset containing more than 3,000 music samples from ten different Latin music genres have shown that for the task of automatic music genre classification, the features extracted from the middle part of the music provide better results than using the segments from the beginning or end part of the music. Furthermore, the proposed ensemble approach, which combines the multiple feature vectors, provides better accuracy than using single classifiers and any individual music segment.", "title": "" }, { "docid": "bf9e56e0e125e922de95381fb5520569", "text": "Today, many private households as well as broadcasting or film companies own large collections of digital music plays. These are time series that differ from, e.g., weather reports or stocks market data. The task is normally that of classification, not prediction of the next value or recognizing a shape or motif. New methods for extracting features that allow to classify audio data have been developed. However, the development of appropriate feature extraction methods is a tedious effort, particularly because every new classification task requires tailoring the feature set anew. This paper presents a unifying framework for feature extraction from value series. Operators of this framework can be combined to feature extraction methods automatically, using a genetic programming approach. 
The construction of features is guided by the performance of the learning classifier which uses the features. Our approach to automatic feature extraction requires a balance between the completeness of the methods on one side and the tractability of searching for appropriate methods on the other side. In this paper, some theoretical considerations illustrate the trade-off. After the feature extraction, a second process learns a classifier from the transformed data. The practical use of the methods is shown by two types of experiments: classification of genres and classification according to user preferences.", "title": "" } ]
[ { "docid": "819195697309e48749e340a86dfc866d", "text": "For the first time, a single source of cellulosic biomass was pretreated by leading technologies using identical analytical methods to provide comparative performance data. In particular, ammonia explosion, aqueous ammonia recycle, controlled pH, dilute acid, flowthrough, and lime approaches were applied to prepare corn stover for subsequent biological conversion to sugars through a Biomass Refining Consortium for Applied Fundamentals and Innovation (CAFI) among Auburn University, Dartmouth College, Michigan State University, the National Renewable Energy Laboratory, Purdue University, and Texas A&M University. An Agricultural and Industrial Advisory Board provided guidance to the project. Pretreatment conditions were selected based on the extensive experience of the team with each of the technologies, and the resulting fluid and solid streams were characterized using standard methods. The data were used to close material balances, and energy balances were estimated for all processes. The digestibilities of the solids by a controlled supply of cellulase enzyme and the fermentability of the liquids were also assessed and used to guide selection of optimum pretreatment conditions. Economic assessments were applied based on the performance data to estimate each pretreatment cost on a consistent basis. Through this approach, comparative data were developed on sugar recovery from hemicellulose and cellulose by the combined pretreatment and enzymatic hydrolysis operations when applied to corn stover. This paper introduces the project and summarizes the shared methods for papers reporting results of this research in this special edition of Bioresource Technology.", "title": "" }, { "docid": "9dbf6052fc1cf275ddd3ee1a1849b2f7", "text": "Crowdfunding is an exciting new phenomenon with the potential to disrupt early-stage capital markets. Enabled through specialized internet websites and social media, entrepreneurs now have a new source for start-up capital (estimated at $2.8 billion in 2012). Currently, entrepreneurs need to network through intermediaries to have access to wealthy investors. Crowdfunding bypasses these intermediaries and brings the ability to raise capital to the crowd. Consequently, decisions to fund an entrepreneurial endeavor are not made through ‘who you know’ and back-room deals, but through the discourse that occurs through the crowdfunding project page. The purpose of this research is to analyze and understand this discourse and the meaning it creates over the course of a crowdfunding campaign. The lens of sociomateriality in conjunction with discourse analysis is used to identify how meaning is created and its influence on the IS artifact.", "title": "" }, { "docid": "6a9c7da90fe8de2ad6f3819df07f8642", "text": "We define Quality of Service (QoS) and cost model for communications in Systems on Chip (SoC), and derive related Network on Chip (NoC) architecture and design process. SoC inter-module communication traffic is classified into four classes of service: signaling (for inter-module control signals); real-time (representing delay-constrained bit streams); RD/WR (modeling short data access) and block-transfer (handling large data bursts). Communication traffic of the target SoC is analyzed (by means of analytic calculations and simulations), and QoS requirements (delay and throughput) for each service class are derived. 
A customized Quality-of-Service NoC (QNoC) architecture is derived by modifying a generic network architecture. The customization process minimizes the network cost (in area and power) while maintaining the required QoS. The generic network is based on a two-dimensional planar mesh and fixed shortest path (X–Y based) multi-class wormhole routing. Once communication requirements of the target SoC are identified, the network is customized as follows: The SoC modules are placed so as to minimize spatial traffic density, unnecessary mesh links and switching nodes are removed, and bandwidth is allocated to the remaining links and switches according to their relative load so that link utilization is balanced. The result is a low cost customized QNoC for the target SoC which guarantees that QoS requirements are met. 2003 Elsevier B.V. All rights reserved. IDT: Network on chip; QoS architecture; Wormhole switching; QNoC design process; QNoC", "title": "" }, { "docid": "153f452486e2eacb9dc1cf95275dd015", "text": "This paper presents a Fuzzy Neural Network (FNN) control system for a traveling-wave ultrasonic motor (TWUSM) driven by a dual mode modulation non-resonant driving circuit. First, the motor configuration and the proposed driving circuit of a TWUSM are introduced. To drive a TWUSM effectively, a novel driving circuit, that simultaneously employs both the driving frequency and phase modulation control scheme, is proposed to provide two-phase balance voltage for a TWUSM. Since the dynamic characteristics and motor parameters of the TWUSM are highly nonlinear and time-varying, a FNN control system is therefore investigated to achieve high-precision speed control. The proposed FNN control system incorporates neuro-fuzzy control and the driving frequency and phase modulation to solve the problem of nonlinearities and variations. The proposed control system is digitally implemented by a low-cost digital signal processor based microcontroller, hence reducing the system hardware size and cost. The effectiveness of the proposed driving circuit and control system is verified with hardware experiments under the occurrence of uncertainties. In addition, the advantages of the proposed control scheme are indicated in comparison with a conventional proportional-integral control system.", "title": "" }, { "docid": "d72e4df2e396a11ae7130ca7e0b2fb56", "text": "Advances in location-acquisition and wireless communication technologies have led to wider availability of spatio-temporal (ST) data, which has unique spatial properties (i.e. geographical hierarchy and distance) and temporal properties (i.e. closeness, period and trend). In this paper, we propose a <u>Deep</u>-learning-based prediction model for <u>S</u>patio-<u>T</u>emporal data (DeepST). We leverage ST domain knowledge to design the architecture of DeepST, which is comprised of two components: spatio-temporal and global. The spatio-temporal component employs the framework of convolutional neural networks to simultaneously model spatial near and distant dependencies, and temporal closeness, period and trend. The global component is used to capture global factors, such as day of the week, weekday or weekend. Using DeepST, we build a real-time crowd flow forecasting system called UrbanFlow1. 
Experiment results on diverse ST datasets verify DeepST's ability to capture ST data's spatio-temporal properties, showing the advantages of DeepST beyond four baseline methods.", "title": "" }, { "docid": "e8f89e651007c7f3a20c1f0c6864ea9f", "text": "We present the design and implementation of a quadrotor tail-sitter Vertical Take-Off and Landing (VTOL) Unmanned Aerial Vehicle (UAV). The VTOL UAV combines the advantage of a quadrotor, vertical take-off and landing and hovering at a stationary point, with that of a fixed-wing, efficient level flight. We describe our vehicle design with special considerations on fully autonomous operation in a real outdoor environment where the wind is present. The designed quadrotor tail-sitter UAV has insignificant vibration level and achieves stable hovering and landing performance when a cross wind is present. Wind tunnel test is conducted to characterize the full envelope aerodynamics of the aircraft, based on which a flight controller is designed, implemented and tested. MATLAB simulation is presented and shows that our vehicle can achieve a continuous transition from hover flight to level flight. Finally, both indoor and outdoor flight experiments are conducted to verify the performance of our vehicle and the designed controller.", "title": "" }, { "docid": "6f387b2c56b042815770605dc8fc9e8c", "text": "Through investigating factors that influence consumers to make a transition from online to mobile banking, this empirical study shows that relative attitude and relative subjective norm positively motivated respondents to switch from Internet to mobile banking while relative perceived behavior control deterred respondents from transitioning. Empirical results also demonstrated that Internet banking is superior to mobile banking in terms of consumer relative compatibility, self-efficacy, resource facilitating conditions, and technology facilitating conditions. Meanwhile, mobile banking emerged as superior to Internet banking for other constructs. By adding a comparative concept into an extended decomposed theory of planned behavior (DTPB) model, this study may expand the applicable domain of current social psychology theories from the adoption of single products or services to the choice between competing products or services that achieve similar purposes and functions.", "title": "" }, { "docid": "2259232b86607e964393c884340efe79", "text": "Dynamic task allocation is an essential requirement for multi-robot systems functioning in unknown dynamic environments. It allows robots to change their behavior in response to environmental changes or actions of other robots in order to improve overall system performance. Emergent coordination algorithms for task allocation that use only local sensing and no direct communication between robots are attractive because they are robust and scalable. However, a lack of formal analysis tools makes emergent coordination algorithms difficult to design. In this paper we present a mathematical model of a general dynamic task allocation mechanism. Robots using this mechanism have to choose between two types of task, and the goal is to achieve a desired task division in the absence of explicit communication and global knowledge. Robots estimate the state of the environment from repeated local observations and decide which task to choose based on these observations. We model the robots and observations as stochastic processes and study the dynamics of individual robots and the collective behavior. 
We analyze the effect that the number of observations and the choice of decision functions have on the performance of the system. We validate the mathematical models on a multi-foraging scenario in a multi-robot system. We find that the model’s predictions agree very closely with experimental results from sensor-based simulations.", "title": "" }, { "docid": "2c5cab6e37ad905e0e3576259c4357ff", "text": "Classification and regression, as data mining techniques for predicting disease outbreaks, have been adopted in health institutions, which have relative opportunities for conducting the treatment of diseases. However, there is a need to develop a strong model for predicting disease outbreaks in datasets from various countries by filling the gaps in existing data mining techniques, where the majority of models rely on a single data mining technique whose prediction accuracy is not maximized for achieving the expected results, and such predictive models are still few. This paper presents a survey and analysis of existing classification and regression techniques that have been applied to disease outbreak prediction in datasets.", "title": "" }, { "docid": "c78ab55e5d74c6baa3b4b38dea107489", "text": "Methods based on representation learning currently hold the state-of-the-art in many natural language processing and knowledge base inference tasks. Yet, a major challenge is how to efficiently incorporate commonsense knowledge into such models. A recent approach regularizes relation and entity representations by propositionalization of first-order logic rules. However, propositionalization does not scale beyond domains with only few entities and rules. In this paper we present a highly efficient method for incorporating implication rules into distributed representations for automated knowledge base construction. We map entity-tuple embeddings into an approximately Boolean space and encourage a partial ordering over relation embeddings based on implication rules mined from WordNet. Surprisingly, we find that the strong restriction of the entity-tuple embedding space does not hurt the expressiveness of the model and even acts as a regularizer that improves generalization. By incorporating few commonsense rules, we achieve an increase of 2 percentage points mean average precision over a matrix factorization baseline, while observing a negligible increase in runtime.", "title": "" }, { "docid": "4d322609543deba6bea073652b6ff932", "text": "Development of accurate system models of immunity test setups might be extremely time consuming or even impossible. Here a new generalized approach to develop accurate component-based models of different system-level EMC test setups is proposed on the example of a BCI test setup. An equivalent circuit modelling of the components in LF range is combined with measurement-based macromodelling in HF range. The developed models show high accuracy up to 1 GHz. The issues of floating PCB configurations and incorporation of low frequency behaviour could be solved. Both frequency and time-domain simulations are possible. Arbitrary system configurations can be assembled quickly using the proposed component models. 
Any kind of system simulation like parametric variation and worst-case analysis can be performed with high accuracy.", "title": "" }, { "docid": "e9e2887e7aae5315a8661c9d7456aa2e", "text": "It has been shown that learning distributed word representations is highly useful for Twitter sentiment classification. Most existing models rely on a single distributed representation for each word. This is problematic for sentiment classification because words are often polysemous and each word can contain different sentiment polarities under different topics. We address this issue by learning topic-enriched multi-prototype word embeddings (TMWE). In particular, we develop two neural networks which 1) learn word embeddings that better capture tweet context by incorporating topic information, and 2) learn topic-enriched multiple prototype embeddings for each word. Experiments on Twitter sentiment benchmark datasets in SemEval 2013 show that TMWE outperforms the top system with hand-crafted features, and the current best neural network model.", "title": "" }, { "docid": "c1b8beec6f2cb42b5a784630512525f3", "text": "Scientific computing often requires the availability of a massive number of computers for performing large scale experiments. Traditionally, these needs have been addressed by using high-performance computing solutions and installed facilities such as clusters and super computers, which are difficult to setup, maintain, and operate. Cloud computing provides scientists with a completely new model of utilizing the computing infrastructure. Compute resources, storage resources, as well as applications, can be dynamically provisioned (and integrated within the existing infrastructure) on a pay per use basis. These resources can be released when they are no more needed. Such services are often offered within the context of a Service Level Agreement (SLA), which ensure the desired Quality of Service (QoS). Aneka, an enterprise Cloud computing solution, harnesses the power of compute resources by relying on private and public Clouds and delivers to users the desired QoS. Its flexible and service based infrastructure supports multiple programming paradigms that make Aneka address a variety of different scenarios: from finance applications to computational science. As examples of scientific computing in the Cloud, we present a preliminary case study on using Aneka for the classification of gene expression data and the execution of fMRI brain imaging workflow.", "title": "" }, { "docid": "e5380801d69c3acf7bfe36e868b1dadb", "text": "Skin-mountable chemical sensors using flexible chemically sensitive nanomaterials are of great interest for electronic skin (e-skin) application. To build these sensors, the emerging atomically thin two-dimensional (2D) layered semiconductors could be a good material candidate. Herein, we show that a large-area WS2 film synthesized by sulfurization of a tungsten film exhibits high humidity sensing performance both in natural flat and high mechanical flexible states (bending curvature down to 5 mm). The conductivity of as-synthesized WS2 increases sensitively over a wide relative humidity range (up to 90%) with fast response and recovery times in a few seconds. By using graphene as electrodes and thin polydimethylsiloxane (PDMS) as substrate, a transparent, flexible, and stretchable humidity sensor was fabricated. 
This sensor can be well laminated onto skin and shows stable water moisture sensing behaviors in the undeformed relaxed state as well as under compressive and tensile loadings. Furthermore, its high sensing performance enables real-time monitoring of human breath, indicating potential for mask-free breath monitoring in healthcare applications. We believe that such a skin-activity compatible WS2 humidity sensor may shed light on developing low power consumption wearable chemical sensors based on 2D semiconductors.", "title": "" }, { "docid": "7b7f5a18bb7629c48c9fbe9475aa0f0c", "text": "These are the notes for my quarter-long course on basic stability theory at UCLA (MATH 285D, Winter 2015). The presentation highlights some relations to set theory and cardinal arithmetic reflecting my impression about the tastes of the audience. We develop the general theory of local stability instead of specializing to the finite rank case, and touch on some generalizations of stability such as NIP and simplicity. The material in these notes is based on [Pil02, Pil96], [vdD05], [TZ12], [Cas11a, Cas07], [Sim15], [Poi01] and [Che12]. I would also like to thank the following people for their comments and suggestions: Tyler Arant, Madeline Barnicle, Allen Gehret, Omer Ben Neria, Anton Bobkov, Jesse Han, Pietro Kreitlon Carolino, Andrew Marks, Alex Mennen, Assaf Shani, John Susice, Spencer Unger. Comments and corrections are very welcome (chernikov@math.ucla.edu, http://www.math.ucla.edu/~chernikov/).", "title": "" }, { "docid": "3a549571e281b9b381a347fb49953d2c", "text": "Social media has been gaining popularity among university students who use social media at higher rates than the general population. Students consequently spend a significant amount of time on social media, which may inevitably have an effect on their academic engagement. Subsequently, scholars have been intrigued to examine the impact of social media on students' academic engagement. Research that has directly explored the use of social media and its impact on students in tertiary institutions has revealed limited and mixed findings, particularly within a South African context; thus leaving a window of opportunity to further investigate the impact that social media has on students' academic engagement. This study therefore aims to investigate the use of social media in tertiary institutions, the impact that the use thereof has on students' academic engagement and to suggest effective ways of using social media in tertiary institutions to improve students' academic engagement from students' perspectives. This study used an interpretivist (inductive) approach in order to determine and comprehend students' perspectives and experiences towards the use of social media and the effects thereof on their academic engagement. A single case study design at Rhodes University was used to determine students' perceptions and data was collected using an online survey. The findings reveal that students use social media for both social and academic purposes. Students further perceived that social media has a positive impact on their academic engagement and suggest that using social media at tertiary level could be advantageous and could enhance students' academic engagement.", "title": "" }, { "docid": "13c79ec2455730f5a493b6dd6053f5ba", "text": "A wealth of computationally efficient approximation methods for Gaussian process regression have been recently proposed. 
We give a unifying overview of sparse approximations, following Quiñonero-Candela and Rasmussen (2005), and a brief review of approximate matrix-vector multiplication methods.", "title": "" }, { "docid": "68c7509ec0261b1ddccef7e3ad855629", "text": "This research comprehensively illustrates the design, implementation and evaluation of a novel marker less environment tracking technology for an augmented reality based indoor navigation application, adapted to efficiently operate on a proprietary head-mounted display. Although the display device used, Google Glass, had certain pitfalls such as short battery life, slow processing speed, and lower quality visual display but the tracking technology was able to complement these limitations by rendering a very efficient, precise, and intuitive navigation experience. The performance assessments, conducted on the basis of efficiency and accuracy, substantiated the utility of the device for everyday navigation scenarios, whereas a later conducted subjective evaluation of handheld and wearable devices also corroborated the wearable as the preferred device for indoor navigation.", "title": "" }, { "docid": "2f9e5a34137fe7871c9388078c57dc8e", "text": "This paper presents a new model of measuring semantic similarity in the taxonomy of WordNet. The model takes the path length between two concepts and IC value of each concept as its metric, furthermore, the weight of two metrics can be adapted artificially. In order to evaluate our model, traditional and widely used datasets are used. Firstly, coefficients of correlation between human ratings of similarity and six computational models are calculated, the result shows our new model outperforms their homologues. Then, the distribution graphs of similarity value of 65 word pairs are discussed our model having no faulted zone more centralized than other five methods. So our model can make up the insufficient of other methods which only using one metric(path length or IC value) in their model.", "title": "" }, { "docid": "179f0aae00aca18c27d598dcc5e9ecad", "text": "The aim of this study is to design a fuzzy expert system for calculating the health risk level of a patient. The fuzzy logic system is a simple, rule-based system and can be used to monitor biological systems that would be difficult or impossible to model with simple, linear mathematics. The designed system is based on the modified early warning score (MEWS).The system has 5 input field and 1 output field. The input fields are blood pressure, pulse rate, SPO2 ( it is an estimation of the oxygen saturation level in blood. ), temperature, and blood sugar. The output field refers the risk level of the patient. The output ranges from 0 to 14. This system uses Mamdani inference method. A larger value of output refers to greater degree of illness of the patient. This paper describes research results in the development of a fuzzy driven system to determine the risk levels of health for the patients. The implementation and simulation of the system is done using MATLAB fuzzy tool box.", "title": "" } ]
scidocsrr
22bd5eb662e28e0c50a7dfa9a92cec89
Towards SMS Spam Filtering : Results under a New Dataset
[ { "docid": "5a6fc8dd2b73f5481cbba649e5e76c1b", "text": "Mobile phones are becoming the latest target of electronic junk mail. Recent reports clearly indicate that the volume of SMS spam messages are dramatically increasing year by year. Probably, one of the major concerns in academic settings was the scarcity of public SMS spam datasets, that are sorely needed for validation and comparison of different classifiers. To address this issue, we have recently proposed a new SMS Spam Collection that, to the best of our knowledge, is the largest, public and real SMS dataset available for academic studies. However, as it has been created by augmenting a previously existing database built using roughly the same sources, it is sensible to certify that there are no duplicates coming from them. So, in this paper we offer a comprehensive analysis of the new SMS Spam Collection in order to ensure that this does not happen, since it may ease the task of learning SMS spam classifiers and, hence, it could compromise the evaluation of methods. The analysis of results indicate that the procedure followed does not lead to near-duplicates and, consequently, the proposed dataset is reliable to use for evaluating and comparing the performance achieved by different classifiers.", "title": "" }, { "docid": "52a5f4c15c1992602b8fe21270582cc6", "text": "This paper proposes a new algorithm for training support vector machines: Sequential Minimal Optimization, or SMO. Training a support vector machine requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possible QP problems. These small QP problems are solved analytically, which avoids using a time-consuming numerical QP optimization as an inner loop. The amount of memory required for SMO is linear in the training set size, which allows SMO to handle very large training sets. Because matrix computation is avoided, SMO scales somewhere between linear and quadratic in the training set size for various test problems, while the standard chunking SVM algorithm scales somewhere between linear and cubic in the training set size. SMO’s computation time is dominated by SVM evaluation, hence SMO is fastest for linear SVMs and sparse data sets. On realworld sparse data sets, SMO can be more than 1000 times faster than the chunking algorithm.", "title": "" }, { "docid": "73973ae6c858953f934396ab62276e0d", "text": "The unsolicited bulk messages are widespread in the applications of short messages. Although the existing spam filters have satisfying performance, they are facing the challenge of an adversary who misleads the spam filters by manipulating samples. Until now, the vulnerability of spam filtering technique for short messages has not been investigated. Different from the other spam applications, a short message only has a few words and its length usually has an upper limit. The current adversarial learning algorithms may not work efficiently in short message spam filtering. In this paper, we investigate the existing good word attack and its counterattack method, i.e. the feature reweighting, in short message spam filtering in an effort to understand whether, and to what extent, they can work efficiently when the length of a message is limited. This paper proposes a good word attack strategy which maximizes the influence to a classifier with the least number of inserted characters based on the weight values and also the length of words. 
On the other hand, we also propose the feature reweighting method with a new rescaling function which minimizes the importance of the feature representing a short word in order to require more inserted characters for a successful evasion. The methods are evaluated experimentally by using the SMS and the comment spam datasets. The results confirm that the length of words is a critical factor in the robustness of short message spam filtering to good word attacks.", "title": "" } ]
[ { "docid": "dfca5783e6ec34d228278f14c5719288", "text": "Generative Adversarial networks (GANs) have obtained remarkable success in many unsupervised learning tasks and unarguably, clustering is an important unsupervised learning problem. While one can potentially exploit the latentspace back-projection in GANs to cluster, we demonstrate that the cluster structure is not retained in the GAN latent space. In this paper, we propose ClusterGAN as a new mechanism for clustering using GANs. By sampling latent variables from a mixture of one-hot encoded variables and continuous latent variables, coupled with an inverse network (which projects the data to the latent space) trained jointly with a clustering specific loss, we are able to achieve clustering in the latent space. Our results show a remarkable phenomenon that GANs can preserve latent space interpolation across categories, even though the discriminator is never exposed to such vectors. We compare our results with various clustering baselines and demonstrate superior performance on both synthetic and real datasets.", "title": "" }, { "docid": "f1d0f218b789ac104448777c82a4093f", "text": "This paper critically reviews the literature on managing diversity through human resource management (HRM). We discuss the major issues and objectives of managing diversity and examine the state of human resource diversity management practices in organizations. Our review shows that inequality and discrimination still widely exist and HRM has focused mainly on compliance with equal employment opportunity (EEO) and affirmative action (AA) legislation. Less attention has been paid to valuing, developing and making use of diversity. Our review reveals limited literature examining how diversity is managed in organizations through effective human resource management. We develop a framework that presents strategies for HR diversity management at the strategic, tactical and operational levels. Our review also discusses the implications for practice and further research.", "title": "" }, { "docid": "6b7ab5e130cba03fd9ec41837f82880a", "text": "High-utility itemset (HUI) mining is a popular data mining task, consisting of enumerating all groups of items that yield a high profit in a customer transaction database. However, an important issue with traditional HUI mining algorithms is that they tend to find itemsets having many items. But those itemsets are often rare, and thus may be less interesting than smaller itemsets for users. In this paper, we address this issue by presenting a novel algorithm named FHM+ for mining HUIs, while considering length constraints. To discover HUIs efficiently with length constraints, FHM+ introduces the concept of Length UpperBound Reduction (LUR), and two novel upper-bounds on the utility of itemsets. An extensive experimental evaluation shows that length constraints are effective at reducing the number of patterns, and the novel upper-bounds can greatly decrease the execution time, and memory usage for HUI mining.", "title": "" }, { "docid": "61c6d49c3cdafe4366d231ebad676077", "text": "Video affective content analysis has been an active research area in recent decades, since emotion is an important component in the classification and retrieval of videos. Video affective content analysis can be divided into two approaches: direct and implicit. Direct approaches infer the affective content of videos directly from related audiovisual features. 
Implicit approaches, on the other hand, detect affective content from videos based on an automatic analysis of a user's spontaneous response while consuming the videos. This paper first proposes a general framework for video affective content analysis, which includes video content, emotional descriptors, and users' spontaneous nonverbal responses, as well as the relationships between the three. Then, we survey current research in both direct and implicit video affective content analysis, with a focus on direct video affective content analysis. Lastly, we identify several challenges in this field and put forward recommendations for future research.", "title": "" }, { "docid": "d3783bcc47ed84da2c54f5f536450a0c", "text": "In this paper, we present a new framework for large scale online kernel learning, making kernel methods efficient and scalable for large-scale online learning applications. Unlike the regular budget online kernel learning scheme that usually uses some budget maintenance strategies to bound the number of support vectors, our framework explores a completely different approach of kernel functional approximation techniques to make the subsequent online learning task efficient and scalable. Specifically, we present two different online kernel machine learning algorithms: (i) Fourier Online Gradient Descent (FOGD) algorithm that applies the random Fourier features for approximating kernel functions; and (ii) Nyström Online Gradient Descent (NOGD) algorithm that applies the Nyström method to approximate large kernel matrices. We explore these two approaches to tackle three online learning tasks: binary classification, multi-class classification, and regression. The encouraging results of our experiments on large-scale datasets validate the effectiveness and efficiency of the proposed algorithms, making them potentially more practical than the family of existing budget online kernel learning approaches.", "title": "" }, { "docid": "c2a7fa32a3037ff30bd633ed0934ee5f", "text": "databases have been attracting a significant amount of research, industry, and media attention of late. What is all the excitement about? This article provides an overview of this emerging field, clarifying how data mining and knowledge discovery in databases are related both to each other and to related fields, such as machine learning, statistics, and databases. The article mentions particular real-world applications, specific data-mining techniques, challenges involved in real-world applications of knowledge discovery, and current and future research directions in the field.", "title": "" }, { "docid": "30957a6b88724db8f59dd35a79523a4b", "text": "It is believed that repeated exposure to real-life and to entertainment violence may alter cognitive, affective, and behavioral processes, possibly leading to desensitization. The goal of the present study was to determine if there are relationships between real-life and media violence exposure and desensitization as reflected in related characteristics. One hundred fifty fourth and fifth graders completed measures of real-life violence exposure, media violence exposure, empathy, and attitudes towards violence. Regression analyses indicated that only exposure to video game violence was associated with (lower) empathy. Both video game and movie violence exposure were associated with stronger proviolence attitudes. 
The active nature of playing video games, intense engagement, and the tendency to be translated into fantasy play may explain negative impact, though causality was not investigated in the present design. The samples' relatively low exposure to real-life violence may have limited the identification of relationships. Although difficult to quantify, desensitization to violence should be further studied using related characteristics as in the present study. Individual differences and causal relationships should also be examined.", "title": "" }, { "docid": "30cd626772ad8c8ced85e8312d579252", "text": "An off-state leakage current unique for short-channel SOI MOSFETs is reported. This off-state leakage is the amplification of gate-induced-drain-leakage current by the lateral bipolar transistor in an SOI device due to the floating body. The leakage current can be enhanced by as much as 100 times for 1/4 µm SOI devices. This can pose severe constraints in future 0.1 µm SOI device design. A novel technique was developed based on this mechanism to measure the lateral bipolar transistor current gain beta of SOI devices without using a body contact.", "title": "" }, { "docid": "75c1fa342d6f30d68b0aba906a54dd69", "text": "The Constrained Application Protocol (CoAP) is a promising candidate for future smart city applications that run on resource-constrained devices. However, additional security means are mandatory to cope with the high security requirements of smart city applications. We present a framework to evaluate lightweight intrusion detection techniques for CoAP applications. This framework combines an OMNeT++ simulation with C/C++ application code that also runs on real hardware. As the result of our work, we used our framework to evaluate intrusion detection techniques for a smart public transport application that uses CoAP. Our first evaluations indicate that a hybrid IDS approach is a favorable choice for smart city applications.", "title": "" }, { "docid": "f7562e0540e65fdfdd5738d559b4aad1", "text": "An important aspect of marketing practice is the targeting of consumer segments for differential promotional activity. The premise of this activity is that there exist distinct segments of homogeneous consumers who can be identified by readily available demographic information. The increased availability of individual consumer panel data opens the possibility of direct targeting of individual households. The goal of this paper is to assess the information content of various information sets available for direct marketing purposes. Information on the consumer is obtained from the current and past purchase history as well as demographic characteristics. We consider the situation in which the marketer may have access to a reasonably long purchase history which includes both the products purchased and information on the causal environment. Short of this complete purchase history, we also consider more limited information sets which consist of only the current purchase occasion or only information on past product choice without causal variables. Proper evaluation of this information requires a flexible model of heterogeneity which can accommodate observable and unobservable heterogeneity as well as produce household level inferences for targeting purposes. 
We develop new econometric methods to implement a random coefficient choice model in which the heterogeneity distribution is related to observable demographics. We couple this approach to modeling heterogeneity with a target couponing problem in which coupons are customized to specific households on the basis of various information sets. The couponing problem allows us to place a monetary value on the information sets. Our results indicate there exists a tremendous potential for improving the profitability of direct marketing efforts by more fully utilizing household purchase histories. Even rather short purchase histories can produce a net gain in revenue from target couponing which is 2.5 times the gain from blanket couponing. The most popular current electronic couponing trigger strategy uses only one observation to customize the delivery of coupons. Surprisingly, even the information contained in observing one purchase occasion boosts net couponing revenue by 50% more than that which would be gained by the blanket strategy. This result, coupled with increased competitive pressures, will force targeted marketing strategies to become much more prevalent in the future than they are today. (Target Marketing; Coupons; Heterogeneity; Bayesian Hierarchical Models)", "title": "" }, { "docid": "c943d44e452c5cd5e027df814f8aac32", "text": "Three experiments tested the hypothesis that the social roles implied by specific contexts can attenuate or reverse the typical pattern of racial bias obtained on both controlled and automatic evaluation measures. Study 1 assessed evaluations of Black and Asian faces in contexts related to athlete or student roles. Study 2 compared evaluations of Black and White faces in 3 role-related contexts (prisoner, churchgoer, and factory worker). Study 3 manipulated role cues (lawyer or prisoner) within the same prison context. All 3 studies produced significant reversals of racial bias as a function of implied role on measures of both controlled and automatic evaluation. These results support the interpretation that differential evaluations based on Race x Role interactions provide one way that context can moderate both controlled and automatic racial bias.", "title": "" }, { "docid": "48aa68862748ab502f3942300b4d8e1e", "text": "While data volumes continue to rise, the capacity of human attention remains limited. As a result, users need analytics engines that can assist in prioritizing attention in this fast data that is too large for manual inspection. We present a set of design principles for the design of fast data analytics engines that leverage the relative scarcity of human attention and overabundance of data: return fewer results, prioritize iterative analysis, and filter fast to compute less. We report on our early experiences employing these principles in the design and deployment of MacroBase, an open source analysis engine for prioritizing attention in fast data. By combining streaming operators for feature transformation, classification, and data summarization, MacroBase provides users with interpretable explanations of key behaviors, acting as a search engine for fast data.", "title": "" }, { "docid": "d5bc5837349333a6f1b0b47f16844c13", "text": "Personalized news recommender systems have gained increasing attention in recent years. 
Within a news reading community, the implicit correlations among news readers, news articles, topics and named entities, e.g., what types of named entities in articles are preferred by users, and why users like the articles, could be valuable for building an effective news recommender. In this paper, we propose a novel news personalization framework by mining such correlations. We use hypergraph to model various high-order relations among different objects in news data, and formulate news recommendation as a ranking problem on fine-grained hypergraphs. In addition, by transductive inference, our proposed algorithm is capable of effectively handling the so-called cold-start problem. Extensive experiments on a data set collected from various news websites have demonstrated the effectiveness of our proposed algorithm.", "title": "" }, { "docid": "633ae4599a8d5ce5fd3b8dc8c465dd90", "text": "Softmax is an output activation function for modeling categorical probability distributions in many applications of deep learning. However, a recent study revealed that softmax can be a bottleneck of representational capacity of neural networks in language modeling (the softmax bottleneck). In this paper, we propose an output activation function for breaking the softmax bottleneck without additional parameters. We re-analyze the softmax bottleneck from the perspective of the output set of log-softmax and identify the cause of the softmax bottleneck. On the basis of this analysis, we propose sigsoftmax, which is composed of a multiplication of an exponential function and sigmoid function. Sigsoftmax can break the softmax bottleneck. The experiments on language modeling demonstrate that sigsoftmax and mixture of sigsoftmax outperform softmax and mixture of softmax, respectively.", "title": "" }, { "docid": "34546e42bd78161259d2bc190e36c9f7", "text": "Peer to Peer networks are the leading cause for music piracy but also used for music sampling prior to purchase. In this paper we investigate the relations between music file sharing and sales (both physical and digital)using large Peer-to-Peer query database information. We compare file sharing information on songs to their popularity on the Billboard Hot 100 and the Billboard Digital Songs charts, and show that popularity trends of songs on the Billboard have very strong correlation (0.88-0.89) to their popularity on a Peer-to-Peer network. We then show how this correlation can be utilized by common data mining algorithms to predict a song's success in the Billboard in advance, using Peer-to-Peer information.", "title": "" }, { "docid": "6ddad64507fa5ebf3b2930c261584967", "text": "In this article we propose a methodology to determine snow cover by means of Landsat-7 ETM+ and Landsat-5 TM images, as well as an improvement in daily Snow Cover TERRA- MODIS product (MOD10A1), between 2002 and 2005. Both methodologies are based on a NDSI threshold > 0.4. In the Landsat case, and although this threshold also selects water bodies, we have obtained optimal results using a mask of water bodies and generating a pre-boundary snow mask around the snow cover. Moreover, an important improvement in snow cover mapping in shadow cast areas by means of a hybrid classification has been obtained. Using these results as ground truth we have verified MODIS Snow Cover product using coincident dates. In the MODIS product, we have noted important commission errors in water bodies, forest covers and orographic shades because of the NDVI-NDSI filter applied to this product. 
In order to improve MODIS snow cover determination using MODIS images, we propose a hybrid methodology based on experience with Landsat images, which provide greater spatial resolution.", "title": "" }, { "docid": "640af69086854b79257cbdeb4668830b", "text": "Traditionally, traffic safety was addressed by traffic awareness and passive safety measures like solid chassis, seat belts, air bags etc. With the recent breakthroughs in the domain of mobile ad hoc networks, the concept of vehicular ad hoc networks (VANET) was realised. Safety messaging is the most important aspect of VANETs, where the passive safety (accident readiness) in vehicles was reinforced with the idea of active safety (accident prevention). In safety messaging, vehicles will message each other over wireless media, updating each other on traffic conditions and hazards. Security is an important aspect of safety messaging that aims to prevent participants from spreading wrong information in the network that is likely to cause mishaps. Equally important is the fact that secure communication protocols should satisfy the communication constraints of VANETs. VANETs are delay intolerant. Features like high speeds, large network size, constant mobility etc. induce certain limitations in the way messaging can be carried out in VANETs. This thesis studies the impact of total message size on VANET messaging system performance, and conducts an analysis of secure communication protocols to measure how they perform in a VANET messaging system.", "title": "" }, { "docid": "e584549afba4c444c32dfe67ee178a84", "text": "Bayesian networks (BNs) provide a means for representing, displaying, and making available in a usable form the knowledge of experts in a given field. In this paper, we look at the performance of an expert constructed BN compared with other machine learning (ML) techniques for predicting the outcome (win, lose, or draw) of matches played by Tottenham Hotspur Football Club. The period under study was 1995–1997 – the expert BN was constructed at the start of that period, based almost exclusively on subjective judgement. Our objective was to determine retrospectively the comparative accuracy of the expert BN compared to some alternative ML models that were built using data from the two-year period. The additional ML techniques considered were: MC4, a decision tree learner; Naive Bayesian learner; Data Driven Bayesian (a BN whose structure and node probability tables are learnt entirely from data); and a K-nearest neighbour learner. The results show that the expert BN is generally superior to the other techniques for this domain in predictive accuracy. The results are even more impressive for BNs given that, in a number of key respects, the study assumptions place them at a disadvantage. For example, we have assumed that the BN prediction is ‘incorrect’ if a BN predicts more than one outcome as equally most likely (whereas, in fact, such a prediction would prove valuable to somebody who could place an ‘each way’ bet on the outcome). Although the expert BN has now long been irrelevant (since it contains variables relating to key players who have retired or left the club) the results here tend to confirm the excellent potential of BNs when they are built by a reliable domain expert. The ability to provide accurate predictions without requiring much learning data is an obvious bonus in any domain where data are scarce. 
Moreover, the BN was relatively simple for the expert to build and its structure could be used again in this and similar types of problems.", "title": "" }, { "docid": "edd6fb76f672e00b14935094cb0242d0", "text": "Despite widespread interest in reinforcement learning for task-oriented dialogue systems, several obstacles can frustrate research and development progress. First, reinforcement learners typically require interaction with the environment, so conventional dialogue corpora cannot be used directly. Second, each task presents specific challenges, requiring a separate corpus of task-specific annotated data. Third, collecting and annotating human-machine or human-human conversations for task-oriented dialogues requires extensive domain knowledge. Because building an appropriate dataset can be both financially costly and time-consuming, one popular approach is to build a user simulator based upon a corpus of example dialogues. Then, one can train reinforcement learning agents in an online fashion as they interact with the simulator. Dialogue agents trained on these simulators can serve as an effective starting point. Once agents master the simulator, they may be deployed in a real environment to interact with humans, and continue to be trained online. To ease empirical algorithmic comparisons in dialogues, this paper introduces a new, publicly available simulation framework, where our simulator, designed for the movie-booking domain, leverages both rules and collected data. The simulator supports two tasks: movie ticket booking and movie seeking. Finally, we demonstrate several agents and detail the procedure to add and test your own agent in the proposed framework.", "title": "" }, { "docid": "a28c5732d2df003e76464e4fc65334e3", "text": "Fingerprint identification is based on two basic premises: (i) persistence: the basic characteristics of fingerprints do not change with time; and (ii) individuality: the fingerprint is unique to an individual. The validity of the first premise has been established by the anatomy and morphogenesis of friction ridge skin. While the second premise has been generally accepted to be true based on empirical results, the underlying scientific basis of fingerprint individuality has not been formally established. As a result, the validity of fingerprint evidence is now being challenged in several court cases. A scientific basis for establishing fingerprint individuality will not only result in the admissibility of fingerprint identification in the courts of law but will also establish an upper bound on the performance of an automatic fingerprint verification system. We address the problem of fingerprint individuality by quantifying the amount of information available in minutiae features to establish a correspondence between two fingerprint images. We derive an expression which estimates the probability of a false correspondence between minutiae-based representations from two arbitrary fingerprints belonging to different fingers. For example, the probability that a fingerprint with 36 minutiae points will share 12 minutiae points with another arbitrarily chosen fingerprint with 36 minutiae", "title": "" } ]
scidocsrr
311172e6662a2d88ccafb0f07613bf35
Multiple Arousal Theory and Daily-Life Electrodermal Activity Asymmetry
[ { "docid": "d76e649c6daeb71baf377c2b36623e29", "text": "The somatic marker hypothesis proposes that decision-making is a process that depends on emotion. Studies have shown that damage of the ventromedial prefrontal (VMF) cortex precludes the ability to use somatic (emotional) signals that are necessary for guiding decisions in the advantageous direction. However, given the role of the amygdala in emotional processing, we asked whether amygdala damage also would interfere with decision-making. Furthermore, we asked whether there might be a difference between the roles that the amygdala and VMF cortex play in decision-making. To address these two questions, we studied a group of patients with bilateral amygdala, but not VMF, damage and a group of patients with bilateral VMF, but not amygdala, damage. We used the \"gambling task\" to measure decision-making performance and electrodermal activity (skin conductance responses, SCR) as an index of somatic state activation. All patients, those with amygdala damage as well as those with VMF damage, were (1) impaired on the gambling task and (2) unable to develop anticipatory SCRs while they pondered risky choices. However, VMF patients were able to generate SCRs when they received a reward or a punishment (play money), whereas amygdala patients failed to do so. In a Pavlovian conditioning experiment the VMF patients acquired a conditioned SCR to visual stimuli paired with an aversive loud sound, whereas amygdala patients failed to do so. The results suggest that amygdala damage is associated with impairment in decision-making and that the roles played by the amygdala and VMF in decision-making are different.", "title": "" } ]
[ { "docid": "1ace2a8a8c6b4274ac0891e711d13190", "text": "Recent music information retrieval (MIR) research pays increasing attention to music classification based on moods expressed by music pieces. The first Audio Mood Classification (AMC) evaluation task was held in the 2007 running of the Music Information Retrieval Evaluation eXchange (MIREX). This paper describes important issues in setting up the task, including dataset construction and ground-truth labeling, and analyzes human assessments on the audio dataset, as well as system performances from various angles. Interesting findings include system performance differences with regard to mood clusters and the levels of agreement amongst human judgments regarding mood labeling. Based on these analyses, we summarize experiences learned from the first community scale evaluation of the AMC task and propose recommendations for future AMC and similar evaluation tasks.", "title": "" }, { "docid": "305ae3e7a263bb12f7456edca94c06ca", "text": "We study the effects of changes in uncertainty about future fiscal policy on aggregate economic activity. In light of large fiscal deficits and high public debt levels in the U.S., a fiscal consolidation seems inevitable. However, there is notable uncertainty about the policy mix and timing of such a budgetary adjustment. To evaluate the consequences of the increased uncertainty, we first estimate tax and spending processes for the U.S. that allow for timevarying volatility. We then feed these processes into an otherwise standard New Keynesian business cycle model calibrated to the U.S. economy. We find that fiscal volatility shocks can have a sizable adverse effect on economic activity.", "title": "" }, { "docid": "7437f0c8549cb8f73f352f8043a80d19", "text": "Graphene is considered as one of leading candidates for gas sensor applications in the Internet of Things owing to its unique properties such as high sensitivity to gas adsorption, transparency, and flexibility. We present self-activated operation of all graphene gas sensors with high transparency and flexibility. The all-graphene gas sensors which consist of graphene for both sensor electrodes and active sensing area exhibit highly sensitive, selective, and reversible responses to NO2 without external heating. The sensors show reliable operation under high humidity conditions and bending strain. In addition to these remarkable device performances, the significantly facile fabrication process enlarges the potential of the all-graphene gas sensors for use in the Internet of Things and wearable electronics.", "title": "" }, { "docid": "efc7adc3963e7ccb0e2f1297a81005b2", "text": "data types Reasoning Englis guitarists Academic degrees Companies establishe... Cubes Internet radio Loc l authorities ad... Figure 5: Topic coverage of LAK data graph for the individual resources. 5. RELATED WORK Cobo et al.[3] presents an analysis of student participation in online discussion forums using an agglomerative hierarchical clustering algorithm, and explore the profiles to find relevant activity patterns and detect different student profiles. Barber et al. [1] uses a predictive analytic model to prevent students from failing in courses. They analyze several variables, such as grades, age, attendance and others, that can impede the student learning.Kahn et al. [7] present a long-term study using hierarchical cluster analysis, t-tests and Pearson correlation that identified seven behavior patterns of learners in online discussion forums based on their access. 
García-Solórzano et al. [6] introduce a new educational monitoring tool that helps tutors to monitor the development of the students. Unlike traditional monitoring systems, they propose a faceted browser visualization tool to facilitate the analysis of the student progress. Glass [8] provides a versatile visualization tool to enable the creation of additional visualizations of data collections. Essa et al. [4] utilize predictive models to identify learners academically at-risk. They present the problem with an interesting analogy to the patient-doctor workflow, where first they identify the problem, analyze the situation and then prescribe courses that are indicated to help the student to succeed. Siadaty et al.[13] present the Learn-B environment, a hub system that captures information about the users usage in different softwares and learning activities in their workplace and present to the user feedback to support future decisions, planning and accompanies them in the learning process. In the same way, McAuley et al. [9] propose a visual analytics to support organizational learning in online communities. They present their analysis through an adjacency matrix and an adjustable timeline that show the communication-actions of the users and is able to organize it into temporal patterns. Bramucci et al. [2] presents Sherpa an academic recommendation system to support students on making decisions. For instance, using the learner profiles they recommend courses or make interventions in case that students are at-risk. In the related work, we showed how different perspectives and the necessity of new tools and methods to make data available and help decision-makers. 6. CONCLUSION In this paper we presented the main features of the Cite4Me Web application. Cite4Me makes use of several data sources to provide information for users interested on scientific publications and its applications. Additionally, we provided a general framework on data discovery and correlated resources based on a constructed feature set, consisting of items extracted from reference datasets. It made possible for users, to search and relate resources from a dataset with other resources offered as Linked Data. For more information about the Cite4Me Web application refer to http://www.cite4me.com. 7. REFERENCES [1] R. Barber and M. Sharkey. Course correction: using analytics to predict course success. In Proc. of the 2nd International Conference on Learning Analytics and Knowledge, LAK ’12, pages 259–262, New York, NY, USA, 2012. ACM. [2] R. Bramucci and J. Gaston. Sherpa: increasing student success with a recommendation engine. In Proc. of the 2nd International Conference on Learning Analytics and Knowledge, LAK ’12, pages 82–83, New York, NY, USA, 2012. ACM. [3] G. Cobo, D. García-Solórzano, J. A. Morán, E. Santamaría, C. Monzo, and J. Melenchón. Using agglomerative hierarchical clustering to model learner participation profiles in online discussion forums. In Proc. of the 2nd International Conference on Learning Analytics and Knowledge, LAK ’12, pages 248–251, New York, NY, USA, 2012. ACM. [4] A. Essa and H. Ayad. Student success system: risk analytics and data visualization using ensembles of predictive models. In Proc. of the 2nd International Conference on Learning Analytics and Knowledge, LAK ’12, pages 158–161, New York, NY, USA, 2012. ACM. [5] E. Gabrilovich and S. Markovitch. Computing semantic relatedness using wikipedia-based explicit semantic analysis. In Proc. 
of the 20th international joint conference on Artifical intelligence, IJCAI’07, pages 1606–1611, San Francisco, CA, USA, 2007. Morgan Kaufmann Pub. Inc. [6] D. García-Solórzano, G. Cobo, E. Santamaría, J. A. Morán, C. Monzo, and J. Melenchón. Educational monitoring tool based on faceted browsing and data portraits. In Proc. of the 2nd International Conference on Learning Analytics and Knowledge, LAK ’12, pages 170–178, New York, NY, USA, 2012. ACM. [7] T. M. Khan, F. Clear, and S. S. Sajadi. The relationship between educational performance and online access routines: analysis of students’ access to an online discussion forum. In Proc. of the 2nd International Conference on Learning Analytics and Knowledge, LAK ’12, pages 226–229, New York, NY, USA, 2012. ACM. [8] D. Leony, A. Pardo, L. de la Fuente Valentín, D. S. de Castro, and C. D. Kloos. Glass: a learning analytics visualization tool. In Proc. of the 2nd International Conference on Learning Analytics and Knowledge, LAK ’12, pages 162–163, New York, NY, USA, 2012. ACM. [9] J. McAuley, A. O’Connor, and D. Lewis. Exploring reflection in online communities. In Proc. of the 2nd International Conference on Learning Analytics and Knowledge, LAK ’12, pages 102–110, New York, NY, USA, 2012. ACM. [10] P. N. Mendes, M. Jakob, A. García-Silva, and C. Bizer. Dbpedia spotlight: shedding light on the web of documents. In Proc. of the 7th International Conference on Semantic Systems, I-Semantics ’11, pages 1–8, New York, NY, USA, 2011. ACM. [11] B. Pereira Nunes, S. Dietze, M. A. Casanova, R. Kawase, B. Fetahu, and W. Nejdl. Combining a co-occurrence-based and a semantic measure for entity linking. In ESWC, 2013 (to appear). [12] B. Pereira Nunes, R. Kawase, S. Dietze, D. Taibi, M. A. Casanova, and W. Nejdl. Can entities be friends? In G. Rizzo, P. Mendes, E. Charton, S. Hellmann, and A. Kalyanpur, editors, Proc. of the Web of Linked Entities Workshop in conjuction with the 11th International Semantic Web Conference, volume 906 of CEUR-WS.org, pages 45–57, Nov. 2012. [13] M. Siadaty, D. Gašević, J. Jovanović, N. Milikić, Z. Jeremić, L. Ali, A. Giljanović, and M. Hatala. Learn-b: a social analytics-enabled tool for self-regulated workplace learning. In Proc. of the 2nd International Conference on Learning Analytics and Knowledge, LAK ’12, pages 115–119, New York, NY, USA, 2012. ACM. [14] C. van Rijsbergen, S. Robertson, and M. Porter. New models in probabilistic information retrieval. 1980.", "title": "" }, { "docid": "cf26c4f612a23ec26b284a6b243de7f4", "text": "Grit-perseverance and passion for long-term goals-has been shown to be a significant predictor of academic success, even after controlling for other personality factors. Here, for the first time, we use a U.K.-representative sample and a genetically sensitive design to unpack the etiology of Grit and its prediction of academic achievement in comparison to well-established personality traits. For 4,642 16-year-olds (2,321 twin pairs), we used the Grit-S scale (perseverance of effort and consistency of interest), along with the Big Five personality traits, to predict grades on the General Certificate of Secondary Education (GCSE) exams, which are administered U.K.-wide at the end of compulsory education. Twin analyses of Grit perseverance yielded a heritability estimate of 37% (20% for consistency of interest) and no evidence for shared environmental influence. Personality, primarily conscientiousness, predicts about 6% of the variance in GCSE grades, but Grit adds little to this prediction. 
Moreover, multivariate twin analyses showed that roughly two-thirds of the GCSE prediction is mediated genetically. Grit perseverance of effort and Big Five conscientiousness are to a large extent the same trait both phenotypically (r = 0.53) and genetically (genetic correlation = 0.86). We conclude that the etiology of Grit is highly similar to other personality traits, not only in showing substantial genetic influence but also in showing no influence of shared environmental factors. Personality significantly predicts academic achievement, but Grit adds little phenotypically or genetically to the prediction of academic achievement beyond traditional personality factors, especially conscientiousness.", "title": "" }, { "docid": "997993e389cdb1e40714e20b96927890", "text": "Developer support forums are becoming more popular than ever. Crowdsourced knowledge is an essential resource for many developers yet it can raise concerns about the quality of the shared content. Most existing research efforts address the quality of answers posted by Q&A community members. In this paper, we explore the quality of questions and propose a method of predicting the score of questions on Stack Overflow based on sixteen factors related to questions' format, content and interactions that occur in the post. We performed an extensive investigation to understand the relationship between the factors and the scores of questions. The multiple regression analysis shows that the question's length of the code, accepted answer score, number of tags and the count of views, comments and answers are statistically significantly associated with the scores of questions. Our findings can offer insights to community-based Q&A sites for improving the content of the shared knowledge.", "title": "" }, { "docid": "80947cea68851bc522d5ebf8a74e28ab", "text": "Advertising is key to the business model of many online services. Personalization aims to make ads more relevant for users and more effective for advertisers. However, relatively few studies into user attitudes towards personalized ads are available. We present a San Francisco Bay Area survey (N=296) and in-depth interviews (N=24) with teens and adults. People are divided and often either (strongly) agreed or disagreed about utility or invasiveness of personalized ads and associated data collection. Mobile ads were reported to be less relevant than those on desktop. Participants explained ad personalization based on their personal previous behaviors and guesses about demographic targeting. We describe both metrics improvements as well as opportunities for improving online advertising by focusing on positive ad interactions reported by our participants, such as personalization focused not just on product categories but specific brands and styles, awareness of life events, and situations in which ads were useful or even inspirational.", "title": "" }, { "docid": "1aaacf3d7d6311a118581d836f78d142", "text": "One of the most powerful features of SQL is the use of nested queries. Most research work on the optimization of nested queries focuses on aggregate subqueries. However, the solutions proposed for non-aggregate subqueries are still limited, especially for queries having multiple subqueries and null values. In this paper, we show that existing approaches to queries containing non-aggregate subqueries proposed in the literature (including rewrites) are not adequate. 
We then propose a new efficient approach, the nested relational approach, based on the nested relational algebra. Our approach directly unnests non-aggregate subqueries using hash joins, and treats all subqueries in a uniform manner, being able to deal with nested queries of any type and any level. We report on experimental work that confirms that existing approaches have difficulties dealing with non-aggregate subqueries, and that our approach offers better performance. We also discuss some possibilities for algebraic optimization and the issue of integrating our approach in a relational database system.", "title": "" }, { "docid": "c863d82ae2b56202d333ffa5bef5dd59", "text": "We present an algorithm for finding landmarks along a manifold. These landmarks provide a small set of locations spaced out along the manifold such that they capture the low-dimensional nonlinear structure of the data embedded in the high-dimensional space. The approach does not select points directly from the dataset, but instead we optimize each landmark by moving along the continuous manifold space (as approximated by the data) according to the gradient of an objective function. We borrow ideas from active learning with Gaussian processes to define the objective, which has the property that a new landmark is “repelled” by those currently selected, allowing for exploration of the manifold. We derive a stochastic algorithm for learning with large datasets and show results on several datasets, including the Million Song Dataset and articles from the New York Times.", "title": "" }, { "docid": "288377464cc80eef5c669e5821e3b2b3", "text": "For a long time, the human genome was considered an intrinsically stable entity; however, it is currently known that our human genome contains many unstable elements consisting of tandem repeat elements, mainly Short tandem repeats (STR), also known as microsatellites or Simple sequence repeats (SSR) (Ellegren, 2000). These sequences involve a repetitive unit of 1-6 bp, forming series with lengths from two to several thousand nucleotides. STR are widely found in proand eukaryotes, including humans. They appear scattered more or less evenly throughout the human genome, accounting for ca. 3% of the entire genome (Sharma et al., 2007). STR are polymorphic but stable in general population; however, repeats can become unstable during DNA replication, resulting in mitotic or meiotic contractions or expansions. STR instability is an important and unique form of mutation that is linked to >40 neurological, neurodegenerative, and neuromuscular disorders (Pearson et al., 2005). In particular, abnormal expansion of trinucleotide repeats (CTG)n, (CGG)n, (CCG)n, (GAA)n, and (CAG)n have been associated with different diseases such as fragile X syndrome, Huntington disease (HD), Dentatorubral-pallidoluysian atrophy (DRPLA), Friedreich ataxia (FA), diverse Spinocerebellar ataxias (SCA), and Myotonic dystrophy type 1 (DM1).", "title": "" }, { "docid": "90b913e3857625f3237ff7a47f675fbb", "text": "A new approach for the design of UWB hairpin-comb filters is presented. The filters can be designed to possess broad upper stopband characteristics by controlling the overall size of their resonators. 
The measured frequency characteristics of implemented UWB filters show potential first spurious passbands centered at about six times the fundamental passband center frequencies.", "title": "" }, { "docid": "f9c37f460fc0a4e7af577ab2cbe7045b", "text": "Declines in various cognitive abilities, particularly executive control functions, are observed in older adults. An important goal of cognitive training is to slow or reverse these age-related declines. However, opinion is divided in the literature regarding whether cognitive training can engender transfer to a variety of cognitive skills in older adults. In the current study, the authors trained older adults in a real-time strategy video game for 23.5 hr in an effort to improve their executive functions. A battery of cognitive tasks, including tasks of executive control and visuospatial skills, were assessed before, during, and after video-game training. The trainees improved significantly in the measures of game performance. They also improved significantly more than the control participants in executive control functions, such as task switching, working memory, visual short-term memory, and reasoning. Individual differences in changes in game performance were correlated with improvements in task switching. The study has implications for the enhancement of executive control processes of older adults.", "title": "" }, { "docid": "bac5b36d7da7199c1bb4815fa0d5f7de", "text": "During quadrupedal trotting, diagonal pairs of limbs are set down in unison and exert forces on the ground simultaneously. Ground-reaction forces on individual limbs of trotting dogs were measured separately using a series of four force platforms. Vertical and fore-aft impulses were determined for each limb from the force/time recordings. When mean fore-aft acceleration of the body was zero in a given trotting step (steady state), the fraction of vertical impulse on the forelimb was equal to the fraction of body weight supported by the forelimbs during standing (approximately 60 %). When dogs accelerated or decelerated during a trotting step, the vertical impulse was redistributed to the hindlimb or forelimb, respectively. This redistribution of the vertical impulse is due to a moment exerted about the pitch axis of the body by fore-aft accelerating and decelerating forces. Vertical forces exerted by the forelimb and hindlimb resist this pitching moment, providing stability during fore-aft acceleration and deceleration.", "title": "" }, { "docid": "5eb1aa594c3c6210f029b5bbf6acc599", "text": "Intestinal nematodes affecting dogs, i.e. roundworms, hookworms and whipworms, have a relevant health-risk impact for animals and, for most of them, for human beings. Both dogs and humans are typically infected by ingesting infective stages, (i.e. larvated eggs or larvae) present in the environment. The existence of a high rate of soil and grass contamination with infective parasitic elements has been demonstrated worldwide in leisure, recreational, public and urban areas, i.e. parks, green areas, bicycle paths, city squares, playgrounds, sandpits, beaches. 
This review discusses the epidemiological and sanitary importance of faecal pollution with canine intestinal parasites in urban environments and the integrated approaches useful to minimize the risk of infection in different settings.", "title": "" }, { "docid": "b52f9f47b972e797f11029111f5200b3", "text": "Sentiment lexicons have been leveraged as a useful source of features for sentiment analysis models, leading to the state-of-the-art accuracies. On the other hand, most existing methods use sentiment lexicons without considering context, typically taking the count, sum of strength, or maximum sentiment scores over the whole input. We propose a context-sensitive lexicon-based method based on a simple weighted-sum model, using a recurrent neural network to learn the sentiments strength, intensification and negation of lexicon sentiments in composing the sentiment value of sentences. Results show that our model can not only learn such operation details, but also give significant improvements over state-of-the-art recurrent neural network baselines without lexical features, achieving the best results on a Twitter benchmark.", "title": "" }, { "docid": "472f59fd9017e3c03650619c4f0201f3", "text": "Software Defined Networking (SDN) introduces a new communication network management paradigm and has gained much attention from academia and industry. However, the centralized nature of SDN is a potential vulnerability to the system since attackers may launch denial of services (DoS) attacks against the controller. Existing solutions limit requests rate to the controller by dropping overflowed requests, but they also drop legitimate requests to the controller. To address this problem, we propose FlowRanger, a buffer prioritizing solution for controllers to handle routing requests based on their likelihood to be attacking requests, which derives the trust values of the requesting sources. Based on their trust values, FlowRanger classifies routing requests into multiple buffer queues with different priorities. Thus, attacking requests are served with a lower priority than regular requests. Our simulation results demonstrates that FlowRanger can significantly enhance the request serving rate of regular users under DoS attacks against the controller. To the best of our knowledge, our work is the first solution to battle against controller DoS attacks on the controller side.", "title": "" }, { "docid": "1967de1be0b095b4a59a5bb0fdc403c0", "text": "As the popularity of content sharing websites has increased, they have become targets for spam, phishing and the distribution of malware. On YouTube, the facility for users to post comments can be used by spam campaigns to direct unsuspecting users to malicious third-party websites. In this paper, we demonstrate how such campaigns can be tracked over time using network motif profiling, i.e. by tracking counts of indicative network motifs. By considering all motifs of up to five nodes, we identify discriminating motifs that reveal two distinctly different spam campaign strategies, and present an evaluation that tracks two corresponding active campaigns.", "title": "" }, { "docid": "8877d6753d6b7cd39ba36c074ca56b00", "text": "Perhaps the most fundamental application of affective computing will be Human-Computer Interaction (HCI) in which the computer should have the ability to detect and track the user's affective states, and make corresponding feedback. The human multi-sensor affect system defines the expectation of multimodal affect analyzer. 
In this paper, we present our efforts toward audio-visual HCI-related affect recognition. With HCI applications in mind, we take into account some special affective states which indicate users' cognitive/motivational states. Facing the fact that a facial expression is influenced by both an affective state and speech content, we apply a smoothing method to extract the information of the affective state from facial features. In our fusion stage, a voting method is applied to combine audio and visual modalities so that the final affect recognition accuracy is greatly improved. We test our bimodal affect recognition approach on 38 subjects with 11 HCI-related affect states. The extensive experimental results show that the average person-dependent affect recognition accuracy is almost 90% for our bimodal fusion.", "title": "" }, { "docid": "6dd1df4e520f5858d48db9860efb63a7", "text": "This paper proposes single-phase direct pulsewidth modulation (PWM) buck-, boost-, and buck-boost-type ac-ac converters. The proposed converters are implemented with a series-connected freewheeling diode and MOSFET pair, which allows to minimize the switching and conduction losses of the semiconductor devices and resolves the reverse-recovery problem of body diode of MOSFET. The proposed converters are highly reliable because they can solve the shoot-through and dead-time problems of traditional ac-ac converters without voltage/current sensing module, lossy resistor-capacitor (RC) snubbers, or bulky coupled inductors. In addition, they can achieve high obtainable voltage gain and also produce output voltage waveforms of good quality because they do not use lossy snubbers. Unlike the recently developed switching cell (SC) ac-ac converters, the proposed ac-ac converters have no circulating current and do not require bulky coupled inductors; therefore, the total losses, current stresses, and magnetic volume are reduced and efficiency is improved. Detailed analysis and experimental results are provided to validate the novelty and merit of the proposed converters.", "title": "" }, { "docid": "becbcb6ca7ac87a3e43dbc65748b258a", "text": "We present Mean Box Pooling, a novel visual representation that pools over CNN representations of a large number, highly overlapping object proposals. We show that such representation together with nCCA, a successful multimodal embedding technique, achieves state-of-the-art performance on the Visual Madlibs task. Moreover, inspired by the nCCA’s objective function, we extend classical CNN+LSTM approach to train the network by directly maximizing the similarity between the internal representation of the deep learning architecture and candidate answers. Again, such approach achieves a significant improvement over the prior work that also uses CNN+LSTM approach on Visual Madlibs.", "title": "" } ]
scidocsrr
54739b925463523a5fa7e2294e6749a3
Ten years of a model of aesthetic appreciation and aesthetic judgments: The aesthetic episode - Developments and challenges in empirical aesthetics.
[ { "docid": "78c3573511176ba63e2cf727e09c7eb4", "text": "Human aesthetic preference in the visual domain is reviewed from definitional, methodological, empirical, and theoretical perspectives. Aesthetic science is distinguished from the perception of art and from philosophical treatments of aesthetics. The strengths and weaknesses of important behavioral techniques are presented and discussed, including two-alternative forced-choice, rank order, subjective rating, production/adjustment, indirect, and other tasks. Major findings are reviewed about preferences for colors (single colors, color combinations, and color harmony), spatial structure (low-level spatial properties, shape properties, and spatial composition within a frame), and individual differences in both color and spatial structure. Major theoretical accounts of aesthetic response are outlined and evaluated, including explanations in terms of mere exposure effects, arousal dynamics, categorical prototypes, ecological factors, perceptual and conceptual fluency, and the interaction of multiple components. The results of the review support the conclusion that aesthetic response can be studied rigorously and meaningfully within the framework of scientific psychology.", "title": "" } ]
[ { "docid": "396f6b6c09e88ca8e9e47022f1ae195b", "text": "Generative Adversarial Network (GAN) and its variants have recently attracted intensive research interests due to their elegant theoretical foundation and excellent empirical performance as generative models. These tools provide a promising direction in the studies where data availability is limited. One common issue in GANs is that the density of the learned generative distribution could concentrate on the training data points, meaning that they can easily remember training samples due to the high model complexity of deep networks. This becomes a major concern when GANs are applied to private or sensitive data such as patient medical records, and the concentration of distribution may divulge critical patient information. To address this issue, in this paper we propose a differentially private GAN (DPGAN) model, in which we achieve differential privacy in GANs by adding carefully designed noise to gradients during the learning procedure. We provide rigorous proof for the privacy guarantee, as well as comprehensive empirical evidence to support our analysis, where we demonstrate that our method can generate high quality data points at a reasonable privacy level.", "title": "" }, { "docid": "57fbb5bf0e7fe4b8be21fae87f027572", "text": "Android and iOS devices are leading the mobile device market. While various user experiences have been reported from the general user community about their differences, such as battery lifetime, display, and touchpad control, few in-depth reports can be found about their comparative performance when receiving the increasingly popular Internet streaming services. Today, video traffic starts to dominate the Internet mobile data traffic. In this work, focusing on Internet streaming accesses, we set to analyze and compare the performance when Android and iOS devices are accessing Internet streaming services. Starting from the analysis of a server-side workload collected from a top mobile streaming service provider, we find Android and iOS use different approaches to request media content, leading to different amounts of received traffic on Android and iOS devices when a same video clip is accessed. Further studies on the client side show that different data requesting approaches (standard HTTP request vs. HTTP range request) and different buffer management methods (static vs. dynamic) are used in Android and iOS mediaplayers, and their interplay has led to our observations. Our empirical results and analysis provide some insights for the current Android and iOS users, streaming service providers, and mobile mediaplayer developers.", "title": "" }, { "docid": "85f67ab0e1adad72bbe6417d67fd4c81", "text": "Data warehouses are used to store large amounts of data. This data is often used for On-Line Analytical Processing (OLAP). Short response times are essential for on-line decision support. Common approaches to reach this goal in read-mostly environments are the precomputation of materialized views and the use of index structures. In this paper, a framework is presented to evaluate different index structures analytically depending on nine parameters for the use in a data warehouse environment. The framework is applied to four different index structures to evaluate which structure works best for range queries. We show that all parameters influence the performance. 
Additionally, we show why bitmap index structures use modern disks better than traditional tree structures and why bitmaps will supplant the tree based index structures in the future.", "title": "" }, { "docid": "619c905f7ef5fa0314177b109e0ec0e6", "text": "The aim of this review is to systematically summarise qualitative evidence about work-based learning in health care organisations as experienced by nursing staff. Work-based learning is understood as informal learning that occurs inside the work community in the interaction between employees. Studies for this review were searched for in the CINAHL, PubMed, Scopus and ABI Inform ProQuest databases for the period 2000-2015. Nine original studies met the inclusion criteria. After the critical appraisal by two researchers, all nine studies were selected for the review. The findings of the original studies were aggregated, and four statements were prepared, to be utilised in clinical work and decision-making. The statements concerned the following issues: (1) the culture of the work community; (2) the physical structures, spaces and duties of the work unit; (3) management; and (4) interpersonal relations. Understanding the nurses' experiences of work-based learning and factors behind these experiences provides an opportunity to influence the challenges of learning in the demanding context of health care organisations.", "title": "" }, { "docid": "d135e72c317ea28a64a187b17541f773", "text": "Automatic face recognition (AFR) is an area with immense practical potential which includes a wide range of commercial and law enforcement applications, and it continues to be one of the most active research areas of computer vision. Even after over three decades of intense research, the state-of-the-art in AFR continues to improve, benefiting from advances in a range of different fields including image processing, pattern recognition, computer graphics and physiology. However, systems based on visible spectrum images continue to face challenges in the presence of illumination, pose and expression changes, as well as facial disguises, all of which can significantly decrease their accuracy. Amongst various approaches which have been proposed in an attempt to overcome these limitations, the use of infrared (IR) imaging has emerged as a particularly promising research direction. This paper presents a comprehensive and timely review of the literature on this subject.", "title": "" }, { "docid": "689f7aad97d36f71e43e843a331fcf5d", "text": "Dimension-reducing feature extraction neural network techniques which also preserve neighbourhood relationships in data have traditionally been the exclusive domain of Kohonen self organising maps. Recently, we introduced a novel dimension-reducing feature extraction process, which is also topographic, based upon a Radial Basis Function architecture. It has been observed that the generalisation performance of the system is broadly insensitive to model order complexity and other smoothing factors such as the kernel widths, contrary to intuition derived from supervised neural network models. In this paper we provide an effective demonstration of this property and give a theoretical justification for the apparent 'self-regularising' behaviour of the 'NEUROSCALE' architecture. 
1 'NeuroScale': A Feed-forward Neural Network Topographic Transformation Recently an important class of topographic neural network based feature extraction approaches, which can be related to the traditional statistical methods of Sammon Mappings (Sammon, 1969) and Multidimensional Scaling (Kruskal, 1964), have been introduced (Mao and Jain, 1995; Lowe, 1993; Webb, 1995; Lowe and Tipping, 1996). These novel alternatives to Kohonen-like approaches for topographic feature extraction possess several interesting properties. For instance, the NEUROSCALE architecture has the empirically observed property that the generalisation performance does not seem to depend critically on model order complexity, contrary to intuition based upon knowledge of its supervised counterparts. This paper presents evidence for their 'self-regularising' behaviour and provides an explanation in terms of the curvature of the trained models. We now provide a brief introduction to the NEUROSCALE philosophy of nonlinear topographic feature extraction. Further details may be found in (Lowe, 1993; Lowe and Tipping, 1996). We seek a dimension-reducing, topographic transformation of data for the purposes of visualisation and analysis. By 'topographic', we imply that the geometric structure of the data be optimally preserved in the transformation, and the embodiment of this constraint is that the inter-point distances in the feature space should correspond as closely as possible to those distances in the data space. The implementation of this principle by a neural network is very simple. A Radial Basis Function (RBF) neural network is utilised to predict the coordinates of the data point in the transformed feature space. The locations of the feature points are indirectly determined by adjusting the weights of the network. The transformation is determined by optimising the network parameters in order to minimise a suitable error measure that embodies the topographic principle. The specific details of this alternative approach are as follows. Given an m-dimensional input space of N data points x_q, an n-dimensional feature space of points y_q is generated such that the relative positions of the feature space points minimise the error, or 'STRESS', term: E = \sum_{p}^{N} \sum_{q>p} (d^*_{qp} - d_{qp})^2, (1) where the d^*_{qp} are the inter-point Euclidean distances in the data space: d^*_{qp} = \sqrt{(x_q - x_p)^T (x_q - x_p)}, and the d_{qp} are the corresponding distances in the feature space: d_{qp} = \sqrt{(y_q - y_p)^T (y_q - y_p)}. The points y are generated by the RBF, given the data points as input. That is, y_q = f(x_q; W), where f is the nonlinear transformation effected by the RBF with parameters (weights and any kernel smoothing factors) W. The distances in the feature space may thus be given by d_{qp} = ||f(x_q) - f(x_p)||, and so more explicitly by", "title": "" }, { "docid": "5e240ad1d257a90c0ca414ce8e7e0949", "text": "Improving Cloud Security using Secure Enclaves by Jethro Gideon Beekman Doctor of Philosophy in Engineering – Electrical Engineering and Computer Sciences University of California, Berkeley Professor David Wagner, Chair Internet services can provide a wealth of functionality, yet their usage raises privacy, security and integrity concerns for users. This is caused by a lack of guarantees about what is happening on the server side. As a worst case scenario, the service might be subjected to an insider attack. This dissertation describes the unalterable secure service concept for trustworthy cloud computing.
Secure services are a powerful abstraction that enables viewing the cloud as a true extension of local computing resources. Secure services combine the security benefits one gets locally with the manageability and availability of the distributed cloud. Secure services are implemented using secure enclaves. Remote attestation of the server is used to obtain guarantees about the programming of the service. This dissertation addresses concerns related to using secure enclaves such as providing data freshness and distributing identity information. Certificate Transparency is augmented to distribute information about which services exist and what they do. All combined, this creates a platform that allows legacy clients to obtain security guarantees about Internet services.", "title": "" }, { "docid": "4d040791f63af5e2ff13ff2b705dc376", "text": "The frequency and severity of forest fires, coupled with changes in spatial and temporal precipitation and temperature patterns, are likely to severely affect the characteristics of forest and permafrost patterns in boreal eco-regions. Forest fires, however, are also an ecological factor in how forest ecosystems form and function, as they affect the rate and characteristics of tree recruitment. A better understanding of fire regimes and forest recovery patterns in different environmental and climatic conditions will improve the management of sustainable forests by facilitating the process of forest resilience. Remote sensing has been identified as an effective tool for preventing and monitoring forest fires, as well as being a potential tool for understanding how forest ecosystems respond to them. However, a number of challenges remain before remote sensing practitioners will be able to better understand the effects of forest fires and how vegetation responds afterward. This article attempts to provide a comprehensive review of current research with respect to remotely sensed data and methods used to model post-fire effects and forest recovery patterns in boreal forest regions. The review reveals that remote sensing-based monitoring of post-fire effects and forest recovery patterns in boreal forest regions is not only limited by the gaps in both field data and remotely sensed data, but also the complexity of far-northern fire regimes, climatic conditions and environmental conditions. We expect that the integration of different remotely sensed data coupled with field campaigns can provide an important data source to support the monitoring of post-fire effects and forest recovery patterns. Additionally, the variation and stratification of pre- and post-fire vegetation and environmental conditions should be considered to achieve a reasonable, operational model for monitoring post-fire effects and forest patterns in boreal regions.", "title": "" }, { "docid": "807e008d5c7339706f8cfe71e9ced7ba", "text": "Current competitive challenges induced by globalization and advances in information technology have forced companies to focus on managing customer relationships, and in particular customer satisfaction, in order to efficiently maximize revenues. This paper reports exploratory research based on a mail survey addressed to the largest 1,000 Greek organizations.
The objectives of the research were: to investigate the extent of the usage of customer- and market-related knowledge management (KM) instruments and customer relationship management (CRM) systems by Greek organizations and their relationship with demographic and organizational variables; to investigate whether enterprises systematically carry out customer satisfaction and complaining behavior research; and to examine the impact of the type of the information system used and managers’ attitudes towards customer KM practices. In addition, a conceptual model of CRM development stages is proposed. The findings of the survey show that about half of the organizations of the sample do not adopt any CRM philosophy. The remaining organizations employ instruments to conduct customer satisfaction and other customer-related research. However, according to the proposed model, they are positioned in the first, the preliminary CRM development stage. The findings also suggest that managers hold positive attitudes towards CRM and that there is no significant relationship between the type of the transactional information system used and the extent to which customer satisfaction research is performed by the organizations. The paper concludes by discussing the survey findings and proposing future", "title": "" }, { "docid": "4ed74450320dfef4156013292c1d2cbb", "text": "This paper describes the decisions by which the Association for Computing Machinery integrated good features from the Los Alamos e-print (physics) archive and from Cornell University's Networked Computer Science Technical Reference Library to form their own open, permanent, online “computing research repository” (CoRR). Submitted papers are not refereed and anyone can browse and extract CoRR material for free, so CoRR's eventual success could revolutionize computer science publishing. But several serious challenges remain: some journals forbid online preprints, the CoRR user interface is cumbersome, submissions are only self-indexed (no professional library staff manages the archive), and long-term funding is uncertain.", "title": "" }, { "docid": "0105070bd23400083850627b1603af0b", "text": "This research covers an endeavor by the author on the usage of an automated vision and navigation framework; the research is conducted by utilizing a Kinect sensor in a minimal-effort framework for exploration purposes in the domain of robot navigation. For this framework, GMapping (a highly efficient Rao-Blackwellized particle filter to learn grid maps from laser range data) parameters have been optimized to improve the accuracy of the map generation and the laser scan. With the use of Robot Operating System (ROS), the open-source GMapping package was utilized as a basis for map generation and Simultaneous Localization and Mapping (SLAM). Out of the many different map generation techniques, the tele-operation used is the interactive marker, which controls the TurtleBot 2 movements via RVIZ (3D visualization tool for ROS). Test results obtained with the multipurpose robot in artificial and real environments demonstrate the advantages of the proposed strategy. From experiments, it is found that the Kinect sensor produces a more accurate map compared to non-filtered laser range finder data, which is excellent since the price of a Kinect sensor is much cheaper than a laser range finder.
An expansion of experimental results was likewise done to test the performance of the portable robot frontier exploring in an obscure environment while performing SLAM alongside the proposed technique.", "title": "" }, { "docid": "e3299737a0fb3cd3c9433f462565b278", "text": "BACKGROUND\nMore than two-thirds of pregnant women experience low-back pain and almost one-fifth experience pelvic pain. The two conditions may occur separately or together (low-back and pelvic pain) and typically increase with advancing pregnancy, interfering with work, daily activities and sleep.\n\n\nOBJECTIVES\nTo update the evidence assessing the effects of any intervention used to prevent and treat low-back pain, pelvic pain or both during pregnancy.\n\n\nSEARCH METHODS\nWe searched the Cochrane Pregnancy and Childbirth (to 19 January 2015), and the Cochrane Back Review Groups' (to 19 January 2015) Trials Registers, identified relevant studies and reviews and checked their reference lists.\n\n\nSELECTION CRITERIA\nRandomised controlled trials (RCTs) of any treatment, or combination of treatments, to prevent or reduce the incidence or severity of low-back pain, pelvic pain or both, related functional disability, sick leave and adverse effects during pregnancy.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently assessed trials for inclusion and risk of bias, extracted data and checked them for accuracy.\n\n\nMAIN RESULTS\nWe included 34 RCTs examining 5121 pregnant women, aged 16 to 45 years and, when reported, from 12 to 38 weeks' gestation. Fifteen RCTs examined women with low-back pain (participants = 1847); six examined pelvic pain (participants = 889); and 13 examined women with both low-back and pelvic pain (participants = 2385). Two studies also investigated low-back pain prevention and four, low-back and pelvic pain prevention. Diagnoses ranged from self-reported symptoms to clinicians' interpretation of specific tests. All interventions were added to usual prenatal care and, unless noted, were compared with usual prenatal care. The quality of the evidence ranged from moderate to low, raising concerns about the confidence we could put in the estimates of effect. For low-back painResults from meta-analyses provided low-quality evidence (study design limitations, inconsistency) that any land-based exercise significantly reduced pain (standardised mean difference (SMD) -0.64; 95% confidence interval (CI) -1.03 to -0.25; participants = 645; studies = seven) and functional disability (SMD -0.56; 95% CI -0.89 to -0.23; participants = 146; studies = two). Low-quality evidence (study design limitations, imprecision) also suggested no significant differences in the number of women reporting low-back pain between group exercise, added to information about managing pain, versus usual prenatal care (risk ratio (RR) 0.97; 95% CI 0.80 to 1.17; participants = 374; studies = two). For pelvic painResults from a meta-analysis provided low-quality evidence (study design limitations, imprecision) of no significant difference in the number of women reporting pelvic pain between group exercise, added to information about managing pain, and usual prenatal care (RR 0.97; 95% CI 0.77 to 1.23; participants = 374; studies = two). 
For low-back and pelvic painResults from meta-analyses provided moderate-quality evidence (study design limitations) that: an eight- to 12-week exercise program reduced the number of women who reported low-back and pelvic pain (RR 0.66; 95% CI 0.45 to 0.97; participants = 1176; studies = four); land-based exercise, in a variety of formats, significantly reduced low-back and pelvic pain-related sick leave (RR 0.76; 95% CI 0.62 to 0.94; participants = 1062; studies = two).The results from a number of individual studies, incorporating various other interventions, could not be pooled due to clinical heterogeneity. There was moderate-quality evidence (study design limitations or imprecision) from individual studies suggesting that osteomanipulative therapy significantly reduced low-back pain and functional disability, and acupuncture or craniosacral therapy improved pelvic pain more than usual prenatal care. Evidence from individual studies was largely of low quality (study design limitations, imprecision), and suggested that pain and functional disability, but not sick leave, were significantly reduced following a multi-modal intervention (manual therapy, exercise and education) for low-back and pelvic pain.When reported, adverse effects were minor and transient.\n\n\nAUTHORS' CONCLUSIONS\nThere is low-quality evidence that exercise (any exercise on land or in water), may reduce pregnancy-related low-back pain and moderate- to low-quality evidence suggesting that any exercise improves functional disability and reduces sick leave more than usual prenatal care. Evidence from single studies suggests that acupuncture or craniosacral therapy improves pregnancy-related pelvic pain, and osteomanipulative therapy or a multi-modal intervention (manual therapy, exercise and education) may also be of benefit.Clinical heterogeneity precluded pooling of results in many cases. Statistical heterogeneity was substantial in all but three meta-analyses, which did not improve following sensitivity analyses. Publication bias and selective reporting cannot be ruled out.Further evidence is very likely to have an important impact on our confidence in the estimates of effect and change the estimates. Studies would benefit from the introduction of an agreed classification system that can be used to categorise women according to their presenting symptoms, so that treatment can be tailored accordingly.", "title": "" }, { "docid": "c87cc578b4a74bae4ea1e0d0d68a6038", "text": "Human-Computer Interaction (HCI) exists ubiquitously in our daily lives. It is usually achieved by using a physical controller such as a mouse, keyboard or touch screen. It hinders Natural User Interface (NUI) as there is a strong barrier between the user and computer. There are various hand tracking systems available on the market, but they are complex and expensive. In this paper, we present the design and development of a robust marker-less hand/finger tracking and gesture recognition system using low-cost hardware. We propose a simple but efficient method that allows robust and fast hand tracking despite complex background and motion blur. Our system is able to translate the detected hands or gestures into different functional inputs and interfaces with other applications via several methods. It enables intuitive HCI and interactive motion gaming. We also developed sample applications that can utilize the inputs from the hand tracking system. 
Our results show that an intuitive HCI and motion gaming system can be achieved with minimum hardware requirements.", "title": "" }, { "docid": "505aff71acf5469dc718b8168de3e311", "text": "We propose two suffix array inspired full-text indexes. One, called SAhash, augments the suffix array with a hash table to speed up pattern searches due to significantly narrowed search interval before the binary search phase. The other, called FBCSA, is a compact data structure, similar to Mäkinen’s compact suffix array, but working on fixed sized blocks. Experiments on the Pizza & Chili 200MB datasets show that SA-hash is about 2–3 times faster in pattern searches (counts) than the standard suffix array, for the price of requiring 0.2n− 1.1n bytes of extra space, where n is the text length, and setting a minimum pattern length. FBCSA is relatively fast in single cell accesses (a few times faster than related indexes at about the same or better compression), but not competitive if many consecutive cells are to be extracted. Still, for the task of extracting, e.g., 10 successive cells its time-space relation remains attractive.", "title": "" }, { "docid": "efd2843175ad0b860ad1607f337addc5", "text": "We demonstrate the usefulness of the uniform resource locator (URL) alone in performing web page classification. This approach is faster than typical web page classification, as the pages do not have to be fetched and analyzed. Our approach segments the URL into meaningful chunks and adds component, sequential and orthographic features to model salient patterns. The resulting features are used in supervised maximum entropy modeling. We analyze our approach's effectiveness on two standardized domains. Our results show that in certain scenarios, URL-based methods approach the performance of current state-of-the-art full-text and link-based methods.", "title": "" }, { "docid": "ab15d55e8308843c526aed0c32db1cb2", "text": "ix Chapter 1: Introduction 1 1.1 Knowledge Transfer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.2 Human-Robot Communication . . . . . . . . . . . . . . . . . . . . . . . 5 1.3 Life-Long Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 1.4 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 1.5 Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 Chapter 2: Background and Related Work 11 2.1 Manual Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 2.2 Task-Level Robot Control . . . . . . . . . . . . . . . . . . . . . . . . . . 12 2.3 Learning from Demonstration . . . . . . . . . . . . . . . . . . . . . . . . 13 2.3.1 Demonstration Approaches . . . . . . . . . . . . . . . . . . . . . 14 2.3.2 Policy Generation . . . . . . . . . . . . . . . . . . . . . . . . . . 15 2.4 Life-Long Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 2.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 Chapter 3: Learning from Demonstration 19 3.1 Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 3.2 Role of the Instructor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 3.3 Role of the Student . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 3.4 Knowledge Transfer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 3.4.1 Human-Robot Communication . . . . . . . . . . . . . . . . . . . 24 3.4.2 System Interaction . . . . . . . . . . . . . . . . . . . . . . . . . . 
28 3.5 Learning a Task Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30", "title": "" }, { "docid": "e5eb79b313dad91de1144cd0098cde15", "text": "Information Extraction aims to retrieve certain types of information from natural language text by processing them automatically. For example, an information extraction system might retrieve information about geopolitical indicators of countries from a set of web pages while ignoring other types of information. Ontology-based information extraction has recently emerged as a subfield of information extraction. Here, ontologies which provide formal and explicit specifications of conceptualizations play a crucial role in the information extraction process. Because of the use of ontologies, this field is related to knowledge representation and has the potential to assist the development of the Semantic Web. In this paper, we provide an introduction to ontology-based information extraction and review the details of different ontology-based information extraction systems developed so far. We attempt to identify a common architecture among these systems and classify them based on different factors, which leads to a better understanding on their operation. We also discuss the implementation details of these systems including the tools used by them and the metrics used to measure their performance. In addition, we attempt to identify the possible future directions for this field.", "title": "" }, { "docid": "f18833c40f6b15bb588eec3bbe52cdd4", "text": "Presented here is a cladistic analysis of the South American and some North American Camelidae. This analysis shows that Camelini and Lamini are monophyletic groups, as are the genera Palaeolama and Vicugna, while Hemiauchenia and Lama are paraphyletic. Some aspects of the migration and distribution of South American camelids are also discussed, confirming in part the propositions of other authors. According to the cladistic analysis and previous propositions, it is possible to infer that two Camelidae migration events occurred in America. In the first one, Hemiauchenia arrived in South America and, this was related to the speciation processes that originated Lama and Vicugna. In the second event, Palaeolama migrated from North America to the northern portion of South America. It is evident that there is a need for larger studies about fossil Camelidae, mainly regarding older ages and from the South American austral region. This is important to better undertand the geographic and temporal distribution of Camelidae and, thus, the biogeographic aspects after the Great American Biotic Interchange.", "title": "" }, { "docid": "de061c5692bf11876c03b9b5e7c944a0", "text": "The purpose of this article is to summarize several change theories and assumptions about the nature of change. The author shows how successful change can be encouraged and facilitated for long-term success. The article compares the characteristics of Lewin’s Three-Step Change Theory, Lippitt’s Phases of Change Theory, Prochaska and DiClemente’s Change Theory, Social Cognitive Theory, and the Theory of Reasoned Action and Planned Behavior to one another. Leading industry experts will need to continually review and provide new information relative to the change process and to our evolving society and culture. here are many change theories and some of the most widely recognized are briefly summarized in this article. The theories serve as a testimony to the fact that change is a real phenomenon. 
It can be observed and analyzed through various steps or phases. The theories have been conceptualized to answer the question, “How does successful change happen?” Lewin’s Three-Step Change Theory Kurt Lewin (1951) introduced the three-step change model. This social scientist views behavior as a dynamic balance of forces working in opposing directions. Driving forces facilitate change because they push employees in the desired direction. Restraining forces hinder change because they push employees in the opposite direction. Therefore, these forces must be analyzed and Lewin’s three-step model can help shift the balance in the direction of the planned change (http://www.csupomona.edu/~jvgrizzell/best_practices/bctheory.html). T INTERNATIONAL JOURNAL OF MNAGEMENT, BUSINESS, AND ADMINISTRATION 2_____________________________________________________________________________________ According to Lewin, the first step in the process of changing behavior is to unfreeze the existing situation or status quo. The status quo is considered the equilibrium state. Unfreezing is necessary to overcome the strains of individual resistance and group conformity. Unfreezing can be achieved by the use of three methods. First, increase the driving forces that direct behavior away from the existing situation or status quo. Second, decrease the restraining forces that negatively affect the movement from the existing equilibrium. Third, find a combination of the two methods listed above. Some activities that can assist in the unfreezing step include: motivate participants by preparing them for change, build trust and recognition for the need to change, and actively participate in recognizing problems and brainstorming solutions within a group (Robbins 564-65). Lewin’s second step in the process of changing behavior is movement. In this step, it is necessary to move the target system to a new level of equilibrium. Three actions that can assist in the movement step include: persuading employees to agree that the status quo is not beneficial to them and encouraging them to view the problem from a fresh perspective, work together on a quest for new, relevant information, and connect the views of the group to well-respected, powerful leaders that also support the change (http://www.csupomona.edu/~jvgrizzell/best_practices/bctheory.html). The third step of Lewin’s three-step change model is refreezing. This step needs to take place after the change has been implemented in order for it to be sustained or “stick” over time. It is high likely that the change will be short lived and the employees will revert to their old equilibrium (behaviors) if this step is not taken. It is the actual integration of the new values into the community values and traditions. The purpose of refreezing is to stabilize the new equilibrium resulting from the change by balancing both the driving and restraining forces. One action that can be used to implement Lewin’s third step is to reinforce new patterns and institutionalize them through formal and informal mechanisms including policies and procedures (Robbins 564-65). Therefore, Lewin’s model illustrates the effects of forces that either promote or inhibit change. Specifically, driving forces promote change while restraining forces oppose change. Hence, change will occur when the combined strength of one force is greater than the combined strength of the opposing set of forces (Robbins 564-65). 
Lippitt’s Phases of Change Theory Lippitt, Watson, and Westley (1958) extend Lewin’s Three-Step Change Theory. Lippitt, Watson, and Westley created a seven-step theory that focuses more on the role and responsibility of the change agent than on the evolution of the change itself. Information is continuously exchanged throughout the process. The seven steps are:", "title": "" }, { "docid": "bda04f2eaee74979d7684681041e19bd", "text": "In March of 2016, Google DeepMind's AlphaGo, a computer Go-playing program, defeated the reigning human world champion Go player, 4-1, a feat far more impressive than previous victories by computer programs in chess (IBM's Deep Blue) and Jeopardy (IBM's Watson). The main engine behind the program combines machine learning approaches with a technique called Monte Carlo tree search. Current versions of Monte Carlo tree search used in Go-playing algorithms are based on a version developed for games that traces its roots back to the adaptive multi-stage sampling simulation optimization algorithm for estimating value functions in finite-horizon Markov decision processes (MDPs) introduced by Chang et al. (2005), which was the first use of Upper Confidence Bounds (UCBs) for Monte Carlo simulation-based solution of MDPs. We review the main ideas in UCB-based Monte Carlo tree search by connecting it to simulation optimization through the use of two simple examples: decision trees and tic-tac-toe.", "title": "" } ]
scidocsrr
46b9ef0704e87a27b376720750fb1259
Predicting taxi demand at high spatial resolution: Approaching the limit of predictability
[ { "docid": "d0bb1b3fc36016b166eb9ed25cb7ee61", "text": "Informed driving is increasingly becoming a key feature for increasing the sustainability of taxi companies. The sensors that are installed in each vehicle are providing new opportunities for automatically discovering knowledge, which, in return, delivers information for real-time decision making. Intelligent transportation systems for taxi dispatching and for finding time-saving routes are already exploring these sensing data. This paper introduces a novel methodology for predicting the spatial distribution of taxi-passengers for a short-term time horizon using streaming data. First, the information was aggregated into a histogram time series. Then, three time-series forecasting techniques were combined to originate a prediction. Experimental tests were conducted using the online data that are transmitted by 441 vehicles of a fleet running in the city of Porto, Portugal. The results demonstrated that the proposed framework can provide effective insight into the spatiotemporal distribution of taxi-passenger demand for a 30-min horizon.", "title": "" }, { "docid": "b294ca2034fa4133e8f7091426242244", "text": "The development of a city gradually fosters different functional regions, such as educational areas and business districts. In this paper, we propose a framework (titled DRoF) that Discovers Regions of different Functions in a city using both human mobility among regions and points of interests (POIs) located in a region. Specifically, we segment a city into disjointed regions according to major roads, such as highways and urban express ways. We infer the functions of each region using a topic-based inference model, which regards a region as a document, a function as a topic, categories of POIs (e.g., restaurants and shopping malls) as metadata (like authors, affiliations, and key words), and human mobility patterns (when people reach/leave a region and where people come from and leave for) as words. As a result, a region is represented by a distribution of functions, and a function is featured by a distribution of mobility patterns. We further identify the intensity of each function in different locations. The results generated by our framework can benefit a variety of applications, including urban planning, location choosing for a business, and social recommendations. We evaluated our method using large-scale and real-world datasets, consisting of two POI datasets of Beijing (in 2010 and 2011) and two 3-month GPS trajectory datasets (representing human mobility) generated by over 12,000 taxicabs in Beijing in 2010 and 2011 respectively. The results justify the advantages of our approach over baseline methods solely using POIs or human mobility.", "title": "" } ]
[ { "docid": "c5122000c9d8736cecb4d24e6f56aab8", "text": "New credit cards containing Europay, MasterCard and Visa (EMV) chips for enhanced security used in-store purchases rather than online purchases have been adopted considerably. EMV supposedly protects the payment cards in such a way that the computer chip in a card referred to as chip-and-pin cards generate a unique one time code each time the card is used. The one time code is designed such that if it is copied or stolen from the merchant system or from the system terminal cannot be used to create a counterfeit copy of that card or counterfeit chip of the transaction. However, in spite of this design, EMV technology is not entirely foolproof from failure. In this paper we discuss the issues, failures and fraudulent cases associated with EMV Chip-And-Card technology.", "title": "" }, { "docid": "8688f904ff190f9434cf20c6fc0f7eb9", "text": "3-D shape analysis has attracted extensive research efforts in recent years, where the major challenge lies in designing an effective high-level 3-D shape feature. In this paper, we propose a multi-level 3-D shape feature extraction framework by using deep learning. The low-level 3-D shape descriptors are first encoded into geometric bag-of-words, from which middle-level patterns are discovered to explore geometric relationships among words. After that, high-level shape features are learned via deep belief networks, which are more discriminative for the tasks of shape classification and retrieval. Experiments on 3-D shape recognition and retrieval demonstrate the superior performance of the proposed method in comparison to the state-of-the-art methods.", "title": "" }, { "docid": "36e4260c43efca5a67f99e38e5dbbed8", "text": "The inherent compliance of soft fluidic actuators makes them attractive for use in wearable devices and soft robotics. Their flexible nature permits them to be used without traditional rotational or prismatic joints. Without these joints, however, measuring the motion of the actuators is challenging. Actuator-level sensors could improve the performance of continuum robots and robots with compliant or multi-degree-of-freedom joints. We make the reinforcing braid of a pneumatic artificial muscle (PAM or McKibben muscle) “smart” by weaving it from conductive insulated wires. These wires form a solenoid-like circuit with an inductance that more than doubles over the PAM contraction. The reinforcing and sensing fibers can be used to measure the contraction of a PAM actuator with a simple linear function of the measured inductance, whereas other proposed self-sensing techniques rely on the addition of special elastomers or transducers, the technique presented in this paper can be implemented without modifications of this kind. We present and experimentally validate two models for Smart Braid sensors based on the long solenoid approximation and the Neumann formula, respectively. We test a McKibben muscle made from a Smart Braid in quasi-static conditions with various end loads and in dynamic conditions. We also test the performance of the Smart Braid sensor alongside steel.", "title": "" }, { "docid": "1ca692464d5d7f4e61647bf728941519", "text": "During natural vision, the entire visual field is stimulated by images rich in spatiotemporal structure. Although many visual system studies restrict stimuli to the classical receptive field (CRF), it is known that costimulation of the CRF and the surrounding nonclassical receptive field (nCRF) increases neuronal response sparseness. 
The cellular and network mechanisms underlying increased response sparseness remain largely unexplored. Here we show that combined CRF + nCRF stimulation increases the sparseness, reliability, and precision of spiking and membrane potential responses in classical regular spiking (RS(C)) pyramidal neurons of cat primary visual cortex. Conversely, fast-spiking interneurons exhibit increased activity and decreased selectivity during CRF + nCRF stimulation. The increased sparseness and reliability of RS(C) neuron spiking is associated with increased inhibitory barrages and narrower visually evoked synaptic potentials. Our experimental observations were replicated with a simple computational model, suggesting that network interactions among neuronal subtypes ultimately sharpen recurrent excitation, producing specific and reliable visual responses.", "title": "" }, { "docid": "e444dcc97882005658aca256991e816e", "text": "The terms superordinate, hyponym, and subordinate designate the hierarchical taxonomic relationship of words. They also represent categories and concepts. This relationship is a subject of interest for anthropology, cognitive psychology, psycholinguistics, linguistic semantics, and cognitive linguistics. Taxonomic hierarchies are essentially classificatory systems, and they are supposed to reflect the way that speakers of a language categorize the world of experience. A well-formed taxonomy offers an orderly and efficient set of categories at different levels of specificity (Cruse 2000:180). However, the terms and levels of taxonomic hierarchy used in each discipline vary. This makes it difficult to carry out cross-disciplinary readings on the hierarchical taxonomy of words or categories, which act as an interface in these cognitive-based cross-disciplinary ventures. Not only words— terms and concepts differ but often the nature of the problem is compounded as some terms refer to differing word classes, categories and concepts at the same time. Moreover, the lexical relationship of terms among these lexical hierarchies is far from clear. As a result two lines of thinking can be drawn from the literature: (1) technical terms coined for the hierarchical relationship of words are conflicting and do not reflect reality or environment, and (2) the relationship among these hierarchies of word levels and the underlying principles followed to explain them are uncertain except that of inclusion.", "title": "" }, { "docid": "06ba81270357c9bcf1dd8f1871741537", "text": "The ability of a normal human listener to recognize objects in the environment from only the sounds they produce is extraordinarily robust with regard to characteristics of the acoustic environment and of other competing sound sources. In contrast, computer systems designed to recognize sound sources function precariously, breaking down whenever the target sound is degraded by reverberation, noise, or competing sounds. Robust listening requires extensive contextual knowledge, but the potential contribution of sound-source recognition to the process of auditory scene analysis has largely been neglected by researchers building computational models of the scene analysis process. This thesis proposes a theory of sound-source recognition, casting recognition as a process of gathering information to enable the listener to make inferences about objects in the environment or to predict their behavior. 
In order to explore the process, attention is restricted to isolated sounds produced by a small class of sound sources, the non-percussive orchestral musical instruments. Previous research on the perception and production of orchestral instrument sounds is reviewed from a vantage point based on the excitation and resonance structure of the sound-production process, revealing a set of perceptually salient acoustic features. A computer model of the recognition process is developed that is capable of “listening” to a recording of a musical instrument and classifying the instrument as one of 25 possibilities. The model is based on current models of signal processing in the human auditory system. It explicitly extracts salient acoustic features and uses a novel improvisational taxonomic architecture (based on simple statistical pattern-recognition techniques) to classify the sound source. The performance of the model is compared directly to that of skilled human listeners, using", "title": "" }, { "docid": "4fe39e3d2e7c04263e9015c773a755fb", "text": "This paper presents a novel approach to building natural language interface to databases (NLIDB) based on Computational Paninian Grammar (CPG). It uses two distinct stages of processing, namely, syntactic processing followed by semantic processing. Syntactic processing makes the processing more general and robust. CPG is a dependency framework in which the analysis is in terms of syntactico-semantic relations. The closeness of these relations makes semantic processing easier and more accurate. It also makes the systems more portable.", "title": "" }, { "docid": "9548bd2e37fdd42d09dc6828ac4675f9", "text": "Recent years have seen increasing interest in ranking elite athletes and teams in professional sports leagues, and in predicting the outcomes of games. In this work, we draw an analogy between this problem and one in the field of search engine optimization, namely, that of ranking webpages on the Internet. Motivated by the famous PageRank algorithm, our TeamRank methods define directed graphs of sports teams based on the observed outcomes of individual games, and use these networks to infer the importance of teams that determines their rankings. In evaluating these methods on data from recent seasons in the National Football League (NFL) and National Basketball Association (NBA), we find that they can predict the outcomes of games with up to 70% accuracy, and that they provide useful rankings of teams that cluster by league divisions. We also propose some extensions to TeamRank that consider overall team win records and shifts in momentum over time.", "title": "" }, { "docid": "54dc81aca62267eecf1f5f8a8ace14b9", "text": "Advances in deep learning have led to substantial increases in prediction accuracy but have been accompanied by increases in the cost of rendering predictions. We conjecture that for a majority of real-world inputs, the recent advances in deep learning have created models that effectively “over-think” on simple inputs. In this paper we revisit the classic question of building model cascades that primarily leverage class asymmetry to reduce cost. We introduce the “I Don’t Know” (IDK) prediction cascades framework, a general framework to systematically compose a set of pre-trained models to accelerate inference without a loss in prediction accuracy. We propose two search based methods for constructing cascades as well as a new cost-aware objective within this framework. 
The proposed IDK cascade framework can be easily adopted in the existing model serving systems without additional model retraining. We evaluate the proposed techniques on a range of benchmarks to demonstrate the effectiveness of the proposed framework.", "title": "" }, { "docid": "a58cbbff744568ae7abd2873d04d48e9", "text": "Training real-world Deep Neural Networks (DNNs) can take an eon (i.e., weeks or months) without leveraging distributed systems. Even distributed training takes inordinate time, of which a large fraction is spent in communicating weights and gradients over the network. State-of-the-art distributed training algorithms use a hierarchy of worker-aggregator nodes. The aggregators repeatedly receive gradient updates from their allocated group of the workers, and send back the updated weights. This paper sets out to reduce this significant communication cost by embedding data compression accelerators in the Network Interface Cards (NICs). To maximize the benefits of in-network acceleration, the proposed solution, named INCEPTIONN (In-Network Computing to Exchange and Process Training Information Of Neural Networks), uniquely combines hardware and algorithmic innovations by exploiting the following three observations. (1) Gradients are significantly more tolerant to precision loss than weights and as such lend themselves better to aggressive compression without the need for the complex mechanisms to avert any loss. (2) The existing training algorithms only communicate gradients in one leg of the communication, which reduces the opportunities for in-network acceleration of compression. (3) The aggregators can become a bottleneck with compression as they need to compress/decompress multiple streams from their allocated worker group. To this end, we first propose a lightweight and hardware-friendly lossy-compression algorithm for floating-point gradients, which exploits their unique value characteristics. This compression not only enables significantly reducing the gradient communication with practically no loss of accuracy, but also comes with low complexity for direct implementation as a hardware block in the NIC. To maximize the opportunities for compression and avoid the bottleneck at aggregators, we also propose an aggregator-free training algorithm that exchanges gradients in both legs of communication in the group, while the workers collectively perform the aggregation in a distributed manner. Without changing the mathematics of training, this algorithm leverages the associative property of the aggregation operator and enables our in-network accelerators to (1) apply compression for all communications, and (2) prevent the aggregator nodes from becoming bottlenecks. Our experiments demonstrate that INCEPTIONN reduces the communication time by 70.9~80.7% and offers 2.2~3.1x speedup over the conventional training system, while achieving the same level of accuracy.", "title": "" }, { "docid": "acf514a4aa34487121cc853e55ceaed4", "text": "Stereotype threat spillover is a situational predicament in which coping with the stress of stereotype confirmation leaves one in a depleted volitional state and thus less likely to engage in effortful self-control in a variety of domains. 
We examined this phenomenon in 4 studies in which we had participants cope with stereotype and social identity threat and then measured their performance in domains in which stereotypes were not \"in the air.\" In Study 1 we examined whether taking a threatening math test could lead women to respond aggressively. In Study 2 we investigated whether coping with a threatening math test could lead women to indulge themselves with unhealthy food later on and examined the moderation of this effect by personal characteristics that contribute to identity-threat appraisals. In Study 3 we investigated whether vividly remembering an experience of social identity threat results in risky decision making. Finally, in Study 4 we asked whether coping with threat could directly influence attentional control and whether the effect was implemented by inefficient performance monitoring, as assessed by electroencephalography. Our results indicate that stereotype threat can spill over and impact self-control in a diverse array of nonstereotyped domains. These results reveal the potency of stereotype threat and that its negative consequences might extend further than was previously thought.", "title": "" }, { "docid": "3a95b876619ce4b666278810b80cae77", "text": "On 14 November 2016, northeastern South Island of New Zealand was struck by a major moment magnitude (Mw) 7.8 earthquake. Field observations, in conjunction with interferometric synthetic aperture radar, Global Positioning System, and seismology data, reveal this to be one of the most complex earthquakes ever recorded. The rupture propagated northward for more than 170 kilometers along both mapped and unmapped faults before continuing offshore at the island’s northeastern extent. Geodetic and field observations reveal surface ruptures along at least 12 major faults, including possible slip along the southern Hikurangi subduction interface; extensive uplift along much of the coastline; and widespread anelastic deformation, including the ~8-meter uplift of a fault-bounded block. This complex earthquake defies many conventional assumptions about the degree to which earthquake ruptures are controlled by fault segmentation and should motivate reevaluation of these issues in seismic hazard models.", "title": "" }, { "docid": "64c44342abbce474e21df67c0a5cc646", "text": "In this paper it is shown that the principal eigenvector is a necessary representation of the priorities derived from a positive reciprocal pairwise comparison judgment matrix A = (a_ij) when A is a small perturbation of a consistent matrix. When providing numerical judgments, an individual attempts to estimate sequentially an underlying ratio scale and its equivalent consistent matrix of ratios. Near consistent matrices are essential because when dealing with intangibles, human judgment is of necessity inconsistent, and if with new information one is able to improve inconsistency to near consistency, then that could improve the validity of the priorities of a decision. In addition, judgment is much more sensitive and responsive to large rather than to small perturbations, and hence once near consistency is attained, it becomes uncertain which coefficients should be perturbed by small amounts to transform a near consistent matrix to a consistent one. If such perturbations were forced, they could be arbitrary and thus distort the validity of the derived priority vector in representing the underlying decision. 2002 Elsevier Science B.V.
All rights reserved.", "title": "" }, { "docid": "6e4798c01a0a241d1f3746cd98ba9421", "text": "BACKGROUND\nLarge blood-based prospective studies can provide reliable assessment of the complex interplay of lifestyle, environmental and genetic factors as determinants of chronic disease.\n\n\nMETHODS\nThe baseline survey of the China Kadoorie Biobank took place during 2004-08 in 10 geographically defined regions, with collection of questionnaire data, physical measurements and blood samples. Subsequently, a re-survey of 25,000 randomly selected participants was done (80% responded) using the same methods as in the baseline. All participants are being followed for cause-specific mortality and morbidity, and for any hospital admission through linkages with registries and health insurance (HI) databases.\n\n\nRESULTS\nOverall, 512,891 adults aged 30-79 years were recruited, including 41% men, 56% from rural areas and mean age was 52 years. The prevalence of ever-regular smoking was 74% in men and 3% in women. The mean blood pressure was 132/79 mmHg in men and 130/77 mmHg in women. The mean body mass index (BMI) was 23.4 kg/m(2) in men and 23.8 kg/m(2) in women, with only 4% being obese (>30 kg/m(2)), and 3.2% being diabetic. Blood collection was successful in 99.98% and the mean delay from sample collection to processing was 10.6 h. For each of the main baseline variables, there is good reproducibility but large heterogeneity by age, sex and study area. By 1 January 2011, over 10,000 deaths had been recorded, with 91% of surviving participants already linked to HI databases.\n\n\nCONCLUSION\nThis established large biobank will be a rich and powerful resource for investigating genetic and non-genetic causes of many common chronic diseases in the Chinese population.", "title": "" }, { "docid": "49387b129347f7255bf77ad9cc726275", "text": "Words in natural language follow a Zipfian distribution whereby some words are frequent but most are rare. Learning representations for words in the “long tail” of this distribution requires enormous amounts of data. Representations of rare words trained directly on end-tasks are usually poor, requiring us to pre-train embeddings on external data, or treat all rare words as out-of-vocabulary words with a unique representation. We provide a method for predicting embeddings of rare words on the fly from small amounts of auxiliary data with a network trained against the end task. We show that this improves results against baselines where embeddings are trained on the end task in a reading comprehension task, a recognizing textual entailment task, and in language modelling.", "title": "" }, { "docid": "ed351364658a99d4d9c10dd2b9be3c92", "text": "Information technology continues to provide opportunities to alter the decisionmaking behavior of individuals, groups and organizations. Two related changes that are emerging are social media and Web 2.0 technologies. These technologies can positively and negatively impact the rationality and effectiveness of decision-making. For example, changes that help marketing managers alter consumer decision behavior may result in poorer decisions by consumers. Also, managers who heavily rely on a social network rather than expert opinion and facts may make biased decisions. A number of theories can help explain how social media may impact decision-making and the consequences.", "title": "" }, { "docid": "0b705fc98638cf042e84417849259074", "text": "G et al. [Gallego, G., G. Iyengar, R. Phillips, A. Dubey. 2004. 
Managing flexible products on a network. CORC Technical Report TR-2004-01, Department of Industrial Engineering and Operations Research, Columbia University, New York.] recently proposed a choice-based deterministic linear programming model (CDLP) for network revenue management (RM) that parallels the widely used deterministic linear programming (DLP) model. While they focused on analyzing “flexible products”—a situation in which the provider has the flexibility of using a collection of products (e.g., different flight times and/or itineraries) to serve the same market demand (e.g., an origin-destination connection)—their approach has broader implications for understanding choice-based RM on a network. In this paper, we explore the implications in detail. Specifically, we characterize optimal offer sets (sets of available network products) by extending to the network case a notion of “efficiency” developed by Talluri and van Ryzin [Talluri, K. T., G. J. van Ryzin. 2004. Revenue management under a general discrete choice model of consumer behavior. Management Sci. 50 15–33.] for the single-leg, choice-based RM problem. We show that, asymptotically, as demand and capacity are scaled up, only these efficient sets are used in an optimal policy. This analysis suggests that efficiency is a potentially useful approach for identifying “good” offer sets on networks, as it is in the case of single-leg problems. Second, we propose a practical decomposition heuristic for converting the static CDLP solution into a dynamic control policy. The heuristic is quite similar to the familiar displacement-adjusted virtual nesting (DAVN) approximation used in traditional network RM, and it significantly improves on the performance of the static LP solution. We illustrate the heuristic on several numerical examples.", "title": "" }, { "docid": "7feda29a5edf6855895f91f80c3286a4", "text": "The ability to conduct logical reasoning is a fundamental aspect of intelligent behavior, and thus an important problem along the way to human-level artificial intelligence. Traditionally, symbolic logic-based methods from the field of knowledge representation and reasoning have been used to equip agents with capabilities that resemble human logical reasoning qualities. More recently, however, there has been an increasing interest in using machine learning rather than symbolic logic-based formalisms to tackle these tasks. In this paper, we employ state-of-the-art methods for training deep neural networks to devise a novel model that is able to learn how to effectively perform logical reasoning in the form of basic ontology reasoning. This is an important and at the same time very natural logical reasoning task, which is why the presented approach is applicable to a plethora of important real-world problems. We present the outcomes of several experiments, which show that our model learned to perform precise ontology reasoning on diverse and challenging tasks. Furthermore, it turned out that the suggested approach suffers much less from different obstacles that prohibit logic-based symbolic reasoning, and, at the same time, is surprisingly plausible from a biological point of view.", "title": "" }, { "docid": "9409922d01a00695745939b47e6446a0", "text": "The Suricata intrusion-detection system for computer-network monitoring has been advanced as an open-source improvement on the popular Snort system that has been available for over a decade. Suricata includes multi-threading to improve processing speed beyond Snort. 
Previous work comparing the two products has not used a real-world setting. We did this and evaluated the speed, memory requirements, and accuracy of the detection engines in three kinds of experiments: (1) on the full traffic of our school as observed on its \"backbone\" in real time, (2) on a supercomputer with packets recorded from the backbone, and (3) in response to malicious packets sent by a red-teaming product. We used the same set of rules for both products with a few small exceptions where capabilities were missing. We conclude that Suricata can handle larger volumes of traffic than Snort with similar accuracy, and that its performance scaled roughly linearly with the number of processors up to 48. We observed no significant speed or accuracy advantage of Suricata over Snort in its current state, but it is still being developed. Our methodology should be useful for comparing other intrusion-detection products.", "title": "" }, { "docid": "7065db83dbe470f430789ea8e464bd04", "text": "A compact multiband antenna is proposed that consists of a printed circular disc monopole antenna with an L-shaped slot cut out of the ground, forming a defected ground plane. Analysis of the current distribution on the antenna reveals that at low frequencies the addition of the slot creates two orthogonal current paths, which are responsible for two additional resonances in the response of the antenna. By virtue of the orthogonality of these modes the antenna exhibits orthogonal pattern diversity, while enabling the adjacent resonances to be merged, forming a wideband low-frequency response and maintaining the inherent wideband high-frequency response of the monopole. The antenna exhibits a measured -10 dB S11 bandwidth of 600 MHz from 2.68 to 3.28 GHz, and a bandwidth of 4.84 GHz from 4.74 to 9.58 GHz, while the total size of the antenna is only 24 × 28.3 mm. The efficiency is measured using a modified Wheeler cap method and is verified using the gain comparison method to be approximately 90% at both 2.7 and 5.5 GHz.", "title": "" } ]
scidocsrr
29e4dfe1f2a849a12927791da1ee8090
Unsupervised P2P Rental Recommendations via Integer Programming
[ { "docid": "df0ffd3067abe08a61855f450519086c", "text": "Traditional recommendation algorithms often select products with the highest predicted ratings to recommend. However, earlier research in economics and marketing indicates that a consumer usually makes purchase decision(s) based on the product's marginal net utility (i.e., the marginal utility minus the product price). Utility is defined as the satisfaction or pleasure user u gets when purchasing the corresponding product. A rational consumer chooses the product to purchase in order to maximize the total net utility. In contrast to the predicted rating, the marginal utility of a product depends on the user's purchase history and changes over time. According to the Law of Diminishing Marginal Utility, many products have the decreasing marginal utility with the increase of purchase count, such as cell phones, computers, and so on. Users are not likely to purchase the same or similar product again in a short time if they already purchased it before. On the other hand, some products, such as pet food, baby diapers, would be purchased again and again.\n To better match users' purchase decisions in the real world, this paper explores how to recommend products with the highest marginal net utility in e-commerce sites. Inspired by the Cobb-Douglas utility function in consumer behavior theory, we propose a novel utility-based recommendation framework. The framework can be utilized to revamp a family of existing recommendation algorithms. To demonstrate the idea, we use Singular Value Decomposition (SVD) as an example and revamp it with the framework. We evaluate the proposed algorithm on an e-commerce (shop.com) data set. The new algorithm significantly improves the base algorithm, largely due to its ability to recommend both products that are new to the user and products that the user is likely to re-purchase.", "title": "" }, { "docid": "f0f47ce0fc361740aedf17d6d2061e03", "text": "In supervised learning scenarios, feature selection has be en studied widely in the literature. Selecting features in unsupervis ed learning scenarios is a much harder problem, due to the absence of class la bel that would guide the search for relevant information. And, almos t all of previous unsupervised feature selection methods are “wrapper ” techniques that require a learning algorithm to evaluate the candidate fe ture subsets. In this paper, we propose a “filter” method for feature select ion which is independent of any learning algorithm. Our method can be per formed in either supervised or unsupervised fashion. The proposed me thod is based on the observation that, in many real world classification pr oblems, data from the same class are often close to each other. The importa nce of a feature is evaluated by its power of locality preserving, or , Laplacian Score. We compare our method with data variance (unsupervised) an d Fisher score (supervised) on two data sets. Experimental re sults demonstrate the effectiveness and efficiency of our algorithm.", "title": "" }, { "docid": "c36dac0c410570e84bf8634b32a0cac3", "text": "The design of strategies for branching in Mixed Integer Programming (MIP) is guided by cycles of parameter tuning and offline experimentation on an extremely heterogeneous testbed, using the average performance. Once devised, these strategies (and their parameter settings) are essentially input-agnostic. To address these issues, we propose a machine learning (ML) framework for variable branching in MIP. 
Our method observes the decisions made by Strong Branching (SB), a time-consuming strategy that produces small search trees, collecting features that characterize the candidate branching variables at each node of the tree. Based on the collected data, we learn an easy-to-evaluate surrogate function that mimics the SB strategy, by means of solving a learning-to-rank problem, common in ML. The learned ranking function is then used for branching. The learning is instance-specific, and is performed on-the-fly while executing a branch-and-bound search to solve the instance. Experiments on benchmark instances indicate that our method produces significantly smaller search trees than existing heuristics, and is competitive with a state-of-the-art commercial solver.", "title": "" }, { "docid": "a5a7e3fe9d6eaf8fc25e7fd91b74219e", "text": "We present in this paper a new approach that uses supervised machine learning techniques to improve the performances of optimization algorithms in the context of mixed-integer programming (MIP). We focus on the branch-and-bound (B&B) algorithm, which is the traditional algorithm used to solve MIP problems. In B&B, variable branching is the key component that most conditions the efficiency of the optimization. Good branching strategies exist but are computationally expensive and usually hinder the optimization rather than improving it. Our approach consists in imitating the decisions taken by a supposedly good branching strategy, strong branching in our case, with a fast approximation. To this end, we develop a set of features describing the state of the ongoing optimization and show how supervised machine learning can be used to approximate the desired branching strategy. The approximated function is created by a supervised machine learning algorithm from a set of observed branching decisions taken by the target strategy. The experiments performed on randomly generated and standard benchmark (MIPLIB) problems show promising results.", "title": "" } ]
[ { "docid": "61185af23da5d0138eef58ab62cd0e72", "text": "BACKGROUND\nEarlobe tears and disfigurement often result from prolonged pierced earring use and trauma. They are a common cosmetic complaint for which surgical reconstruction has often been advocated.\n\n\nMATERIALS AND METHODS\nA series of 10 patients with earlobe tears or disfigurement treated using straight-line closure, carbon dioxide (CO2 ) laser ablation, or both are described. A succinct literature review of torn earlobe repair is provided.\n\n\nRESULTS\nSuccessful repair with excellent cosmesis of torn and disfigured earlobes was obtained after straight-line surgical closure, CO2 laser ablation, or both.\n\n\nCONCLUSION\nA minimally invasive earlobe repair technique that involves concomitant surgical closure and CO2 laser skin vaporization produces excellent cosmetic results for torn or disfigured earlobes.", "title": "" }, { "docid": "69102c54448921bfbc63c007cc927b8d", "text": "Multi-goal reinforcement learning (MGRL) addresses tasks where the desired goal state can change for every trial. State-of-the-art algorithms model these problems such that the reward formulation depends on the goals, to associate them with high reward. This dependence introduces additional goal reward resampling steps in algorithms like Hindsight Experience Replay (HER) that reuse trials in which the agent fails to reach the goal by recomputing rewards as if reached states were psuedo-desired goals. We propose a reformulation of goal-conditioned value functions for MGRL that yields a similar algorithm, while removing the dependence of reward functions on the goal. Our formulation thus obviates the requirement of reward-recomputation that is needed by HER and its extensions. We also extend a closely related algorithm, Floyd-Warshall Reinforcement Learning, from tabular domains to deep neural networks for use as a baseline. Our results are competitive with HER while substantially improving sampling efficiency in terms of reward computation.", "title": "" }, { "docid": "3cfc860fde33aa93840358a6764a73a2", "text": "Renal cysts are commonly encountered in clinical practice. Although most cysts found on routine imaging studies are benign, there must be an index of suspicion to exclude a neoplastic process or the presence of a multicystic disorder. This article focuses on the more common adult cystic diseases, including simple and complex renal cysts, autosomal-dominant polycystic kidney disease, and acquired cystic kidney disease.", "title": "" }, { "docid": "a81004b3fc39a66d93811841c6d42ff0", "text": "Failing to properly isolate components in the same address space has resulted in a substantial amount of vulnerabilities. Enforcing the least privilege principle for memory accesses can selectively isolate software components to restrict attack surface and prevent unintended cross-component memory corruption. However, the boundaries and interactions between software components are hard to reason about and existing approaches have failed to stop attackers from exploiting vulnerabilities caused by poor isolation. We present the secure memory views (SMV) model: a practical and efficient model for secure and selective memory isolation in monolithic multithreaded applications. SMV is a third generation privilege separation technique that offers explicit access control of memory and allows concurrent threads within the same process to partially share or fully isolate their memory space in a controlled and parallel manner following application requirements. 
An evaluation of our prototype in the Linux kernel (TCB < 1,800 LOC) shows negligible runtime performance overhead in real-world applications including Cherokee web server (< 0.69%), Apache httpd web server (< 0.93%), and Mozilla Firefox web browser (< 1.89%) with at most 12 LOC changes.", "title": "" }, { "docid": "3bd62709eb49e1513daadec561eb9831", "text": "This paper proposes a current-fed LLC resonant converter that is able to achieve high efficiency over a wide input voltage range. It is derived by integrating a two-phase interleaved boost circuit and a full-bridge LLC circuit together by virtue of sharing the same full-bridge switching unit. Compared with conventional full-bridge LLC converter, the gain characteristic is improved in terms of both gain range and optimal operation area, fixed-frequency pulsewidth-modulated (PWM) control is employed to achieve output voltage regulation, and the input current ripple is minimized as well. The voltage across the turned-off primary-side switch can be always clamped by the bus voltage, reducing the switch voltage stress. Besides, its other distinct features, such as single-stage configuration, and soft switching for all switches also contribute to high power conversion efficiency. The operation principles are presented, and then the main characteristics regarding gain, input current ripple, and zero-voltage switching (ZVS) considering the nonlinear output capacitance of MOSFET are investigated and compared with conventional solutions. Also, the design procedure for some key parameters is presented, and two kinds of interleaved boost integrated resonant converter topologies are generalized. Finally, experimental results of a converter prototype with 120-240 V input and 24 V/25 A output verify all considerations.", "title": "" }, { "docid": "daa7db6183c0ca7b90834dba7467c647", "text": "Accurate prediction of rainfall distribution in landfalling tropical cyclones (LTCs) is very important to disaster prevention but quite challenging to operational forecasters. This chapter will describe the rainfall distribution in LTCs, including both axisymmetric and asymmetric distributions and their major controlling parameters, such as environmental vertical wind shear, TC intensity and motion, and coastline. In addition to the composite results from many LTC cases, several case studies are also given to illustrate the predominant factors that are key to the asymmetric rainfall distribution in LTCs. Future directions in this area and potential ways to improve the operational forecasts of rainfall distribution in LTCs are also discussed briefly.", "title": "" }, { "docid": "213ff71ab1c6ac7915f6fb365100c1f5", "text": "Action anticipation and forecasting in videos do not require a hat-trick, as far as there are signs in the context to foresee how actions are going to be deployed. Capturing these signs is hard because the context includes the past. We propose an end-to-end network for action anticipation and forecasting with memory, to both anticipate the current action and foresee the next one. Experiments on action sequence datasets show excellent results indicating that training on histories with a dynamic memory can significantly improve forecasting performance.", "title": "" }, { "docid": "9489ca5b460842d5a8a65504965f0bd5", "text": "This article, based on a tutorial the author presented at ITC 2008, is an overview and introduction to mixed-signal production test. 
The article focuses on the fundamental techniques and procedures in production test and explores key issues confronting the industry.", "title": "" }, { "docid": "9d3c3a3fa17f47da408be1e24d2121cc", "text": "In this letter, compact substrate integrated waveguide (SIW) power dividers are presented. Both equal and unequal power divisions are considered. A quarter-wavelength long wedge shape SIW structure is used for the power division. Direct coaxial feed is used for the input port and SIW-tomicrostrip transitions are used for the output ports. Four-way equal, unequal and an eight-way equal division power dividers are presented. The four-way and the eight-way power dividers provide -10 dB input matching bandwidth of 39.3% and 13%, respectively, at the design frequency f0 = 2.4 GHz. The main advantage of the power dividers is their compact sizes. Including the microstrip to SIW transitions, size is reduced by at least 46% compared to other reported miniaturized SIW power dividers.", "title": "" }, { "docid": "66f76354b6470a49f18300f67e47abd0", "text": "Technologies in museums often support learning goals, providing information about exhibits. However, museum visitors also desire meaningful experiences and enjoy the social aspects of museum-going, values ignored by most museum technologies. We present ArtLinks, a visualization with three goals: helping visitors make connections to exhibits and other visitors by highlighting those visitors who share their thoughts; encouraging visitors' reflection on the social and liminal aspects of museum-going and their expectations of technology in museums; and doing this with transparency, aligning aesthetically pleasing elements of the design with the goals of connection and reflection. Deploying ArtLinks revealed that people have strong expectations of technology as an information appliance. Despite these expectations, people valued connections to other people, both for their own sake and as a way to support meaningful experience. We also found several of our design choices in the name of transparency led to unforeseen tradeoffs between the social and the liminal.", "title": "" }, { "docid": "d33b2e5883b14ac771cf128d309eddbf", "text": "Automated lip reading is the process of converting movements of the lips, face and tongue to speech in real time with enhanced accuracy. Although performance of lip reading systems is still not remotely similar to audio speech recognition, recent developments in processor technology and the massive explosion and ubiquity of computing devices accompanied with increased research in this field has reduced the ambiguities of the labial language, making it possible for free speech-to-text conversion. This paper surveys the field of lip reading and provides a detailed discussion of the trade-offs between various approaches. It gives a reverse chronological topic wise listing of the developments in lip reading systems in recent years. With advancement in computer vision and pattern recognition tools, the efficacy of real time, effective conversion has increased. The major goal of this paper is to provide a comprehensive reference source for the researchers involved in lip reading, not just for the esoteric academia but all the people interested in this field regardless of particular application areas.", "title": "" }, { "docid": "79574c304675e0ec1a2282027c9fc7c6", "text": "The metaphoric mapping theory suggests that abstract concepts, like time, are represented in terms of concrete dimensions such as space. 
This theory receives support from several lines of research ranging from psychophysics to linguistics and cultural studies; especially strong support comes from recent response time studies. These studies have reported congruency effects between the dimensions of time and space indicating that time evokes spatial representations that may facilitate or impede responses to words with a temporal connotation. The present paper reports the results of three linguistic experiments that examined this congruency effect when participants processed past- and future-related sentences. Response time was shorter when past-related sentences required a left-hand response and future-related sentences a right-hand response than when this mapping of time onto response hand was reversed (Experiment 1). This result suggests that participants can form time-space associations during the processing of sentences and thus this result is consistent with the view that time is mentally represented from left to right. The activation of these time-space associations, however, appears to be non-automatic as shown by the results of Experiments 2 and 3 when participants were asked to perform a non-temporal meaning discrimination task.", "title": "" }, { "docid": "a83bde310a2311fc8e045486a7961657", "text": "Radio frequency identification (RFID) of objects or people has become very popular in many services in industry, distribution logistics, manufacturing companies and goods flow systems. When RFID frequency rises into the microwave region, the tag antenna must be carefully designed to match the free space and to the following ASIC. In this paper, we present a novel folded dipole antenna with a very simple configuration. The required input impedance can be achieved easily by choosing suitable geometry parameters.", "title": "" }, { "docid": "ba2769abc859882f600e64cb14af2ac6", "text": "OBJECTIVE\nThis study measures and compares the outcome of conservative physical therapy with traction, by using magnetic resonance imaging and clinical parameters in patients presenting with low back pain caused by lumbar disc herniation.\n\n\nMETHODS\nA total of 26 patients with LDH (14F, 12M with mean aged 37 +/- 11) were enrolled in this study and 15 sessions (per day on 3 weeks) of physical therapy were applied. That included hot pack, ultrasound, electrotherapy and lumbar traction. Physical examination of the lumbar spine, severity of pain, sleeping order, patient and physician global assessment with visual analogue scale, functional disability by HAQ, Roland Disability Questionnaire, and Modified Oswestry Disability Questionnaire were assessed at baseline and at 4-6 weeks after treatment. Magnetic resonance imaging examinations were carried out before and 4-6 weeks after the treatment\n\n\nRESULTS\nAll patients completed the therapy session. There were significant reductions in pain, sleeping disturbances, patient and physician global assessment and disability scores, and significant increases in lumbar movements between baseline and follow-up periods. There were significant reductions of size of the herniated mass in five patients, and significant increase in 3 patients on magnetic resonance imaging after treatment, but no differences in other patients.\n\n\nCONCLUSIONS\nThis study showed that conventional physical therapies with lumbar traction were effective in the treatment of patient with subacute LDH. These results suggest that clinical improvement is not correlated with the finding of MRI. 
Patients with LDH should be monitored clinically (Fig. 3, Ref. 18).", "title": "" }, { "docid": "7eb150a364984512de830025a6e93e0c", "text": "The mobile ecosystem is characterized by a large and complex network of companies interacting with each other, directly and indirectly, to provide a broad array of mobile products and services to end-customers. With the convergence of enabling technologies, the complexity of the mobile ecosystem is increasing multifold as new actors are emerging, new relations are formed, and the traditional distribution of power is shifted. Drawing on theories of complex systems, interfirm relationships, and the creative art and science of network visualization, this paper identifies key catalysts and develops a method to effectively map the complex structure and dynamics of over 7,000 global companies and 18,000 relationships in the mobile ecosystem. Our visual approach enables decision makers to explore the complexity of interfirm relations in the mobile ecosystem, understand their firm's competitive position in a network context, and identify patterns that may influence their choice of innovation strategy or business models.", "title": "" }, { "docid": "f10e086ca3791ece660ae2f0f4877916", "text": "The routine use of four-chamber screening of the fetal heart was pioneered in the early 1980s and has been shown to detect reliably mainly univentricular hearts in the fetus. Many conotruncal anomalies and ductal-dependent lesions may, however, not be detected with the four-chamber view alone and additional planes are needed. The three-vessel and tracheal (3VT) view is a transverse plane in the upper mediastinum demonstrating simultaneously the course and the connection of both the aortic and ductal arches, their relationship to the trachea and the visualization of the superior vena cava. The purpose of the article is to review the two-dimensional anatomy of this plane and the contribution of colour Doppler and to present a checklist to be achieved on screening ultrasound. Typical suspicions include the detection of abnormal vessel number, abnormal vessel size, abnormal course and alignment and abnormal colour Doppler pattern. Anomalies such as pulmonary and aortic stenosis and atresia, aortic coarctation, interrupted arch, tetralogy of Fallot, common arterial trunk, transposition of the great arteries, right aortic arch, double aortic arch, aberrant right subclavian artery, left superior vena cava are some of the anomalies showing an abnormal 3VT image. Recent studies on the comprehensive evaluation of the 3VT view and adjacent planes have shown the potential of visualizing the thymus and the left brachiocephalic vein during fetal echocardiography and in detecting additional rare conditions. National and international societies are increasingly recommending the use of this plane during routine ultrasound in order to improve prenatal detection rates of critical cardiac defects.", "title": "" }, { "docid": "9fd5e182851ff0be67e8865c336a1f77", "text": "Following the developments of wireless and mobile communication technologies, mobile-commerce (M-commerce) has become more and more popular. However, most of the existing M-commerce protocols do not consider the user anonymity during transactions. This means that it is possible to trace the identity of a payer from a M-commerce transaction. Luo et al. in 2014 proposed an NFC-based anonymous mobile payment protocol.
It used an NFC-enabled smartphone and combined a built-in secure element (SE) as a trusted execution environment to build an anonymous mobile payment service. But their scheme has several problems and cannot be functional in practice. In this paper, we introduce a new NFC-based anonymous mobile payment protocol. Our scheme has the following features:(1) Anonymity. It prevents the disclosure of user's identity by using virtual identities instead of real identity during the transmission. (2) Efficiency. Confidentiality is achieved by symmetric key cryptography instead of public key cryptography so as to increase the performance. (3) Convenience. The protocol is based on NFC and is EMV compatible. (4) Security. All the transactions are either encrypted or signed by the sender so the confidentiality and authenticity are preserved.", "title": "" }, { "docid": "84f7b499cd608de1ee7443fcd7194f19", "text": "In this paper, we present a new computationally efficient numerical scheme for the minimizing flow approach for optimal mass transport (OMT) with applications to non-rigid 3D image registration. The approach utilizes all of the gray-scale data in both images, and the optimal mapping from image A to image B is the inverse of the optimal mapping from B to A. Further, no landmarks need to be specified, and the minimizer of the distance functional involved is unique. Our implementation also employs multigrid, and parallel methodologies on a consumer graphics processing unit (GPU) for fast computation. Although computing the optimal map has been shown to be computationally expensive in the past, we show that our approach is orders of magnitude faster than previous work and is capable of finding transport maps with optimality measures (mean curl) previously unattainable by other works (which directly influences the accuracy of registration). We give results where the algorithm was used to compute non-rigid registrations of 3D synthetic data as well as intra-patient pre-operative and post-operative 3D brain MRI datasets.", "title": "" }, { "docid": "fa7916c0afe0b18956f19b4fc8006971", "text": "INTRODUCTION\nPrevious studies demonstrated that multiple treatments using focused ultrasound can be effective as a non-invasive method for reducing unwanted localized fat deposits. The objective of the study is to investigate the safety and efficacy of this focused ultrasound device in body contouring in Asians.\n\n\nMETHOD\nFifty-three (51 females and 2 males) patients were enrolled into the study. Subjects had up to three treatment sessions with approximately 1-month interval in between treatment. Efficacy was assessed by changes in abdominal circumference, ultrasound fat thickness, and caliper fat thickness. Weight change was monitored to distinguish weight loss induced changes in these measurements. Patient questionnaire was completed after each treatment. The level of pain or discomfort, improvement in body contour and overall satisfaction were graded with a score of 1-5 (1 being the least). Any adverse effects such as erythema, pain during treatment or blistering were recorded.\n\n\nRESULT\nThe overall satisfaction amongst subjects was poor. Objective measurements by ultrasound, abdominal circumference, and caliper did not show significant differences after treatment.
There is a negative correlation between the abdominal fat thickness and the number of shots per treatment session.\n\n\nCONCLUSION\nFocused ultrasound is not effective for non-invasive body contouring among Southern Asians as compared with Caucasians. This observation is likely due to their smaller body figures. Design modifications can overcome this problem and, in doing so, improve clinical outcome.", "title": "" } ]
scidocsrr
71ab0493c8a0dc97c8ae31eac2d7c7f5
High-level synthesis of dynamic data structures: A case study using Vivado HLS
[ { "docid": "ed9e22167d3e9e695f67e208b891b698", "text": "ÐIn k-means clustering, we are given a set of n data points in d-dimensional space R and an integer k and the problem is to determine a set of k points in R, called centers, so as to minimize the mean squared distance from each data point to its nearest center. A popular heuristic for k-means clustering is Lloyd's algorithm. In this paper, we present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which we call the filtering algorithm. This algorithm is easy to implement, requiring a kd-tree as the only major data structure. We establish the practical efficiency of the filtering algorithm in two ways. First, we present a data-sensitive analysis of the algorithm's running time, which shows that the algorithm runs faster as the separation between clusters increases. Second, we present a number of empirical studies both on synthetically generated data and on real data sets from applications in color quantization, data compression, and image segmentation. Index TermsÐPattern recognition, machine learning, data mining, k-means clustering, nearest-neighbor searching, k-d tree, computational geometry, knowledge discovery.", "title": "" }, { "docid": "cd1cfbdae08907e27a4e1c51e0508839", "text": "High-level synthesis (HLS) is an increasingly popular approach in electronic design automation (EDA) that raises the abstraction level for designing digital circuits. With the increasing complexity of embedded systems, these tools are particularly relevant in embedded systems design. In this paper, we present our evaluation of a broad selection of recent HLS tools in terms of capabilities, usability and quality of results. Even though HLS tools are still lacking some maturity, they are constantly improving and the industry is now starting to adopt them into their design flows.", "title": "" } ]
[ { "docid": "8ba192226a3c3a4f52ca36587396e85c", "text": "For many years I have been engaged in psychotherapy with individuals in distress. In recent years I have found myself increasingly concerned with the process of abstracting from that experience the general principles which appear to be involved in it. I have endeavored to discover any orderliness, any unity which seems to inhere in the subtle, complex tissue of interpersonal relationship in which I have so constantly been immersed in therapeutic work. One of the current products of this concern is an attempt to state, in formal terms, a theory of psychotherapy, of personality, and of interpersonal relationships which will encompass and contain the phenomena of my experience. What I wish to do in this paper is to take one very small segment of that theory, spell it out more completely, and explore its meaning and usefulness.", "title": "" }, { "docid": "44934f07118f7ec619c7e165cdf9d797", "text": "The American Heart Association (AHA) has had a longstanding commitment to provide information about the role of nutrition in cardiovascular disease (CVD) risk reduction. Many activities have been and are currently directed toward this objective, including issuing AHA Dietary Guidelines periodically (most recently in 20001) and Science Advisories and Statements on an ongoing basis to review emerging nutrition-related issues. The objective of the AHA Dietary Guidelines is to promote healthful dietary patterns. A consistent focus since the inception of the AHA Dietary Guidelines has been to reduce saturated fat (and trans fat) and cholesterol intake, as well as to increase dietary fiber consumption. Collectively, all the AHA Dietary Guidelines have supported a dietary pattern that promotes the consumption of diets rich in fruits, vegetables, whole grains, low-fat or nonfat dairy products, fish, legumes, poultry, and lean meats. This dietary pattern has a low energy density to promote weight control and a high nutrient density to meet all nutrient needs. As reviewed in the first AHA Science Advisory2 on antioxidant vitamins, epidemiological and population studies reported that some micronutrients may beneficially affect CVD risk (ie, antioxidant vitamins such as vitamin E, vitamin C, and -carotene). Recent epidemiological evidence3 is consistent with the earlier epidemiological and population studies (reviewed in the first Science Advisory).2 These findings have been supported by in vitro studies that have established a role of oxidative processes in the development of the atherosclerotic plaque. Underlying the atherosclerotic process are proatherogenic and prothrombotic oxidative events in the artery wall that may be inhibited by antioxidants. The 1999 AHA Science Advisory2 recommended that the general population consume a balanced diet with emphasis on antioxidant-rich fruits, vegetables, and whole grains, advice that was consistent with the AHA Dietary Guidelines at the time. In the absence of data from randomized, controlled clinical trials, no recommendations were made with regard to the use of antioxidant supplements. In the past 5 years, a number of controlled clinical studies have reported the effects of antioxidant vitamin and mineral supplements on CVD risk (see Tables 1 through 3).4–21 These studies have been the subject of several recent reviews22–26 and formed the database for the present article. 
In general, the studies presented in the tables differ with regard to subject populations studied, type and dose of antioxidant/cocktail administered, length of study, and study end points. Overall, the studies have been conducted on post–myocardial infarction subjects or subjects at high risk for CVD, although some studied healthy subjects. In addition to dosage differences in vitamin E studies, some trials used the synthetic form, whereas others used the natural form of the vitamin. With regard to the other antioxidants, different doses were administered (eg, for -carotene and vitamin C). The antioxidant cocktail formulations used also varied. Moreover, subjects were followed up for at least 1 year and for as long as 12 years. In addition, a meta-analysis of 15 studies (7 studies of vitamin E, 50 to 800 IU; 8 studies of -carotene, 15 to 50 mg) with 1000 or more subjects per trial has been conducted to ascertain the effects of antioxidant vitamins on cardiovascular morbidity and mortality.27 Collectively, for the most part, clinical trials have failed to demonstrate a beneficial effect of antioxidant supplements on CVD morbidity and mortality. With regard to the meta-analysis, the lack of efficacy was demonstrated consistently for different doses of various antioxidants in diverse population groups. Although the preponderance of clinical trial evidence has not shown beneficial effects of antioxidant supplements, evidence from some smaller studies documents a benefit of -tocopherol (Cambridge Heart AntiOxidant Study,13 Secondary Prevention with Antioxidants of Cardiovascular disease in End-stage renal disease study),15 -tocopherol and slow-release vitamin C (Antioxidant Supplementation in Atherosclerosis Prevention study),16 and vitamin C plus vitamin E (Intravascular Ultrasonography Study)17 on cardio-", "title": "" }, { "docid": "54327e52ad52e1b7a6ead7c1afe4a6d5", "text": "Implementation of smart grid provides an opportunity for concurrent implementation of nonintrusive appliance load monitoring (NIALM), which disaggregates the total household electricity data into data on individual appliances. This paper introduces a new disaggregation algorithm for NIALM based on a modified Viterbi algorithm. This modification takes advantage of the sparsity of transitions between appliances' states to decompose the main algorithm, thus making the algorithm complexity linearly proportional to the number of appliances. By consideration of a series of data and integrating a priori information, such as the frequency of use and time on/time off statistics, the algorithm dramatically improves NIALM accuracy as compared to the accuracy of established NIALM algorithms.", "title": "" }, { "docid": "30f48021bca12899d6f2e012e93ba12d", "text": "There are several locomotion mechanisms in Nature. The study of mechanics of any locomotion is very useful for scientists and researchers. Many locomotion principles from Nature have been adapted in robotics. There are several species which are capable of multimode locomotion such as walking and swimming, and flying etc. Frogs are such species, capable of jumping, walking, and swimming. Multimode locomotion is important for robots to work in unknown environment. Frogs are widely known as good multimode locomotors. Webbed feet help them to swim efficiently in water. This paper presents the study of frog's swimming locomotion and adapting the webbed feet for swimming locomotion of the robots. 
A simple mechanical model of a robotic leg with a webbed foot, which can be used for multi-mode locomotion and a robotic frog, is put forward. All the joints of the legs are designed to be driven by a tendon-pulley arrangement with the actuators mounted on the body, which allows the legs to be lighter and compact.", "title": "" }, { "docid": "b715631367001fb60b4aca9607257923", "text": "This paper describes a new predictive algorithm that can be used for programming large arrays of analog computational memory elements within 0.2% of accuracy for 3.5 decades of currents. The average number of pulses required is 7-8 (20 μs each). This algorithm uses hot-electron injection for accurate programming and Fowler-Nordheim tunneling for global erase. This algorithm has been tested for programming 1024×16 and 96×16 floating-gate arrays in 0.25 μm and 0.5 μm n-well CMOS processes, respectively", "title": "" }, { "docid": "ebe14e601d0b61f10f6674e2d7108d41", "text": "In this letter, the design procedure and electrical performance of a dual band (2.4/5.8GHz) printed dipole antenna using spiral structure are proposed and investigated. For the first time, a dual band printed dipole antenna with spiral configuration is proposed. In addition, a matching method by adjusting the transmission line width, and a new bandwidth broadening method varying the distance between the top and bottom spirals are reported. The operating frequencies of the proposed antenna are 2.4GHz and 5.8GHz which cover WLAN system. The proposed antenna achieves a good matching using tapered transmission lines for the top and bottom spirals. The desired resonant frequencies are obtained by adjusting the number of turns of the spirals. The bandwidth is optimized by varying the distance between the top and bottom spirals. A relative position of the bottom spiral plays an important role in achieving a bandwidth in terms of 10-dB return loss.", "title": "" }, { "docid": "7957742cd5da5a720446ae9af185df65", "text": "Data mining is a process in which statistical methods are used to search for complex patterns in mostly large volumes of data. For organizations to make greater use of it for decision support, it would be helpful if self-service applications enabled domain experts to carry out this kind of analysis on their own, so that they no longer depend on data scientists and IT specialists. This article presents a series of experiments that allows an assessment of how suitable established data mining software platforms (IBM SPSS Modeler, KNIME, RapidMiner and WEKA) are for being made available to casual users. The experiments focus on decision trees, a particularly simple class of algorithms that, according to the literature and our experience, are best suited for use in self-service data mining applications. Using a common data set, decision trees are constructed for identical target variables on the different platforms. The results are relatively similar in terms of classification accuracy, but the complexity of the models varies.
Aktuelle grafische Benutzeroberflächen lassen sich zwar auch ohne tiefgehende Kompetenzen in den Bereichen Informatik und Statistik bedienen, sie ersetzen aber nicht den Bedarf an datenwissenschaftlichen Kompetenzen, die besonders beim Schritt der Datenvorbereitung zum Einsatz kommen, welcher den größten Teil des Data-Mining-Prozesses ausmacht.", "title": "" }, { "docid": "f782af034ef46a15d89637a43ad2849c", "text": "Introduction: Evidence-based treatment of abdominal hernias involves the use of prosthetic mesh. However, the most commonly used method of treatment of diastasis of the recti involves plication with non-absorbable sutures as part of an abdominoplasty procedure. This case report describes single-port laparoscopic repair of diastasis of recti and umbilical hernia with prosthetic mesh after plication with slowly absorbable sutures combined with abdominoplasty. Technique Description: Our patient is a 36-year-old woman with severe diastasis of the recti, umbilical hernia and an excessive amount of redundant skin after two previous pregnancies and caesarean sections. After raising the upper abdominal flap, a single-port was placed in the left upper quadrant and the ligamenturn teres was divided. The diastasis of the recti and umbilical hernia were plicated under direct vision with continuous and interrupted slowly absorbable sutures before an antiadhesive mesh was placed behind the repair with 6 cm overlap, transfixed in 4 quadrants and tacked in place with non-absorbable tacks in a double-crown technique. The left upper quadrant wound was closed with slowly absorbable sutures. The excess skin was removed and fibrin sealant was sprayed in the subcutaneous space to minimize the risk of serorna formation without using drains. Discussion: Combining single-port laparoscopic repair of diastasis of recti and umbilical hemia repair minimizes inadvertent suturing of abdominal contents during plication, the risks of port site hernias associated with conventional multipart repair and permanently reinforced the midline weakness while achieving “scarless” surgery.", "title": "" }, { "docid": "a5cb288b5a2f29c22a9338be416a27f7", "text": "L ^ N C O U R A G I N G CHILDREN'S INTRINSIC MOTIVATION CAN HELP THEM TO ACHIEVE ACADEMIC SUCCESS (ADELMAN, 1978; ADELMAN & TAYLOR, 1986; GOTTFRIED, 1 9 8 3 , 1 9 8 5 ) . TO HELP STUDENTS WITH AND WITHOUT LEARNING DISABILITIES TO DEVELOP ACADEMIC INTRINSIC MOTIVATION, IT IS IMPORTANT TO DEFINE THE FACTORS THAT AFFECT MOTIVATION (ADELMAN & CHANEY, 1 9 8 2 ; ADELMAN & TAYLOR, 1983). T H I S ARTICLE OFFERS EDUCATORS AN INSIGHT INTO THE EFFECTS OF DIFFERENT MOTIVATIONAL ORIENTATIONS ON THE SCHOOL LEARNING OF STUDENTS WITH LEARNING DISABILITIES, AS W E L L AS INTO THE VARIABLES AFFECTING INTRINSIC AND EXTRINSIC MOTIVATION. ALSO INCLUDED ARE RECOMMENDATIONS, BASED ON EMPIRICAL EVIDENCE, FOR ENHANCING ACADEMIC INTRINSIC MOTIVATION IN LEARNERS OF VARYING ABIL IT IES AT A L L GRADE LEVELS. I .NTEREST IN THE VARIOUS ASPECTS OF INTRINSIC and extrinsic motivation has accelerated in recent years. Motivational orientation is considered to be an important factor in determining the academic success of children with and without disabilities (Adelman & Taylor, 1986; Calder & Staw, 1975; Deci, 1975; Deci & Chandler, 1986; Schunk, 1991). Academic intrinsic motivation has been found to be significantly correlated with academic achievement in students with learning disabilities (Gottfried, 1985) and without learning disabilities (Adelman, 1978; Adelman & Taylor, 1983). 
However, children with learning disabilities (LD) are less likely than their nondisabled peers to be intrinsically motivated (Adelman & Chaney, 1982; Adelman & Taylor, 1986; Mastropieri & Scruggs, 1994; Smith, 1994). Students with LD have been found to have more positive attitudes toward school than toward school learning (Wilson & David, 1994). Wilson and David asked 89 students with LD to respond to items on the School Attitude Measures (SAM; Wick, 1990) and on the Children's Academic Intrinsic Motivation Inventory (CAIMI; Gottfried, 1986). The students with LD were found to have a more positive attitude toward the school environment than toward academic tasks. Research has also shown that students with LD may derive their self-perceptions from areas other than school, and do not see themselves as less competent in areas of school learning (Grolnick & Ryan, 1990). Although there is only a limited amount of research available on intrinsic motivation in the population with special needs (Adelman, 1978; Adelman & Taylor, 1986; Grolnick & Ryan, 1990), there is an abundance of research on the general school-age population. This article is an attempt to use existing research to identify variables pertinent to the academic intrinsic motivation of children with learning disabilities. The first part of the article deals with the definitions of intrinsic and extrinsic motivation. The next part identifies some of the factors affecting the motivational orientation and subsequent academic achievement of school-age children. This is followed by empirical evidence of the effects of rewards on intrinsic motivation, and suggestions on enhancing intrinsic motivation in the learner. At the end, several strategies are presented that could be used by the teacher to develop and encourage intrinsic motivation in children with and without LD. DEFINING MOTIVATIONAL ATTRIBUTES Intrinsic Motivation Intrinsic motivation has been defined as (a) participation in an activity purely out of curiosity, that is, from a need to know more about something (Deci, 1975; Gottfried, 1983; Woolfolk, 1990); (b) the desire to engage in an activity purely for the sake of participating in and completing a task (Bates, 1979; Deci, Vallerand, Pelletier, & Ryan, 1991); and (c) the desire to contribute (Mills, 1991). Academic intrinsic motivation has been measured by (a) the ability of the learner to persist with the task assigned (Brophy, 1983; Gottfried, 1983); (b) the amount of time spent by the student on tackling the task (Brophy, 1983; Gottfried, 1983); (c) the innate curiosity to learn (Gottfried, 1983); (d) the feeling of efficacy related to an activity (Gottfried, 1983; Schunk, 1991; Smith, 1994); (e) the desire to select an activity (Brophy, 1983); and (f) a combination of all these variables (Deci, 1975; Deci & Ryan, 1985). A student who is intrinsically motivated will persist with the assigned task, even though it may be difficult (Gottfried, 1983; Schunk, 1990), and will not need any type of reward or incentive to initiate or complete a task (Beck, 1978; Deci, 1975; Woolfolk, 1990). This type of student is more likely to complete the chosen task and be excited by the challenging nature of an activity. The intrinsically motivated student is also more likely to retain the concepts learned and to feel confident about tackling unfamiliar learning situations, like new vocabulary words.
However, the amount of interest generated by the task also plays a role in the motivational orientation of the learner. An assigned task with zero interest value is less likely to motivate the student than is a task that arouses interest and curiosity. Intrinsic motivation is based in the innate, organismic needs for competence and self-determination (Deci & Ryan, 1985; Woolfolk, 1990), as well as the desire to seek and conquer challenges (Adelman & Taylor, 1990). People are likely to be motivated to complete a task on the basis of their level of interest and the nature of the challenge. Research has suggested that children with higher academic intrinsic motivation function more effectively in school (Adelman & Taylor, 1990; Boggiano & Barrett, 1992; Gottfried, 1990; Soto, 1988). Besides innate factors, there are several other variables that can affect intrinsic motivation. Extrinsic Motivation Adults often give the learner an incentive to participate in or to complete an activity. The incentive might be in the form of a tangible reward, such as money or candy. Or, it might be the likelihood of a reward in the future, such as a good grade. Or, it might be a nontangible reward, for example, verbal praise or a pat on the back. The incentive might also be exemption from a less liked activity or avoidance of punishment. These incentives are extrinsic motivators. A person is said to be extrinsically motivated when she or he undertakes a task purely for the sake of attaining a reward or for avoiding some punishment (Adelman & Taylor, 1990; Ball, 1984; Beck, 1978; Deci, 1975; Wiersma, 1992; Woolfolk, 1990). Extrinsic motivation can, especially in learning and other forms of creative work, interfere with intrinsic motivation (Benninga et al., 1991; Butler, 1989; Deci, 1975; McCullers, Fabes, & Moran, 1987). In such cases, it might be better not to offer rewards for participating in or for completing an activity, be it textbook learning or an organized play activity. Not only teachers but also parents have been found to negatively influence the motivational orientation of the child by providing extrinsic consequences contingent upon their school performance (Gottfried, Fleming, & Gottfried, 1994). The relationship between rewards (and other extrinsic factors) and the intrinsic motivation of the learner is outlined in the following sections. MOTIVATION AND THE LEARNER In a classroom, the student is expected to tackle certain types of tasks, usually with very limited choices. Most of the research done on motivation has been done in settings where the learner had a wide choice of activities, or in a free-play setting. In reality, the student has to complete tasks that are compulsory as well as evaluated (Brophy, 1983). Children are expected to complete a certain number of assignments that meet specified criteria. For example, a child may be asked to complete five multiplication problems and is expected to get correct answers to at least three. Teachers need to consider how instructional practices are designed from the motivational perspective (Schunk, 1990). Development of skills required for academic achievement can be influenced by instructional design. If the design undermines student ability and skill level, it can reduce motivation (Brophy, 1983; Schunk, 1990). This is especially applicable to students with disabilities. 
Students with LD have shown a significant increase in academic learning after engaging in interesting tasks like computer games designed to enhance learning (Adelman, Lauber, Nelson, & Smith, 1989). A common aim of educators is to help all students enhance their learning, regardless of the student's ability level. To achieve this outcome, the teacher has to develop a curriculum geared to the individual needs and ability levels of the students, especially the students with special needs. If the assigned task is within the child's ability level as well as inherently interesting, the child is very likely to be intrinsically motivated to tackle the task. The task should also be challenging enough to stimulate the child's desire to attain mastery. The probability of success or failure is often attributed to factors such as ability, effort, difficulty level of the task, and luck (Schunk, 1990). One or more of these attributes might, in turn, affect the motivational orientation of a student. The student who is sure of some level of success is more likely to be motivated to tackle the task than one who is unsure of the outcome (Adelman & Taylor, 1990). A student who is motivated to learn will find school-related tasks meaningful (Brophy, 1983, 1987). Teachers can help students to maximize their achievement by adjusting the instructional design to their individual characteristics and motivational orientation. The personality traits and motivational tendency of learners with mild handicaps can either help them to compensate for their inadequate learning abilities and enhance performanc", "title": "" }, { "docid": "f683ae3ae16041977f0d6644213de112", "text": "Keywords: Wind turbine Fault prognosis Fault detection Pitch system ANFIS Neuro-fuzzy A-priori knowledge Abstract: The fast growing wind industry has shown a need for more sophisticated fault prognosis analysis in the critical and high value components of a wind turbine (WT). Current WT studies focus on improving their reliability and reducing the cost of energy, particularly when WTs are operated offshore. WT Supervisory Control and Data Acquisition (SCADA) systems contain alarms and signals that could provide an early indication of component fault and allow the operator to plan system repair prior to complete failure. Several research programmes have been made for that purpose; however, the resulting cost savings are limited because of the data complexity and relatively low number of failures that can be easily detected in early stages. A new fault prognosis procedure is proposed in this paper using a-priori knowledge-based Adaptive Neuro-Fuzzy Inference System (ANFIS). This has the aim to achieve automated detection of significant pitch faults, which are known to be significant failure modes. With the advantage of a-priori knowledge incorporation, the proposed system has improved ability to interpret the previously unseen conditions and thus fault diagnoses are improved. In order to construct the proposed system, the data of the 6 known WT pitch faults were used to train the system with a-priori knowledge incorporated. 
The effectiveness of the approach was demonstrated using three metrics: (1) the trained system was tested in a new wind farm containing 26 WTs to show its prognosis ability; (2) the first test result was compared to a general alarm approach; (3) a Confusion Matrix analysis was made to demonstrate the accuracy of the proposed approach. The result of this research has demonstrated that the proposed a-priori knowledge-based ANFIS (APK-ANFIS) approach has strong potential for WT pitch fault prognosis. Wind is currently the fastest growing renewable energy source for electrical generation around the world. It is expected that a large number of wind turbines (WTs), especially offshore, will be employed in the near future (EWEA, 2011; Krohn, Morthorst, & Awerbuch, 2009). Following a rapid acceleration of wind energy development in the early 21st century, WT manufacturers are beginning to focus on improving their cost of energy. WT operational performance is critical to the cost of energy. This is because Operation and Maintenance (O&M) costs constitute a significant share of the annual cost of a wind …", "title": "" }, { "docid": "4406b7c9d53b895355fa82b11da21293", "text": "In today's scenario, World Wide Web (WWW) is flooded with huge amount of information. Due to growing popularity of the internet, finding the meaningful information among billions of information resources on the WWW is a challenging task. The information retrieval (IR) provides documents to the end users which satisfy their need of information. Search engine is used to extract valuable information from the internet. Web crawler is the principal part of search engine; it is an automatic script or program which can browse the WWW in automatic manner. This process is known as web crawling. In this paper, review on strategies of information retrieval in web crawling has been presented that are classifying into four categories viz: focused, distributed, incremental and hidden web crawlers. Finally, on the basis of user customized parameters the comparative analysis of various IR strategies has been performed.", "title": "" }, { "docid": "55370f9487be43f2fbd320c903005185", "text": "Exemplar-based texture synthesis is the process of generating, from an input sample, new texture images of arbitrary size and which are perceptually equivalent to the sample. The two main approaches are statisticsbased methods and patch re-arrangement methods. In the first class, a texture is characterized by a statistical signature; then, a random sampling conditioned to this signature produces genuinely different texture images. The second class boils down to a clever “copy-paste” procedure, which stitches together large regions of the sample. Hybrid methods try to combines ideas from both approaches to avoid their hurdles. Current methods, including the recent CNN approaches, are able to produce impressive synthesis on various kinds of textures. Nevertheless, most real textures are organized at multiple scales, with global structures revealed at coarse scales and highly varying details at finer ones. Thus, when confronted with large natural images of textures the results of state-of-the-art methods degrade rapidly.", "title": "" }, { "docid": "89703b730ff63548530bdb9e2ce59c6b", "text": "How to develop creative digital products which really meet the prosumer's needs while promoting a positive user experience? That question has guided this work looking for answers through different disciplinary fields. 
Born on 2002 as an Engineering PhD dissertation, since 2003 the method has been improved by teaching it to Communication and Design graduate and undergraduate courses. It also guided some successful interdisciplinary projects. Its main focus is on developing a creative conceptual model that might meet a human need within its context. The resulting method seeks: (1) solutions for the main problems detected in the previous versions; (2) significant ways to represent Design practices; (3) a set of activities that could be developed by people without programming knowledge. The method and its research current state are presented in this work.", "title": "" }, { "docid": "c804aa80440827033fa787723d23c698", "text": "The present paper analyzes the self-generated explanations (from talk-aloud protocols) that "Good" and "Poor" students produce while studying worked-out examples of mechanics problems, and their subsequent reliance on examples during problem solving. We find that "Good" students learn with understanding: They generate many explanations which refine and expand the conditions for the action parts of the example solutions, and relate these actions to principles in the text. These self-explanations are guided by accurate monitoring of their own understanding and misunderstanding. Such learning results in example-independent knowledge and in a better understanding of the principles presented in the text. "Poor" students do not generate sufficient self-explanations, monitor their learning inaccurately, and subsequently rely heavily on examples. We then discuss the role of self-explanations in facilitating problem solving, as well as the adequacy of current AI models of explanation-based learning to account for these psychological findings.", "title": "" }, { "docid": "cf020ec1d5fbaa42d4699b16d27434d0", "text": "Direct methods for restoration of images blurred by motion are analyzed and compared. The term direct means that the considered methods are performed in a one-step fashion without any iterative technique. The blurring point-spread function is assumed to be unknown, and therefore the image restoration process is called blind deconvolution. What is believed to be a new direct method, here called the whitening method, was recently developed. This method and other existing direct methods such as the homomorphic and the cepstral techniques are studied and compared for a variety of motion types. Various criteria such as quality of restoration, sensitivity to noise, and computation requirements are considered. It appears that the recently developed method shows some improvements over other older methods. The research presented here clarifies the differences among the direct methods and offers an experimental basis for choosing which blind deconvolution method to use. In addition, some improvements on the methods are suggested.", "title": "" }, { "docid": "b4714cacd13600659e8a94c2b8271697", "text": "AIM AND OBJECTIVE\nExamine the pharmaceutical qualities of cannabis including a historical overview of cannabis use. Discuss the use of cannabis as a clinical intervention for people experiencing palliative care, including those with life-threatening chronic illness such as multiple sclerosis and motor neurone disease [amyotrophic lateral sclerosis] in the UK.\n\n\nBACKGROUND\nThe non-medicinal use of cannabis has been well documented in the media. There is a growing scientific literature on the benefits of cannabis in symptom management in cancer care. 
Service users, nurses and carers need to be aware of the implications for care and treatment if cannabis is being used medicinally.\n\n\nDESIGN\nA comprehensive literature review.\n\n\nMETHOD\nLiterature searches were made of databases from 1996 using the term cannabis and the combination terms of cannabis and palliative care; symptom management; cancer; oncology; chronic illness; motor neurone disease/amyotrophic lateral sclerosis; and multiple sclerosis. Internet material provided for service users searching for information about the medicinal use of cannabis was also examined.\n\n\nRESULTS\nThe literature on the use of cannabis in health care repeatedly refers to changes for users that may be equated with improvement in quality of life as an outcome of its use. This has led to increased use of cannabis by these service users. However, the cannabis used is usually obtained illegally and can have consequences for those who choose to use it for its therapeutic value and for nurses who are providing care.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nQuestions and dilemmas are raised concerning the role of the nurse when caring and supporting a person making therapeutic use of cannabis.", "title": "" }, { "docid": "0bc7de3f7ac06aa080ec590bdaf4c3b3", "text": "This paper demonstrates that US prestige-press coverage of global warming from 1988 to 2002 has contributed to a significant divergence of popular discourse from scientific discourse. This failed discursive translation results from an accumulation of tactical media responses and practices guided by widely accepted journalistic norms. Through content analysis of US prestige press— meaning the New York Times, the Washington Post, the Los Angeles Times, and the Wall Street Journal—this paper focuses on the norm of balanced reporting, and shows that the prestige press’s adherence to balance actually leads to biased coverage of both anthropogenic contributions to global warming and resultant action. r 2003 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "03277ef81159827a097c73cd24f8b5c0", "text": "It is generally accepted that there is something special about reasoning by using mental images. The question of how it is special, however, has never been satisfactorily spelled out, despite more than thirty years of research in the post-behaviorist tradition. This article considers some of the general motivation for the assumption that entertaining mental images involves inspecting a picture-like object. It sets out a distinction between phenomena attributable to the nature of mind to what is called the cognitive architecture, and ones that are attributable to tacit knowledge used to simulate what would happen in a visual situation. With this distinction in mind, the paper then considers in detail the widely held assumption that in some important sense images are spatially displayed or are depictive, and that examining images uses the same mechanisms that are deployed in visual perception. I argue that the assumption of the spatial or depictive nature of images is only explanatory if taken literally, as a claim about how images are physically instantiated in the brain, and that the literal view fails for a number of empirical reasons--for example, because of the cognitive penetrability of the phenomena cited in its favor. 
Similarly, while it is arguably the case that imagery and vision involve some of the same mechanisms, this tells us very little about the nature of mental imagery and does not support claims about the pictorial nature of mental images. Finally, I consider whether recent neuroscience evidence clarifies the debate over the nature of mental images. I claim that when such questions as whether images are depictive or spatial are formulated more clearly, the evidence does not provide support for the picture-theory over a symbol-structure theory of mental imagery. Even if all the empirical claims were true, they do not warrant the conclusion that many people have drawn from them: that mental images are depictive or are displayed in some (possibly cortical) space. Such a conclusion is incompatible with what is known about how images function in thought. We are then left with the provisional counterintuitive conclusion that the available evidence does not support rejection of what I call the \"null hypothesis\"; namely, that reasoning with mental images involves the same form of representation and the same processes as that of reasoning in general, except that the content or subject matter of thoughts experienced as images includes information about how things would look.", "title": "" }, { "docid": "7d301fc945abe95cef82cb56e98e6cfe", "text": "Many modern applications are a mixture of streaming, transactional and analytical workloads. However, traditional data platforms are each designed for supporting a specific type of workload. The lack of a single platform to support all these workloads has forced users to combine disparate products in custom ways. The common practice of stitching heterogeneous environments has caused enormous production woes by increasing complexity and the total cost of ownership. To support this class of applications, we present SnappyData as the first unified engine capable of delivering analytics, transactions, and stream processing in a single integrated cluster. We build this hybrid engine by carefully marrying a big data computational engine (Apache Spark) with a scale-out transactional store (Apache GemFire). We study and address the challenges involved in building such a hybrid distributed system with two conflicting components designed on drastically different philosophies: one being a lineage-based computational model designed for high-throughput analytics, the other a consensusand replication-based model designed for low-latency operations.", "title": "" } ]
scidocsrr
e5ecda54422caa18ce34e33227796a69
Product Fit Uncertainty in Online Markets: Nature, Effects, and Antecedents
[ { "docid": "5f366ed9a90448be28c1ec9249b4ec96", "text": "With the rapid growth of the Internet, the ability of users to create and publish content has created active electronic communities that provide a wealth of product information. However, the high volume of reviews that are typically published for a single product makes harder for individuals as well as manufacturers to locate the best reviews and understand the true underlying quality of a product. In this paper, we reexamine the impact of reviews on economic outcomes like product sales and see how different factors affect social outcomes such as their perceived usefulness. Our approach explores multiple aspects of review text, such as subjectivity levels, various measures of readability and extent of spelling errors to identify important text-based features. In addition, we also examine multiple reviewer-level features such as average usefulness of past reviews and the self-disclosed identity measures of reviewers that are displayed next to a review. Our econometric analysis reveals that the extent of subjectivity, informativeness, readability, and linguistic correctness in reviews matters in influencing sales and perceived usefulness. Reviews that have a mixture of objective, and highly subjective sentences are negatively associated with product sales, compared to reviews that tend to include only subjective or only objective information. However, such reviews are rated more informative (or helpful) by other users. By using Random Forest-based classifiers, we show that we can accurately predict the impact of reviews on sales and their perceived usefulness. We examine the relative importance of the three broad feature categories: “reviewer-related” features, “review subjectivity” features, and “review readability” features, and find that using any of the three feature sets results in a statistically equivalent performance as in the case of using all available features. This paper is the first study that integrates econometric, text mining, and predictive modeling techniques toward a more complete analysis of the information captured by user-generated online reviews in order to estimate their helpfulness and economic impact.", "title": "" }, { "docid": "cbf878cd5fbf898bdf88a2fcf5024826", "text": "Hypotheses involving mediation are common in the behavioral sciences. Mediation exists when a predictor affects a dependent variable indirectly through at least one intervening variable, or mediator. Methods to assess mediation involving multiple simultaneous mediators have received little attention in the methodological literature despite a clear need. We provide an overview of simple and multiple mediation and explore three approaches that can be used to investigate indirect processes, as well as methods for contrasting two or more mediators within a single model. We present an illustrative example, assessing and contrasting potential mediators of the relationship between the helpfulness of socialization agents and job satisfaction. We also provide SAS and SPSS macros, as well as Mplus and LISREL syntax, to facilitate the use of these methods in applications.", "title": "" } ]
[ { "docid": "d7ea7f669ada1ae6cb52ad33ab150837", "text": "Description Given an undirected graph G = ( V, E ), a clique S is a subset of V such that for any two elements u, v ∈ S, ( u, v ) ∈ E. Using the notation ES to represent the subset of edges which have both endpoints in clique S, the induced graph GS = ( S, ES ) is complete. Finding the largest clique in a graph is an NP-hard problem, called the maximum clique problem (MCP). Cliques are intimately related to vertex covers and independent sets. Given a graph G, and defining E* to be the complement of E, S is a maximum independent set in the complementary graph G* = ( V, E* ) if and only if S is a maximum clique in G. It follows that V – S is a minimum vertex cover in G*. There is a separate weighted form of MCP that we will not consider further here.", "title": "" }, { "docid": "7d6cd23ec44d7425b10ed086380bfc14", "text": "Objectives: To analysis different approaches for taxonomy construction to improve the knowledge classification, information retrieval and other data mining process. Findings: Taxonomies learning keep getting more important process for knowledge sharing about a domain. It is also used for application development such as knowledge searching, information retrieval. The taxonomy can be build manually but it is a complex process when the data are so large and it also produce some errors while taxonomy construction. There is various automatic taxonomy construction techniques are used to learn taxonomy based on keyword phrases, text corpus and from domain specific concepts etc. So it is required to build taxonomy with less human effort and with less error rate. This paper provides detailed information about those techniques. Methods: The methods such as lexico-syntatic pattern, semi supervised methods, graph based methods, ontoplus, TaxoLearn, Bayesian approach, two-step method, ontolearn and Automatic Taxonomy Construction from Text are analyzed in this paper. Application/Improvements: The findings of this work prove that the TaxoFinder approach provides better result than other approaches.", "title": "" }, { "docid": "6e82e635682cf87a84463f01c01a1d33", "text": "Finger veins have been proved to be an effective biometric for personal identification in the recent years. However, finger vein images are easily affected by influences such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All these factors may contribute to inaccurate region of interest (ROI) definition, and so degrade the performance of finger vein identification system. To improve this problem, in this paper, we propose a finger vein ROI localization method that has high effectiveness and robustness against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correct calculated orientation can support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database, MMCBNU_6000, to verify the robustness of the proposed method. The proposed method shows the segmentation accuracy of 100%. 
Furthermore, the average processing time of the proposed method is 22 ms for an acquired image, which satisfies the criterion of a real-time finger vein identification system.", "title": "" }, { "docid": "21c4cd3a91a659fcd3800967943a2ffd", "text": "Ground reaction force (GRF) measurement is important in the analysis of human body movements. The main drawback of the existing measurement systems is the restriction to a laboratory environment. This study proposes an ambulatory system for assessing the dynamics of ankle and foot, which integrates the measurement of the GRF with the measurement of human body movement. The GRF and the center of pressure (CoP) are measured using two 6D force/moment sensors mounted beneath the shoe. The movement of the foot and the lower leg is measured using three miniature inertial sensors, two rigidly attached to the shoe and one to the lower leg. The proposed system is validated using a force plate and an optical position measurement system as a reference. The results show good correspondence between both measurement systems, except for the ankle power. The root mean square (rms) difference of the magnitude of the GRF over 10 evaluated trials was 0.012 ± 0.001 N/N (mean ± standard deviation), being 1.1 ± 0.1 % of the maximal GRF magnitude. It should be noted that the forces, moments, and powers are normalized with respect to body weight. The CoP estimation using both methods shows good correspondence, as indicated by the rms difference of 5.1± 0.7 mm, corresponding to 1.7 ± 0.3 % of the length of the shoe. The rms difference between the magnitudes of the heel position estimates was calculated as 18 ± 6 mm, being 1.4 ± 0.5 % of the maximal magnitude. The ankle moment rms difference was 0.004 ± 0.001 Nm/N, being 2.3 ± 0.5 % of the maximal magnitude. Finally, the rms difference of the estimated power at the ankle was 0.02 ± 0.005 W/N, being 14 ± 5 % of the maximal power. This power difference is caused by an inaccurate estimation of the angular velocities using the optical reference measurement system, which is due to considering the foot as a single segment. The ambulatory system considers separate heel and forefoot segments, thus allowing an additional foot moment and power to be estimated. Based on the results of this research, it is concluded that the combination of the instrumented shoe and inertial sensing is a promising tool for the assessment of the dynamics of foot and ankle in an ambulatory setting.", "title": "" }, { "docid": "63c550438679c0353c2f175032a73369", "text": "Large screens or projections in public and private settings have become part of our daily lives, as they enable the collaboration and presentation of information in many diverse ways. When discussing the shown information with other persons, we often point to a displayed object with our index finger or a laser pointer in order to talk about it. Although mobile phone-based interactions with remote screens have been investigated intensively in the last decade, none of them considered such direct pointing interactions for application in everyday tasks. In this paper, we present the concept and design space of PointerPhone which enables users to directly point at objects on a remote screen with their mobile phone and interact with them in a natural and seamless way. We detail the design space and distinguish three categories of interactions including low-level interactions using the mobile phone as a precise and fast pointing device, as well as an input and output device. 
We detail the category of widgetlevel interactions. Further, we demonstrate versatile high-level interaction techniques and show their application in a collaborative presentation scenario. Based on the results of a qualitative study, we provide design implications for application designs.", "title": "" }, { "docid": "717e11d1a112557abdc4160afe75ce16", "text": "Various types of lipids and their metabolic products associated with the biological membrane play a crucial role in signal transduction, modulation, and activation of receptors and as precursors of bioactive lipid mediators. Dysfunction in the lipid homeostasis in the brain could be a risk factor for the many types of neurodegenerative disorders, including Alzheimer’s disease, Huntington’s disease, Parkinson’s disease, and amyotrophic lateral sclerosis. These neurodegenerative disorders are marked by extensive neuronal apoptosis, gliosis, and alteration in the differentiation, proliferation, and development of neurons. Sphingomyelin, a constituent of plasma membrane, as well as its primary metabolite ceramide acts as a potential lipid second messenger molecule linked with the modulation of various cellular signaling pathways. Excessive production of reactive oxygen species associated with enhanced oxidative stress has been implicated with these molecules and involved in the regulation of a variety of different neurodegenerative and neuroinflammatory disorders. Studies have shown that alterations in the levels of plasma lipid/cholesterol concentration may result to neurodegenerative diseases. Alteration in the levels of inflammatory cytokines and mediators in the brain has also been found to be implicated in the pathophysiology of neurodegenerative diseases. Although several mechanisms involved in neuronal apoptosis have been described, the molecular mechanisms underlying the correlation between lipid metabolism and the neurological deficits are not clearly understood. In the present review, an attempt has been made to provide detailed information about the association of lipids in neurodegeneration especially in Alzheimer’s disease.", "title": "" }, { "docid": "b84d6210438144ebe20271ceaffc28a3", "text": "Although precision agriculture has been adopted in few countries; the agriculture industry in India still needs to be modernized with the involvement of technologies for better production, distribution and cost control. In this paper we proposed a multidisciplinary model for smart agriculture based on the key technologies: Internet-of-Things (IoT), Sensors, Cloud-Computing, MobileComputing, Big-Data analysis. Farmers, AgroMarketing agencies and Agro-Vendors need to be registered to the AgroCloud module through MobileApp module. AgroCloud storage is used to store the details of farmers, periodic soil properties of farmlands, agro-vendors and agro-marketing agencies, Agro e-governance schemes and current environmental conditions. Soil and environment properties are sensed and periodically sent to AgroCloud through IoT (Beagle Black Bone). Bigdata analysis on AgroCloud data is done for fertilizer requirements, best crop sequences analysis, total production, and current stock and market requirements. Proposed model is beneficial for increase in agricultural production and for cost control of Agro-products.", "title": "" }, { "docid": "258269307e097a89fd089cf44ba50ecd", "text": "The Visual Notation for OWL Ontologies (VOWL) provides a visual language for the representation of ontologies. 
In contrast to related work, VOWL aims for an intuitive and interactive visualization that is also understandable to users less familiar with ontologies. This paper presents ProtégéVOWL, a first implementation of VOWL realized as a plugin for the ontology editor Protégé. It accesses the internal ontology representation provided by the OWL API and defines graphical mappings according to the VOWL specification. The information visualization toolkit Prefuse is used to render the visual elements and to combine them to a force-directed graph layout. Results from a preliminary user study indicate that ProtégéVOWL does indeed provide a comparatively intuitive and usable ontology visualization.", "title": "" }, { "docid": "ac078f78fcf0f675c21a337f8e3b6f5f", "text": "Abstract. Plenoptic cameras, constructed with internal microlens arrays, capture both spatial and angular information, i.e., the full 4-D radiance, of a scene. The design of traditional plenoptic cameras assumes that each microlens image is completely defocused with respect to the image created by the main camera lens. As a result, only a single pixel in the final image is rendered from each microlens image, resulting in disappointingly low resolution. A recently developed alternative approach based on the focused plenoptic camera uses the microlens array as an imaging system focused on the image plane of the main camera lens. The flexible spatioangular tradeoff that becomes available with this design enables rendering of final images with significantly higher resolution than those from traditional plenoptic cameras. We analyze the focused plenoptic camera in optical phase space and present basic, blended, and depth-based rendering algorithms for producing high-quality, high-resolution images. We also present our graphics-processing-unit-based implementations of these algorithms, which are able to render full screen refocused images in real time. © 2010 SPIE and IS&T. DOI: 10.1117/1.3442712", "title": "" }, { "docid": "52941fc8dc63e57ce2f937410c424b95", "text": "The multidimensional 0–1 knapsack problem is one of the most well-known integer programming problems and has received wide attention from the operational research community during the last four decades. Although recent advances have made possible the solution of medium size instances, solving this NP-hard problem remains a very interesting challenge, especially when the number of constraints increases. This paper surveys the main results published in the literature. The focus is on the theoretical properties as well as approximate or exact solutions of this special 0–1 program. 2003 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "c2a7fa32a3037ff30bd633ed0934ee5f", "text": "databases have been attracting a significant amount of research, industry, and media attention of late. What is all the excitement about? This article provides an overview of this emerging field, clarifying how data mining and knowledge discovery in databases are related both to each other and to related fields, such as machine learning, statistics, and databases. The article mentions particular real-world applications, specific data-mining techniques, challenges involved in real-world applications of knowledge discovery, and current and future research directions in the field.", "title": "" }, { "docid": "27b9350b8ea1032e727867d34c87f1c3", "text": "A field study and an experimental study examined relationships among organizational variables and various responses of victims to perceived wrongdoing. 
Both studies showed that procedural justice climate moderates the effect of organizational variables on the victim's revenge, forgiveness, reconciliation, or avoidance behaviors. In Study 1, a field study, absolute hierarchical status enhanced forgiveness and reconciliation, but only when perceptions of procedural justice climate were high; relative hierarchical status increased revenge, but only when perceptions of procedural justice climate were low. In Study 2, a laboratory experiment, victims were less likely to endorse vengeance or avoidance depending on the type of wrongdoing, but only when perceptions of procedural justice climate were high.", "title": "" }, { "docid": "16c56a9ca685cb1100d175268b6e8ba6", "text": "In this paper, we study the stochastic gradient descent method in analyzing nonconvex statistical optimization problems from a diffusion approximation point of view. Using the theory of large deviation of random dynamical system, we prove in the small stepsize regime and the presence of omnidirectional noise the following: starting from a local minimizer (resp. saddle point) the SGD iteration escapes in a number of iteration that is exponentially (resp. linearly) dependent on the inverse stepsize. We take the deep neural network as an example to study this phenomenon. Based on a new analysis of the mixing rate of multidimensional Ornstein-Uhlenbeck processes, our theory substantiate a very recent empirical results by Keskar et al. (2016), suggesting that large batch sizes in training deep learning for synchronous optimization leads to poor generalization error.", "title": "" }, { "docid": "afcb6c9130e16002100ff68f68d98ff3", "text": "This study characterizes adults who report being physically abused during childhood, and examines associations of reported type and frequency of abuse with adult mental health. Data were derived from the 2000-2001 and 2004-2005 National Epidemiologic Survey on Alcohol and Related Conditions, a large cross-sectional survey of a representative sample (N = 43,093) of the U.S. population. Weighted means, frequencies, and odds ratios of sociodemographic correlates and prevalence of psychiatric disorders were computed. Logistic regression models were used to examine the strength of associations between child physical abuse and adult psychiatric disorders adjusted for sociodemographic characteristics, other childhood adversities, and comorbid psychiatric disorders. Child physical abuse was reported by 8% of the sample and was frequently accompanied by other childhood adversities. Child physical abuse was associated with significantly increased adjusted odds ratios (AORs) of a broad range of DSM-IV psychiatric disorders (AOR = 1.16-2.28), especially attention-deficit hyperactivity disorder, posttraumatic stress disorder, and bipolar disorder. A dose-response relationship was observed between frequency of abuse and several adult psychiatric disorder groups; higher frequencies of assault were significantly associated with increasing adjusted odds. The long-lasting deleterious effects of child physical abuse underscore the urgency of developing public health policies aimed at early recognition and prevention.", "title": "" }, { "docid": "19ae7c50f4393f5a1b39e1160c78f76c", "text": "Building bilingual lexica from non-parallel data is a longstanding natural language processing research problem that could benefit thousands of resource-scarce languages which lack parallel data. 
Recent advances of continuous word representations have opened up new possibilities for this task, e.g. by establishing cross-lingual mapping between word embeddings via a seed lexicon. The method is however unreliable when there are only a limited number of seeds, which is a reasonable setting for resource-scarce languages. We tackle the limitation by introducing a novel matching mechanism into bilingual word representation learning. It captures extra translation pairs exposed by the seeds to incrementally improve the bilingual word embeddings. In our experiments, we find the matching mechanism to substantially improve the quality of the bilingual vector space, which in turn allows us to induce better bilingual lexica with seeds as few as 10.", "title": "" }, { "docid": "7edaef142ecf8a3825affc09ad10d73a", "text": "Internet of Things (IoT) is a network of sensors, actuators, mobile and wearable devices, simply things that have processing and communication modules and can connect to the Internet. In a few years time, billions of such things will start serving in many fields within the concept of IoT. Self-configuration, autonomous device addition, Internet connection and resource limitation features of IoT causes it to be highly prone to the attacks. Denial of Service (DoS) attacks which have been targeting the communication networks for years, will be the most dangerous threats to IoT networks. This study aims to analyze and classify the DoS attacks that may target the IoT environments. In addition to this, the systems that try to detect and mitigate the DoS attacks to IoT will be evaluated.", "title": "" }, { "docid": "a8f391b630a0261a0693c7038370411a", "text": "In this paper, we address the problem of globally localizing and tracking the pose of a camera-equipped micro aerial vehicle (MAV) flying in urban streets at low altitudes without GPS. An image-based global positioning system is introduced to localize the MAV with respect to the surrounding buildings. We propose a novel airground image-matching algorithm to search the airborne image of the MAV within a ground-level, geotagged image database. Based on the detected matching image features, we infer the global position of the MAV by back-projecting the corresponding image points onto a cadastral three-dimensional city model. Furthermore, we describe an algorithm to track the position of the flying vehicle over several frames and to correct the accumulated drift of the visual odometry whenever a good match is detected between the airborne and the ground-level images. The proposed approach is tested on a 2 km trajectory with a small quadrocopter flying in the streets of Zurich. Our vision-based global localization can robustly handle extreme changes in viewpoint, illumination, perceptual aliasing, and over-season variations, thus outperforming conventional visual placerecognition approaches. The dataset is made publicly available to the research community. To the best of our knowledge, this is the first work that studies and demonstrates global localization and position tracking of a drone in urban streets with a single onboard camera. C © 2015 Wiley Periodicals, Inc.", "title": "" }, { "docid": "088078841a9bf35bcfb38c1d85573860", "text": "Multilingual Word Embeddings (MWEs) represent words from multiple languages in a single distributional vector space. 
Unsupervised MWE (UMWE) methods acquire multilingual embeddings without cross-lingual supervision, which is a significant advantage over traditional supervised approaches and opens many new possibilities for low-resource languages. Prior art for learning UMWEs, however, merely relies on a number of independently trained Unsupervised Bilingual Word Embeddings (UBWEs) to obtain multilingual embeddings. These methods fail to leverage the interdependencies that exist among many languages. To address this shortcoming, we propose a fully unsupervised framework for learning MWEs1 that directly exploits the relations between all language pairs. Our model substantially outperforms previous approaches in the experiments on multilingual word translation and cross-lingual word similarity. In addition, our model even beats supervised approaches trained with cross-lingual resources.", "title": "" }, { "docid": "d7dc0dd72295a5c8e49afb4ed3bb763f", "text": "Many significant sources of error take place in the smart antenna system like mismatching between the supposed steering vectors and the real vectors, insufficient calibration of array antenna, etc. These errors correspond to adding spatially white noise to each element of the array antenna, therefore the performance of the smart antenna falls and the desired output signal is destroyed. This paper presents a performance study of a smart antenna system at different noise levels using five adaptive beamforming algorithms and compares between them. The investigated algorithms are Least Mean Square (LMS), Normalized Least Mean Square (NLMS), Sample Matrix Inversion (SMI), Recursive Least Square (RLS) and Hybrid Least Mean Square / Sample Matrix Inversion (LMS/SMI). MATLAB simulation results are illustrated to investigate the performance of these algorithms.", "title": "" }, { "docid": "4bdcc552853c8b658762c0c5d509f362", "text": "In this work, we study the problem of partof-speech tagging for Tweets. In contrast to newswire articles, Tweets are usually informal and contain numerous out-ofvocabulary words. Moreover, there is a lack of large scale labeled datasets for this domain. To tackle these challenges, we propose a novel neural network to make use of out-of-domain labeled data, unlabeled in-domain data, and labeled indomain data. Inspired by adversarial neural networks, the proposed method tries to learn common features through adversarial discriminator. In addition, we hypothesize that domain-specific features of target domain should be preserved in some degree. Hence, the proposed method adopts a sequence-to-sequence autoencoder to perform this task. Experimental results on three different datasets show that our method achieves better performance than state-of-the-art methods.", "title": "" } ]
scidocsrr
98f15dcee44b3b0014a0dc70c2ba6fca
Survey on distance metric learning and dimensionality reduction in data mining
[ { "docid": "3bb905351ce1ea2150f37059ed256a90", "text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.", "title": "" }, { "docid": "effa64c878add2a55a804415cb7c8169", "text": "Dimensionality reduction is an important issue in many machine learning and pattern recognition applications, and the trace ratio (TR) problem is an optimization problem involved in many dimensionality reduction algorithms. Conventionally, the solution is approximated via generalized eigenvalue decomposition due to the difficulty of the original problem. However, prior works have indicated that it is more reasonable to solve it directly than via the conventional way. In this brief, we propose a theoretical overview of the global optimum solution to the TR problem via the equivalent trace difference problem. Eigenvalue perturbation theory is introduced to derive an efficient algorithm based on the Newton-Raphson method. Theoretical issues on the convergence and efficiency of our algorithm compared with prior literature are proposed, and are further supported by extensive empirical results.", "title": "" }, { "docid": "7655df3f32e6cf7a5545ae2231f71e7c", "text": "Many problems in information processing involve some form of dimensionality reduction. In this thesis, we introduce Locality Preserving Projections (LPP). These are linear projective maps that arise by solving a variational problem that optimally preserves the neighborhood structure of the data set. LPP should be seen as an alternative to Principal Component Analysis (PCA) – a classical linear technique that projects the data along the directions of maximal variance. When the high dimensional data lies on a low dimensional manifold embedded in the ambient space, the Locality Preserving Projections are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold. As a result, LPP shares many of the data representation properties of nonlinear techniques such as Laplacian Eigenmaps or Locally Linear Embedding. Yet LPP is linear and more crucially is defined everywhere in ambient space rather than just on the training data points. Theoretical analysis shows that PCA, LPP, and Linear Discriminant Analysis (LDA) can be obtained from different graph models. Central to this is a graph structure that is inferred on the data points. 
LPP finds a projection that respects this graph structure. We have applied our algorithms to several real world applications, e.g. face analysis and document representation.", "title": "" } ]
[ { "docid": "f136e875f021ea3ea67a87c6d0b1e869", "text": "Platelet-rich plasma (PRP) has been utilized for many years as a regenerative agent capable of inducing vascularization of various tissues using blood-derived growth factors. Despite this, drawbacks mostly related to the additional use of anti-coagulants found in PRP have been shown to inhibit the wound healing process. For these reasons, a novel platelet concentrate has recently been developed with no additives by utilizing lower centrifugation speeds. The purpose of this study was therefore to investigate osteoblast behavior of this novel therapy (injectable-platelet-rich fibrin; i-PRF, 100% natural with no additives) when compared to traditional PRP. Human primary osteoblasts were cultured with either i-PRF or PRP and compared to control tissue culture plastic. A live/dead assay, migration assay as well as a cell adhesion/proliferation assay were investigated. Furthermore, osteoblast differentiation was assessed by alkaline phosphatase (ALP), alizarin red and osteocalcin staining, as well as real-time PCR for genes encoding Runx2, ALP, collagen1 and osteocalcin. The results showed that all cells had high survival rates throughout the entire study period irrespective of culture-conditions. While PRP induced a significant 2-fold increase in osteoblast migration, i-PRF demonstrated a 3-fold increase in migration when compared to control tissue-culture plastic and PRP. While no differences were observed for cell attachment, i-PRF induced a significantly higher proliferation rate at three and five days when compared to PRP. Furthermore, i-PRF induced significantly greater ALP staining at 7 days and alizarin red staining at 14 days. A significant increase in mRNA levels of ALP, Runx2 and osteocalcin, as well as immunofluorescent staining of osteocalcin was also observed in the i-PRF group when compared to PRP. In conclusion, the results from the present study favored the use of the naturally-formulated i-PRF when compared to traditional PRP with anti-coagulants. Further investigation into the direct role of fibrin and leukocytes contained within i-PRF are therefore warranted to better elucidate their positive role in i-PRF on tissue wound healing.", "title": "" }, { "docid": "625c5c89b9f0001a3eed1ec6fb498c23", "text": "About a 100 years ago, the Drosophila white mutant marked the birth of Drosophila genetics. The white gene turned out to encode the first well studied ABC transporter in arthropods. The ABC gene family is now recognized as one of the largest transporter families in all kingdoms of life. The majority of ABC proteins function as primary-active transporters that bind and hydrolyze ATP while transporting a large diversity of substrates across lipid membranes. Although extremely well studied in vertebrates for their role in drug resistance, less is known about the role of this family in the transport of endogenous and exogenous substances in arthropods. The ABC families of five insect species, a crustacean and a chelicerate have been annotated in some detail. We conducted a thorough phylogenetic analysis of the seven arthropod and human ABC protein subfamilies, to infer orthologous relationships that might suggest conserved function. Most orthologous relationships were found in the ABCB half transporter, ABCD, ABCE and ABCF subfamilies, but specific expansions within species and lineages are frequently observed and discussed. 
We next surveyed the role of ABC transporters in the transport of xenobiotics/plant allelochemicals and their involvement in insecticide resistance. The involvement of ABC transporters in xenobiotic resistance in arthropods is historically not well documented, but an increasing number of studies using unbiased differential gene expression analysis now points to their importance. We give an overview of methods that can be used to link ABC transporters to resistance. ABC proteins have also recently been implicated in the mode of action and resistance to Bt toxins in Lepidoptera. Given the enormous interest in Bt toxicology in transgenic crops, such findings will provide an impetus to further reveal the role of ABC transporters in arthropods. 2014 The Authors. Published by Elsevier Ltd. Open access under CC BY-NC-ND license.", "title": "" }, { "docid": "552ad2b05d0e7812bb5e17fb22c3de28", "text": "Behavior-based agents are becoming increasingly used across a variety of platforms. The common approach to building such agents involves implementing the behavior synchronization and management algorithms directly in the agent’s programming environment. This process makes it hard, if not impossible, to share common components of a behavior architecture across different agent implementations. This lack of reuse also makes it cumbersome to experiment with different behavior architectures as it forces users to manipulate native code directly, e.g. C++ or Java. In this paper, we provide a high-level behavior-centric programming language and an automated code generation system which together overcome these issues and facilitate the process of implementing and experimenting with different behavior architectures. The language is specifically designed to allow clear and precise descriptions of a behavior hierarchy, and can be automatically translated by our generator into C++ code. Once compiled, this C++ code yields an executable that directs the execution of behaviors in the agent’s sense-plan-act cycle. We have tested this process with different platforms, including both software and robot agents, with various behavior architectures. We experienced the advantages of defining an agent by directly reasoning at the behavior architecture level followed by the automatic native code generation.", "title": "" }, { "docid": "3535e70b1c264d99eff5797413650283", "text": "MIMO is one of the techniques used in LTE Release 8 to achieve very high data rates. A field trial was performed in a pre-commercial LTE network. The objective is to investigate how well MIMO works with realistically designed handhelds in band 13 (746-756 MHz in downlink). In total, three different handheld designs were tested using antenna mockups. In addition to the mockups, a reference antenna design with less stringent restrictions on physical size and excellent properties for MIMO was used. The trial comprised test drives in areas with different characteristics and with different network load levels. The effects of hands holding the devices and the effect of using the device inside a test vehicle were also investigated. In general, it is very clear from the trial that MIMO works very well and gives a substantial performance improvement at the tested carrier frequency if the antenna design of the hand-held is well made with respect to MIMO. 
In fact, the best of the handhelds performed similar to the reference antenna.", "title": "" }, { "docid": "8aa305f217314d60ed6c9f66d20a7abf", "text": "The circadian timing system drives daily rhythmic changes in drug metabolism and controls rhythmic events in cell cycle, DNA repair, apoptosis, and angiogenesis in both normal tissue and cancer. Rodent and human studies have shown that the toxicity and anticancer activity of common cancer drugs can be significantly modified by the time of administration. Altered sleep/activity rhythms are common in cancer patients and can be disrupted even more when anticancer drugs are administered at their most toxic time. Disruption of the sleep/activity rhythm accelerates cancer growth. The complex circadian time-dependent connection between host, cancer and therapy is further impacted by other factors including gender, inter-individual differences and clock gene polymorphism and/or down regulation. It is important to take circadian timing into account at all stages of new drug development in an effort to optimize the therapeutic index for new cancer drugs. Better measures of the individual differences in circadian biology of host and cancer are required to further optimize the potential benefit of chronotherapy for each individual patient.", "title": "" }, { "docid": "9c89c4c4ae75f9b003fca6696163619a", "text": "We study a class of stochastic optimization models of expected utility in markets with stochastically changing investment opportunities. The prices of the primitive assets are modelled as diffusion processes whose coefficients evolve according to correlated diffusion factors. Under certain assumptions on the individual preferences, we are able to produce reduced form solutions. Employing a power transformation, we express the value function in terms of the solution of a linear parabolic equation, with the power exponent depending only on the coefficients of correlation and risk aversion. This reduction facilitates considerably the study of the value function and the characterization of the optimal hedging demand. The new results demonstrate an interesting connection with valuation techniques using stochastic differential utilities and also, with distorted measures in a dynamic setting.", "title": "" }, { "docid": "d3d57d67d4384f916f9e9e48f3fcdcdb", "text": "Web-based social networks have become popular as a medium for disseminating information and connecting like-minded people. The public accessibility of such networks with the ability to share opinions, thoughts, information, and experience offers great promise to enterprises and governments. In addition to individuals using such networks to connect to their friends and families, governments and enterprises have started exploiting these platforms for delivering their services to citizens and customers. However, the success of such attempts relies on the level of trust that members have with each other as well as with the service provider. Therefore, trust becomes an essential and important element of a successful social network. In this article, we present the first comprehensive review of social and computer science literature on trust in social networks. We first review the existing definitions of trust and define social trust in the context of social networks. We then discuss recent works addressing three aspects of social trust: trust information collection, trust evaluation, and trust dissemination. 
Finally, we compare and contrast the literature and identify areas for further research in social trust.", "title": "" }, { "docid": "405a1e8badfb85dcd1d5cc9b4a0026d2", "text": "It is of great practical importance to improve yield and quality of vegetables in soilless cultures. This study investigated the effects of iron-nutrition management on yield and quality of hydroponic-cultivated spinach (Spinacia oleracea L.). The results showed that mild Fe-deficient treatment (1 μM FeEDTA) yielded a greater biomass of edible parts than Fe-omitted treatment (0 μM FeEDTA) or Fe-sufficient treatments (10 and 50 μM FeEDTA). Conversely, mild Fe-deficient treatment had the lowest nitrate concentration in the edible parts out of all the Fe treatments. Interestingly, all the concentrations of soluble sugar, soluble protein and ascorbate in mild Fe-deficient treatments were higher than Fe-sufficient treatments. In addition, both phenolic concentration and DPPH scavenging activity in mild Fe-deficient treatments were comparable with those in Fe-sufficient treatments, but were higher than those in Fe-omitted treatments. Therefore, we concluded that using a mild Fe-deficient nutrition solution to cultivate spinach not only would increase yield, but also would improve quality.", "title": "" }, { "docid": "781ebbf85a510cfd46f0c824aa4aba7e", "text": "Human activity recognition (HAR) is an important research area in the fields of human perception and computer vision due to its wide range of applications. These applications include: intelligent video surveillance, ambient assisted living, human computer interaction, human-robot interaction, entertainment, and intelligent driving. Recently, with the emergence and successful deployment of deep learning techniques for image classification, researchers have migrated from traditional handcrafting to deep learning techniques for HAR. However, handcrafted representation-based approaches are still widely used due to some bottlenecks such as computational complexity of deep learning techniques for activity recognition. However, approaches based on handcrafted representation are not able to handle complex scenarios due to their limitations and incapability; therefore, resorting to deep learning-based techniques is a natural option. This review paper presents a comprehensive survey of both handcrafted and learning-based action representations, offering comparison, analysis, and discussions on these approaches. In addition to this, the well-known public datasets available for experimentations and important applications of HAR are also presented to provide further insight into the field. This is the first review paper of its kind which presents all these aspects of HAR in a single review article with comprehensive coverage of each part. Finally, the paper is concluded with important discussions and research directions in the domain of HAR.", "title": "" }, { "docid": "0c805b994e89c878a62f2e1066b0a8e7", "text": "3D spatial data modeling is one of the key research problems in 3D GIS. More and more applications depend on these 3D spatial data. Mostly, these data are stored in Geo-DBMSs. However, recent Geo-DBMSs do not support 3D primitives modeling, it only able to describe a single-attribute of the third-dimension, i.e. modeling 2.5D datasets that used 2D primitives (plus a single z-coordinate) such as polygons in 3D space. This research focuses on 3D topological model based on space partition for 3D GIS, for instance, 3D polygons or tetrahedron form a solid3D object. 
Firstly, this report discusses formal definitions of 3D spatial objects, and then all the properties of each object primitives will be elaborated in detailed. The author also discusses methods for constructing the topological properties to support object semantics is introduced. The formal framework to describe the spatial model, database using Oracle Spatial is also given in this report. All related topological structures that forms the object features are discussed in detail. All related features are tested using real 3D spatial dataset of 3D building. Finally, the report concludes the experiment via visualization of using AutoDesk Map 3D.", "title": "" }, { "docid": "1b030e734e3ddfb5e612b1adc651b812", "text": "Clustering is an essential task in many areas such as machine learning, data mining and computer vision among others. Cluster validation aims to assess the quality of partitions obtained by clustering algorithms. Several indexes have been developed for cluster validation purpose. They can be external or internal depending on the availability of ground truth clustering. This paper deals with the issue of cluster validation of large data set. Indeed, in the era of big data this task becomes even more difficult to handle and requires parallel and distributed approaches. In this work, we are interested in external validation indexes. More specifically, this paper proposes a model for purity based cluster validation in parallel and distributed manner using Map-Reduce paradigm in order to be able to scale with increasing dataset sizes.\n The experimental results show that our proposed model is valid and achieves properly cluster validation of large datasets.", "title": "" }, { "docid": "d71040311b8753299377b02023ba5b4c", "text": "Learning based methods have shown very promising results for the task of depth estimation in single images. However, most existing approaches treat depth prediction as a supervised regression problem and as a result, require vast quantities of corresponding ground truth depth data for training. Just recording quality depth data in a range of environments is a challenging problem. In this paper, we innovate beyond existing approaches, replacing the use of explicit depth data during training with easier-to-obtain binocular stereo footage. We propose a novel training objective that enables our convolutional neural network to learn to perform single image depth estimation, despite the absence of ground truth depth data. Exploiting epipolar geometry constraints, we generate disparity images by training our network with an image reconstruction loss. We show that solving for image reconstruction alone results in poor quality depth images. To overcome this problem, we propose a novel training loss that enforces consistency between the disparities produced relative to both the left and right images, leading to improved performance and robustness compared to existing approaches. Our method produces state of the art results for monocular depth estimation on the KITTI driving dataset, even outperforming supervised methods that have been trained with ground truth depth.", "title": "" }, { "docid": "dc2ea774fb11bc09e80b9de3acd7d5a6", "text": "The Hough transform is a well-known straight line detection algorithm and it has been widely used for many lane detection algorithms. However, its real-time operation is not guaranteed due to its high computational complexity. In this paper, we designed a Hough transform hardware accelerator on FPGA to process it in real time.
Its FPGA logic area usage was reduced by limiting the angles of the lines to (-20, 20) degrees which are enough for lane detection applications, and its arithmetic computations were performed in parallel to speed up the processing time. As a result of FPGA synthesis using Xilinx Vertex-5 XC5VLX330 device, it occupies 4,521 slices and 25.6Kbyte block memory giving performance of 10,000fps in VGA images(5000 edge points). The proposed hardware on FPGA (0.1ms) is 450 times faster than the software implementation on ARM Cortex-A9 1.4GHz (45ms). Our Hough transform hardware was verified by applying it to the newly developed LDWS (lane departure warning system).", "title": "" }, { "docid": "dd726458660c3dfe05bd775df562e188", "text": "Maternally deprived rats were treated with tianeptine (15 mg/kg) once a day for 14 days during their adult phase. Their behavior was then assessed using the forced swimming and open field tests. The BDNF, NGF and energy metabolism were assessed in the rat brain. Deprived rats increased the immobility time, but tianeptine reversed this effect and increased the swimming time; the BDNF levels were decreased in the amygdala of the deprived rats treated with saline and the BDNF levels were decreased in the nucleus accumbens within all groups; the NGF was found to have decreased in the hippocampus, amygdala and nucleus accumbens of the deprived rats; citrate synthase was increased in the hippocampus of non-deprived rats treated with tianeptine and the creatine kinase was decreased in the hippocampus and amygdala of the deprived rats; the mitochondrial complex I and II–III were inhibited, and tianeptine increased the mitochondrial complex II and IV in the hippocampus of the non-deprived rats; the succinate dehydrogenase was increased in the hippocampus of non-deprived rats treated with tianeptine. So, tianeptine showed antidepressant effects conducted on maternally deprived rats, and this can be attributed to its action on the neurochemical pathways related to depression.", "title": "" }, { "docid": "79593cc56da377d834f33528b833641f", "text": "Machine learning offers a fantastically powerful toolkit for building complex systems quickly. This paper argues that it is dangerous to think of these quick wins as coming for free. Using the framework of technical debt, we note that it is remarkably easy to incur massive ongoing maintenance costs at the system level when applying machine learning. The goal of this paper is to highlight several machine learning specific risk factors and design patterns to be avoided or refactored where possible. These include boundary erosion, entanglement, hidden feedback loops, undeclared consumers, data dependencies, changes in the external world, and a variety of system-level anti-patterns. 1 Machine Learning and Complex Systems Real world software engineers are often faced with the challenge of moving quickly to ship new products or services, which can lead to a dilemma between speed of execution and quality of engineering. The concept of technical debt was first introduced by Ward Cunningham in 1992 as a way to help quantify the cost of such decisions. Like incurring fiscal debt, there are often sound strategic reasons to take on technical debt. Not all debt is necessarily bad, but technical debt does tend to compound. Deferring the work to pay it off results in increasing costs, system brittleness, and reduced rates of innovation.
Traditional methods of paying off technical debt include refactoring, increasing coverage of unit tests, deleting dead code, reducing dependencies, tightening APIs, and improving documentation [4]. The goal of these activities is not to add new functionality, but to make it easier to add future improvements, be cheaper to maintain, and reduce the likelihood of bugs. One of the basic arguments in this paper is that machine learning packages have all the basic code complexity issues as normal code, but also have a larger system-level complexity that can create hidden debt. Thus, refactoring these libraries, adding better unit tests, and associated activity is time well spent but does not necessarily address debt at a systems level. In this paper, we focus on the system-level interaction between machine learning code and larger systems as an area where hidden technical debt may rapidly accumulate. At a system-level, a machine learning model may subtly erode abstraction boundaries. It may be tempting to re-use input signals in ways that create unintended tight coupling of otherwise disjoint systems. Machine learning packages may often be treated as black boxes, resulting in large masses of “glue code” or calibration layers that can lock in assumptions. Changes in the external world may make models or input signals change behavior in unintended ways, ratcheting up maintenance cost and the burden of any debt. Even monitoring that the system as a whole is operating as intended may be difficult without careful design.", "title": "" }, { "docid": "6cad42e549f449c7156b0a07e2e02726", "text": "Fog computing extends the cloud computing paradigm by placing resources close to the edges of the network to deal with the upcoming growth of connected devices. Smart city applications, such as health monitoring and predictive maintenance, will introduce a new set of stringent requirements, such as low latency, since resources can be requested on-demand simultaneously by multiple devices at different locations. It is then necessary to adapt existing network technologies to future needs and design new architectural concepts to help meet these strict requirements. This article proposes a fog computing framework enabling autonomous management and orchestration functionalities in 5G-enabled smart cities. Our approach follows the guidelines of the European Telecommunications Standards Institute (ETSI) NFV MANO architecture extending it with additional software components. The contribution of our work is its fully-integrated fog node management system alongside the foreseen application layer Peer-to-Peer (P2P) fog protocol based on the Open Shortest Path First (OSPF) routing protocol for the exchange of application service provisioning information between fog nodes. Evaluations of an anomaly detection use case based on an air monitoring application are presented. Our results show that the proposed framework achieves a substantial reduction in network bandwidth usage and in latency when compared to centralized cloud solutions.", "title": "" }, { "docid": "d59d1ac7b3833ee1e60f7179a4a9af99", "text": "Cloud computing moved away from personal computers and the individual enterprise application server to services provided by the cloud of computers. The emergence of cloud computing has made a tremendous impact on the Information Technology (IT) industry over the past few years. Currently IT industry needs Cloud computing services to provide best opportunities to real world.
Cloud computing is in initial stages, with many issues still to be addressed. The objective of this paper is to explore the different issues of cloud computing and identify important research opportunities in this increasingly important area. We present different design challenges categorized under security challenges, Data Challenges, Performance challenges and other Design Challenges.", "title": "" }, { "docid": "fedcb2bd51b9fd147681ae23e03c7336", "text": "Epidemiological studies have revealed the important role that foodstuffs of vegetable origin have to play in the prevention of numerous illnesses. The natural antioxidants present in such foodstuffs, among which the flavonoids are widely present, may be responsible for such an activity. Flavonoids are compounds that are low in molecular weight and widely distributed throughout the vegetable kingdom. They may be of great utility in states of acute or chronic diarrhoea through the inhibition of intestinal secretion and motility, and may also be beneficial in the reduction of chronic inflammatory damage in the intestine, by affording protection against oxidative stress and by preserving mucosal function. For this reason, the use of these agents is recommended in the treatment of inflammatory bowel disease, in which various factors are involved in extreme immunological reactions, which lead to chronic intestinal inflammation.", "title": "" }, { "docid": "a89c0a16d161ef41603583567f85a118", "text": "360° Video services with resolutions of UHD and beyond for Virtual Reality head mounted displays are a challenging task due to limits of video decoders in constrained end devices.
Adaptivity to the current user viewport is a promising approach but incurs significant encoding overhead when encoding per user or set of viewports. A more efficient way to achieve viewport adaptive streaming is to facilitate motion-constrained HEVC tiles. Original content resolution within the user viewport is preserved while content currently not presented to the user is delivered in lower resolution. A lightweight aggregation of varying resolution tiles into a single HEVC bitstream can be carried out on-the-fly and allows usage of a single decoder instance on the end device.", "title": "" }, { "docid": "241f5a88f53c929cc11ce0edce191704", "text": "Enabled by mobile and wearable technology, personal health data delivers immense and increasing value for healthcare, benefiting both care providers and medical research. The secure and convenient sharing of personal health data is crucial to the improvement of the interaction and collaboration of the healthcare industry. Faced with the potential privacy issues and vulnerabilities existing in current personal health data storage and sharing systems, as well as the concept of self-sovereign data ownership, we propose an innovative user-centric health data sharing solution by utilizing a decentralized and permissioned blockchain to protect privacy using channel formation scheme and enhance the identity management using the membership service supported by the blockchain. A mobile application is deployed to collect health data from personal wearable devices, manual input, and medical devices, and synchronize data to the cloud for data sharing with healthcare providers and health insurance companies. To preserve the integrity of health data, within each record, a proof of integrity and validation is permanently retrievable from cloud database and is anchored to the blockchain network. Moreover, for scalable and performance considerations, we adopt a tree-based data processing and batching method to handle large data sets of personal health data collected and uploaded by the mobile platform.", "title": "" } ]
scidocsrr
6e69ec92774bbaa8842689871960d123
Emotion and motivation: the role of the amygdala, ventral striatum, and prefrontal cortex
[ { "docid": "9d9714639d8f5c24bdb3f731f31c88d7", "text": "Controversy surrounds the function of the anterior cingulate cortex. Recent discussions about its role in behavioural control have centred on three main issues: its involvement in motor control, its proposed role in cognition and its relationship with the arousal/drive state of the organism. I argue that the overlap of these three domains is key to distinguishing the anterior cingulate cortex from other frontal regions, placing it in a unique position to translate intentions to actions.", "title": "" } ]
[ { "docid": "b4bc5ccbe0929261856d18272c47a3de", "text": "ROC analysis is increasingly being recognised as an important tool for evaluation and comparison of classifiers when the operating characteristics (i.e. class distribution and cost parameters) are not known at training time. Usually, each classifier is characterised by its estimated true and false positive rates and is represented by a single point in the ROC diagram. In this paper, we show how a single decision tree can represent a set of classifiers by choosing different labellings of its leaves, or equivalently, an ordering on the leaves. In this setting, rather than estimating the accuracy of a single tree, it makes more sense to use the area under the ROC curve (AUC) as a quality metric. We also propose a novel splitting criterion which chooses the split with the highest local AUC. To the best of our knowledge, this is the first probabilistic splitting criterion that is not based on weighted average impurity. We present experiments suggesting that the AUC splitting criterion leads to trees with equal or better AUC value, without sacrificing accuracy if a single labelling is chosen.", "title": "" }, { "docid": "86dd65bddeb01d4395b81cef0bc4f00e", "text": "Many people may see the development of software and hardware like different disciplines. However, there are great similarities between them that have been shown due to the appearance of extensions for general purpose programming languages for its use as hardware description languages. In this contribution, the approach proposed by the MyHDL package to use Python as an HDL is analyzed by making a comparative study. This study is based on the independent application of Verilog and Python based flows to the development of a real peripheral. The use of MyHDL has revealed to be a powerful and promising tool, not only because of the surprising results, but also because it opens new horizons towards the development of new techniques for modeling and verification, using the full power of one of the most versatile programming languages nowadays.", "title": "" }, { "docid": "1891bf842d446a7d323dc207b38ff5a9", "text": "We use linear programming techniques to obtain new upper bounds on the maximal squared minimum distance of spherical codes with fixed cardinality. Functions Qj(n, s) are introduced with the property that Qj(n, s) < 0 for some j > m iff the Levenshtein bound Lm(n, s) on A(n, s) = max{|W | : W is an (n, |W |, s) code} can be improved by a polynomial of degree at least m+1. General conditions on the existence of new bounds are presented. We prove that for fixed dimension n ≥ 5 there exist a constant k = k(n) such that all Levenshtein bounds Lm(n, s) for m ≥ 2k− 1 can be improved. An algorithm for obtaining new bounds is proposed and discussed.", "title": "" }, { "docid": "655ca54fc6867d05b7a17fe2f0c2905e", "text": "First of all, the railway traffic control process should ensure the safety. One of the current research areas is to ensure the security of data in the distributed rail traffic control systems using wireless networks. Emerging security threats are the result of, among others, an unknown number of users who may want to access the network, and an unknown number and type of equipment that can be connected to the network. It can cause potential threats resulting from unknown format of data and hacker attacks. In order to counteract these threats, it is necessary to apply safety functions. These functions include the use of data integrity code and encryption methods. 
Additionally, due to character of railway traffic control systems, it is necessary to keep time determinism while sending telegrams. Exceeding the maximum execution time of a cryptographic algorithm and creating too large blocks of data constitute two critical factors that should be taken into account while developing the system for data transmission. This could result in the inability to transmit data at a given throughput of the transmission channel (bandwidth) at a certain time. The paper presents analysis of delays resulting from the realization of safety functions: such as to prepare the data for transfer and their later decoding. Following block encryption algorithms have been analyzed: Blowfish, Twofish, DES, 3DES, AES-128, AES-192 and AES-256 for modes: ECB, CBC, PCBC, CFB, OFB, CTR and data integrity codes: MD-5, SHA-1, SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224 and SHA-512/256. The obtained results can be very helpful in the development of new rail traffic control systems in which wireless data transmission is planned.", "title": "" }, { "docid": "7925100b85dce273b92f4d9f52253cda", "text": "Named entities such as people, locations, and organizations play a vital role in characterizing online content. They often reflect information of interest and are frequently used in search queries. Although named entities can be detected reliably from textual content, extracting relations among them is more challenging, yet useful in various applications (e.g., news recommending systems). In this paper, we present a novel model and system for learning semantic relations among named entities from collections of news articles. We model each named entity occurrence with sparse structured logistic regression, and consider the words (predictors) to be grouped based on background semantics. This sparse group LASSO approach forces the weights of word groups that do not influence the prediction towards zero. The resulting sparse structure is utilized for defining the type and strength of relations. Our unsupervised system yields a named entities’ network where each relation is typed, quantified, and characterized in context. These relations are the key to understanding news material over time and customizing newsfeeds for readers. Extensive evaluation of our system on articles from TIME magazine and BBC News shows that the learned relations correlate with static semantic relatedness measures like WLM, and capture the evolving relationships among named entities over time.", "title": "" }, { "docid": "12fa352b1e5912f67337e7dc42c3d4b1", "text": "A novel parallel VLSI architecture is proposed in order to improve the performance of the H.265/HEVC deblocking filter. The overall computation is pipelined, and a new parallel-zigzag processing order is introduced to achieve high throughput. The processing order of the filter is efficiently rearranged to process the horizontal edges and vertical edges at the same time. The proposed H.265/HEVC deblocking filter architecture improves the parallelism by dissolving the data dependency between the adjacent filtering operations. Our design is also compatible with H.264/AVC. Experimental results demonstrate that our architecture shows the best performance compared with other architectures known so far at the expense of the slightly increased gate count. We improve the performance by 52.3%, while the area is increased by 25.8% compared with the previously known best architecture for H.264/AVC. 
The operating clock frequency of our design is 226 MHz in TSMC LVT 65 process. The proposed design delivers the performance to process 1080p HD at 60 fps.", "title": "" }, { "docid": "3f30c821132e07838de325c4f2183f84", "text": "This paper argues for the recognition of important experiential aspects of consumption. Specifically, a general framework is constructed to represent typical consumer behavior variables. Based on this paradigm, the prevailing information processing model is contrasted with an experiential view that focuses on the symbolic, hedonic, and esthetic nature of consumption. This view regards the consumption experience as a phenomenon directed toward the pursuit of fantasies, feelings, and fun.", "title": "" }, { "docid": "2496fa63868717ce2ed56c1777c4b0ed", "text": "Person re-identification (reID) is an important task that requires to retrieve a person’s images from an image dataset, given one image of the person of interest. For learning robust person features, the pose variation of person images is one of the key challenges. Existing works targeting the problem either perform human alignment, or learn human-region-based representations. Extra pose information and computational cost is generally required for inference. To solve this issue, a Feature Distilling Generative Adversarial Network (FD-GAN) is proposed for learning identity-related and pose-unrelated representations. It is a novel framework based on a Siamese structure with multiple novel discriminators on human poses and identities. In addition to the discriminators, a novel same-pose loss is also integrated, which requires appearance of a same person’s generated images to be similar. After learning pose-unrelated person features with pose guidance, no auxiliary pose information and additional computational cost is required during testing. Our proposed FD-GAN achieves state-of-the-art performance on three person reID datasets, which demonstrates that the effectiveness and robust feature distilling capability of the proposed FD-GAN. ‡‡", "title": "" }, { "docid": "2a56585a288405b9adc7d0844980b8bf", "text": "In this paper we propose the first exact solution to the problem of estimating the 3D room layout from a single image. This problem is typically formulated as inference in a Markov random field, where potentials count image features (e.g ., geometric context, orientation maps, lines in accordance with vanishing points) in each face of the layout. We present a novel branch and bound approach which splits the label space in terms of candidate sets of 3D layouts, and efficiently bounds the potentials in these sets by restricting the contribution of each individual face. We employ integral geometry in order to evaluate these bounds in constant time, and as a consequence, we not only obtain the exact solution, but also in less time than approximate inference tools such as message-passing. We demonstrate the effectiveness of our approach in two benchmarks and show that our bounds are tight, and only a few evaluations are necessary.", "title": "" }, { "docid": "6974bf94292b51fc4efd699c28c90003", "text": "We just released an Open Source receiver that is able to decode IEEE 802.11a/g/p Orthogonal Frequency Division Multiplexing (OFDM) frames in software. This is the first Software Defined Radio (SDR) based OFDM receiver supporting channel bandwidths up to 20MHz that is not relying on additional FPGA code. 
Our receiver comprises all layers from the physical up to decoding the MAC packet and extracting the payload of IEEE 802.11a/g/p frames. In our demonstration, visitors can interact live with the receiver while it is decoding frames that are sent over the air. The impact of moving the antennas and changing the settings are displayed live in time and frequency domain. Furthermore, the decoded frames are fed to Wireshark where the WiFi traffic can be further investigated. It is possible to access and visualize the data in every decoding step from the raw samples, the autocorrelation used for frame detection, the subcarriers before and after equalization, up to the decoded MAC packets. The receiver is completely Open Source and represents one step towards experimental research with SDR.", "title": "" }, { "docid": "c508f62dfd94d3205c71334638790c54", "text": "Financial and capital markets (especially stock markets) are considered high return investment fields, which in the same time are dominated by uncertainty and volatility. Stock market prediction tries to reduce this uncertainty and consequently the risk. As stock markets are influenced by many economical, political and even psychological factors, it is very difficult to forecast the movement of future values. Since classical statistical methods (primarily technical and fundamental analysis) are unable to deal with the non-linearity in the dataset, thus it became necessary the utilization of more advanced forecasting procedures. Financial prediction is a research active area and neural networks have been proposed as one of the most promising methods for such predictions. Artificial Neural Networks (ANNs) mimics, simulates the learning capability of the human brain. NNs are able to find accurate solutions in a complex, noisy environment or even to deal efficiently with partial information. In the last decade the ANNs have been widely used for predicting financial markets, because they are capable to detect and reproduce linear and nonlinear relationships among a set of variables. Furthermore they have a potential of learning the underlying mechanics of stock markets, i.e. to capture the complex dynamics and non-linearity of the stock market time series. In this paper, study we will get acquainted with some financial time series analysis concepts and theories linked to stock markets, as well as with the neural networks based systems and hybrid techniques that were used to solve several forecasting problems concerning the capital, financial and stock markets. Putting the foregoing experimental results to use, we will develop, implement a multilayer feedforward neural network based financial time series forecasting system. Thus, this system will be used to predict the future index values of major US and European stock exchanges and the evolution of interest rates as well as the future stock price of some US mammoth companies (primarily from IT branch).", "title": "" }, { "docid": "36fca3bd6a23b2f99438fe07ec0f0b9f", "text": "Best management practices (BMPs) have been widely used to address hydrology and water quality issues in both agricultural and urban areas. Increasing numbers of BMPs have been studied in research projects and implemented in watershed management projects, but a gap remains in quantifying their effectiveness through time. In this paper, we review the current knowledge about BMP efficiencies, which indicates that most empirical studies have focused on short-term efficiencies, while few have explored long-term efficiencies. 
Most simulation efforts that consider BMPs assume constant performance irrespective of ages of the practices, generally based on anticipated maintenance activities or the expected performance over the life of the BMP(s). However, efficiencies of BMPs likely change over time irrespective of maintenance due to factors such as degradation of structures and accumulation of pollutants. Generally, the impacts of BMPs implemented in water quality protection programs at watershed levels have not been as rapid or large as expected, possibly due to overly high expectations for practice long-term efficiency, with BMPs even being sources of pollutants under some conditions and during some time periods. The review of available datasets reveals that current data are limited regarding both short-term and long-term BMP efficiency. Based on this review, this paper provides suggestions regarding needs and opportunities. Existing practice efficiency data need to be compiled. New data on BMP efficiencies that consider important factors, such as maintenance activities, also need to be collected. Then, the existing and new data need to be analyzed. Further research is needed to create a framework, as well as modeling approaches built on the framework, to simulate changes in BMP efficiencies with time. The research community needs to work together in addressing these needs and opportunities, which will assist decision makers in formulating better decisions regarding BMP implementation in watershed management projects.", "title": "" }, { "docid": "fac03559daded831095dfc9e083b794d", "text": "Multi-label classification is prevalent in many real-world applications, where each example can be associated with a set of multiple labels simultaneously. The key challenge of multi-label classification comes from the large space of all possible label sets, which is exponential to the number of candidate labels. Most previous work focuses on exploiting correlations among different labels to facilitate the learning process. It is usually assumed that the label correlations are given beforehand or can be derived directly from data samples by counting their label co-occurrences. However, in many real-world multi-label classification tasks, the label correlations are not given and can be hard to learn directly from data samples within a moderate-sized training set. Heterogeneous information networks can provide abundant knowledge about relationships among different types of entities including data samples and class labels. In this paper, we propose to use heterogeneous information networks to facilitate the multi-label classification process. By mining the linkage structure of heterogeneous information networks, multiple types of relationships among different class labels and data samples can be extracted. Then we can use these relationships to effectively infer the correlations among different class labels in general, as well as the dependencies among the label sets of data examples inter-connected in the network. Empirical studies on real-world tasks demonstrate that the performance of multi-label classification can be effectively boosted using heterogeneous information net- works.", "title": "" }, { "docid": "516a2ec7c1dc332a4b375be7c11ba48e", "text": "Due to Evolution of internet and social media, every internet user expresses his opinion and views on the web. These views are both regarding day-to-day transaction and international issues as well. 
With the rapid growth of web technology internet has become the place for online learning and exchange ideas also. With this information other users make up their mind about a particular service, product or organization. This gives birth to a huge opinion data available online in the form of on-line review site, twitter, facebook and personal blogs etc. This paper focuses on review of Opinion mining and sentiment analysis as it is the process of examining the text (opinion or review) about a topic written in a natural language and classify them as positive, negative or neutral based on the humans sentiments involved in it. In this paper we have reviewed papers of last ten years to bring the research done in the field of sentiment analysis at a common platform. It includes sentiment analysis tools, levels of sentiment analysis, its challenges and issues thus it will be very useful for the new researchers to have all information at a glance.", "title": "" }, { "docid": "3476f91f068102ccf35c3855102f4d1b", "text": "Verification and validation (V&V) are the primary means to assess accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence application areas, such as, nuclear reactor safety, underground storage of nuclear waste, and safety of nuclear weapons. Although the terminology is not uniform across engineering disciplines, code verification deals with the assessment of the reliability of the software coding and solution verification deals with the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. Some fields, such as nuclear reactor safety, place little emphasis on code verification benchmarks and great emphasis on validation benchmarks that are closely related to actual reactors operating near safety-critical conditions. This paper proposes recommendations for the optimum design and use of code verification benchmarks based on classical analytical solutions, manufactured solutions, and highly accurate numerical solutions. It is believed that these benchmarks will prove useful to both in-house developed codes, as well as commercially licensed codes. In addition, this paper proposes recommendations for the design and use of validation benchmarks with emphasis on careful design of building-block experiments, estimation of experiment measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that predictive capability of a computational model is built on both the measurement of achievement in V&V, as well as how closely related are the V&V benchmarks to the actual application of interest, e.g., the magnitude of extrapolation beyond a validation benchmark to a complex engineering system of interest.", "title": "" }, { "docid": "14b616d5737369e3eecc7da82e97f0e8", "text": "This paper presents a novel algorithm which uses compact hash bits to greatly improve the efficiency of non-linear kernel SVM in very large scale visual classification problems.
Our key idea is to represent each sample with compact hash bits, over which an inner product is defined to serve as the surrogate of the original nonlinear kernels. Then the problem of solving the nonlinear SVM can be transformed into solving a linear SVM over the hash bits. The proposed Hash-SVM enjoys dramatic storage cost reduction owing to the compact binary representation, as well as a (sub-)linear training complexity via linear SVM. As a critical component of Hash-SVM, we propose a novel hashing scheme for arbitrary non-linear kernels via random subspace projection in reproducing kernel Hilbert space. Our comprehensive analysis reveals a well behaved theoretic bound of the deviation between the proposed hashing-based kernel approximation and the original kernel function. We also derive requirements on the hash bits for achieving a satisfactory accuracy level. Several experiments on large-scale visual classification benchmarks are conducted, including one with over 1 million images. The results show that Hash-SVM greatly reduces the computational complexity (more than ten times faster in many cases) while keeping comparable accuracies.", "title": "" }, { "docid": "1ed9f257129a45388fcf976b87e37364", "text": "Mobile cloud computing is an extension of cloud computing that allow the users to access the cloud service via their mobile devices. Although mobile cloud computing is convenient and easy to use, the security challenges are increasing significantly. One of the major issues is unauthorized access. Identity Management enables to tackle this issue by protecting the identity of users and controlling access to resources. Although there are several IDM frameworks in place, they are vulnerable to attacks like timing attacks in OAuth, malicious code attack in OpenID and huge amount of information leakage when user’s identity is compromised in Single Sign-On. Our proposed framework implicitly authenticates a user based on user’s typing behavior. The authentication information is encrypted into homomorphic signature before being sent to IDM server and tokens are used to authorize users to access the cloud resources. Advantages of our proposed framework are: user’s identity protection and prevention from unauthorized access.", "title": "" }, { "docid": "b1272039194d07ff9b7568b7f295fbfb", "text": "Protein catalysis requires the atomic-level orchestration of side chains, substrates and cofactors, and yet the ability to design a small-molecule-binding protein entirely from first principles with a precisely predetermined structure has not been demonstrated. Here we report the design of a novel protein, PS1, that binds a highly electron-deficient non-natural porphyrin at temperatures up to 100 °C. The high-resolution structure of holo-PS1 is in sub-Å agreement with the design. The structure of apo-PS1 retains the remote core packing of the holoprotein, with a flexible binding region that is predisposed to ligand binding with the desired geometry. Our results illustrate the unification of core packing and binding-site definition as a central principle of ligand-binding protein design.", "title": "" }, { "docid": "ea87229e46fd049930c75a9d5187fd6c", "text": "Automatic construction of user-desired topical hierarchies over large volumes of text data is a highly desirable but challenging task. This study proposes to give users freedom to construct topical hierarchies via interactive operations such as expanding a branch and merging several branches. 
Existing hierarchical topic modeling techniques are inadequate for this purpose because (1) they cannot consistently preserve the topics when the hierarchy structure is modified; and (2) the slow inference prevents swift response to user requests. In this study, we propose a novel method, called STROD, that allows efficient and consistent modification of topic hierarchies, based on a recursive generative model and a scalable tensor decomposition inference algorithm with theoretical performance guarantee. Empirical evaluation shows that STROD reduces the runtime of construction by several orders of magnitude, while generating consistent and quality hierarchies.", "title": "" } ]
scidocsrr
305f440bdbf13e2791c5426ff4070efd
THE PSYCHOLOGY OF SELF-DEFENSE: SELF-AFFIRMATION THEORY
[ { "docid": "f5bc721d2b63912307c4ad04fb78dd2c", "text": "When women perform math, unlike men, they risk being judged by the negative stereotype that women have weaker math ability. We call this predicament st reotype threat and hypothesize that the apprehension it causes may disrupt women’s math performance. In Study 1 we demonstrated that the pattern observed in the literature that women underperform on difficult (but not easy) math tests was observed among a highly selected sample of men and women. In Study 2 we demonstrated that this difference in performance could be eliminated when we lowered stereotype threat by describing the test as not producing gender differences. However, when the test was described as producing gender differences and stereotype threat was high, women performed substantially worse than equally qualified men did. A third experiment replicated this finding with a less highly selected population and explored the mediation of the effect. The implication that stereotype threat may underlie gender differences in advanced math performance, even", "title": "" } ]
[ { "docid": "fe2b8921623f3bcf7b8789853b45e912", "text": "OBJECTIVE\nTo establish the psychosexual outcome of gender-dysphoric children at 16 years or older and to examine childhood characteristics related to psychosexual outcome.\n\n\nMETHOD\nWe studied 77 children who had been referred in childhood to our clinic because of gender dysphoria (59 boys, 18 girls; mean age 8.4 years, age range 5-12 years). In childhood, we measured the children's cross-gender identification and discomfort with their own sex and gender roles. At follow-up 10.4 +/- 3.4 years later, 54 children (mean age 18.9 years, age range 16-28 years) agreed to participate. In this group, we assessed gender dysphoria and sexual orientation.\n\n\nRESULTS\nAt follow-up, 30% of the 77 participants (19 boys and 4 girls) did not respond to our recruiting letter or were not traceable; 27% (12 boys and 9 girls) were still gender dysphoric (persistence group), and 43% (desistance group: 28 boys and 5 girls) were no longer gender dysphoric. Both boys and girls in the persistence group were more extremely cross-gendered in behavior and feelings and were more likely to fulfill gender identity disorder (GID) criteria in childhood than the children in the other two groups. At follow-up, nearly all male and female participants in the persistence group reported having a homosexual or bisexual sexual orientation. In the desistance group, all of the girls and half of the boys reported having a heterosexual orientation. The other half of the boys in the desistance group had a homosexual or bisexual sexual orientation.\n\n\nCONCLUSIONS\nMost children with gender dysphoria will not remain gender dysphoric after puberty. Children with persistent GID are characterized by more extreme gender dysphoria in childhood than children with desisting gender dysphoria. With regard to sexual orientation, the most likely outcome of childhood GID is homosexuality or bisexuality.", "title": "" }, { "docid": "1274e55cc173f64fcc9a191d859c2e41", "text": "We present an O*(n3) randomized algorithm for estimating the volume of a well-rounded convex body given by a membership oracle, improving on the previous best complexity of O*(n4). The new algorithmic ingredient is an accelerated cooling schedule where the rate of cooling increases with the temperature. Previously, the known approach for potentially achieving such complexity relied on a positive resolution of the KLS hyperplane conjecture, a central open problem in convex geometry.", "title": "" }, { "docid": "af84229b7237e9f85f2273896a808b83", "text": "Distributed word representation is an efficient method for capturing semantic and syntactic word relations. In this work, we introduce an extension to the continuous bag-of-words model for learning word representations efficiently by using implicit structure information. Instead of relying on a syntactic parser which might be noisy and slow to build, we compute weights representing probabilities of syntactic relations based on the Huffman softmax tree in an efficient heuristic. The constructed “implicit graphs” from these weights show that these weights contain useful implicit structure information. Extensive experiments performed on several word similarity and word analogy tasks show gains compared to the basic continuous bag-of-words model.", "title": "" }, { "docid": "fc09e1c012016c75418ec33dfe5868d5", "text": "Big data is the word used to describe structured and unstructured data. 
The term big data originated with the web search companies, which had to query loosely structured, very large", "title": "" }, { "docid": "2ab4619cd5f7ec48596ce63bd111a23b", "text": "Growing demand for ubiquitous and pervasive computing has triggered a sharp rise in handheld device usage. At the same time, dynamic multimedia data has become accepted as core material that many important applications depend on, despite its intensive computation and resource costs. This paper investigates the suitability and constraints of using handheld devices for such applications. We first analyse the capabilities and limitations of current models of handheld devices and the advanced features offered by next-generation models. We then categorise these applications and discuss the typical requirements of each class. Important issues to be considered include data organisation and management, communication, and input and user interfaces. Finally, we briefly discuss the future outlook and identify remaining areas for research.", "title": "" }, { "docid": "a65930b1f31421bb4222933a36ac93c7", "text": "Personalized nutrition is fast becoming a reality due to a number of technological, scientific, and societal developments that complement and extend current public health nutrition recommendations. Personalized nutrition tailors dietary recommendations to specific biological requirements on the basis of a person's health status and goals. The biology underpinning these recommendations is complex, and thus any recommendations must account for multiple biological processes and subprocesses occurring in various tissues and must be formed with an appreciation for how these processes interact with dietary nutrients and environmental factors. Therefore, a systems biology-based approach that considers the most relevant interacting biological mechanisms is necessary to formulate the best recommendations to help people meet their wellness goals. Here, the concept of \"systems flexibility\" is introduced to personalized nutrition biology. Systems flexibility allows the real-time evaluation of metabolism and other processes that maintain homeostasis following an environmental challenge, thereby enabling the formulation of personalized recommendations. Examples in the area of macro- and micronutrients are reviewed. Genetic variations and performance goals are integrated into this systems approach to provide a strategy for a balanced evaluation and an introduction to personalized nutrition. Finally, modeling approaches that combine personalized diagnosis and nutritional intervention into practice are reviewed.", "title": "" }, { "docid": "a241291333a570b7ca09e6ae49467ebf", "text": "This article aims to contribute to understanding how to use the Balanced Scorecard (BSC) effectively. The BSC lends itself to various interpretations. This article explores how the way in which the BSC is used affects performance. Empirical evidence from Dutch firms suggests BSC use will not automatically improve company performance, but that the manner of its use matters: BSC use that complements corporate strategy positively influences company performance, while BSC use that is not related to the strategy may decrease it. We discuss the findings and offer managers guidance for optimal use of the BSC. © 2004 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "1592dc2c81d9d6b9c58cc1a5b530c923", "text": "We propose a cloudlet network architecture to bring the computing resources from the centralized cloud to the edge.
Thus, each User Equipment (UE) can communicate with its Avatar, a software clone located in a cloudlet, and thereby lower the end-to-end (E2E) delay. However, UEs move over time, and so the low E2E delay may not be maintained if UEs' Avatars stay in their original cloudlets. Thus, live Avatar migration (i.e., migrating a UE's Avatar to a suitable cloudlet based on the UE's location) is enabled to maintain the low E2E delay between each UE and its Avatar. On the other hand, the migration itself incurs extra overheads on the Avatar's resources, which compromise the performance of applications running in the Avatar. By considering the gain (i.e., the E2E delay reduction) and the cost (i.e., the migration overheads) of live Avatar migration, we propose a PRofIt Maximization Avatar pLacement (PRIMAL) strategy for the cloudlet network in order to optimize the tradeoff between the migration gain and the migration cost by selectively migrating the Avatars to their optimal locations. Simulation results demonstrate that, as compared to the other two strategies (i.e., Follow Me Avatar and Static), PRIMAL maximizes the profit in terms of maintaining a low average E2E delay between UEs and their Avatars while minimizing the migration cost.", "title": "" }, { "docid": "1fcc1acdd4b7b170693af3d7da40f7f4", "text": "The intended purpose of this monograph is to provide a general overview of allergy diagnostics for health care professionals who care for patients with allergic disease. For a more comprehensive review of allergy diagnostic testing, readers can refer to the Allergy Diagnostic Practice Parameters. A key message is that a positive allergy test result (skin or blood) indicates only the presence of allergen-specific IgE (called sensitization). It does not necessarily mean clinical allergy (ie, allergic symptoms with exposure). It is important for this reason that the allergy evaluation be based on the patient's history and directed by a health care professional with sufficient understanding of allergy diagnostic testing to use the information obtained from his/her evaluation of the patient to determine (1) what allergy diagnostic tests to order, (2) how to interpret the allergy diagnostic test results, and (3) how to use the information obtained from the allergy evaluation to develop an appropriate therapeutic treatment plan.", "title": "" }, { "docid": "847a64b0b5f2b8f3387c260bca8bb9c0", "text": "Pain-related emotions are a major barrier to effective self-rehabilitation in chronic pain. Automated coaching systems capable of detecting these emotions are a potential solution. This paper lays the foundation for the development of such systems by making three contributions. First, through literature reviews, an overview of how pain is expressed in chronic pain and the motivation for detecting it in physical rehabilitation is provided. Second, a fully labelled multimodal dataset (named 'EmoPain') containing high-resolution multiple-view face videos, head-mounted and room audio signals, full-body 3D motion capture and electromyographic signals from back muscles is supplied. Natural unconstrained pain-related facial expressions and body movement behaviours were elicited from people with chronic pain carrying out physical exercises. Both instructed and non-instructed exercises were considered to reflect traditional scenarios of physiotherapist-directed therapy and home-based self-directed therapy.
Two sets of labels were assigned: the level of pain from facial expressions, annotated by eight raters, and the occurrence of six pain-related body behaviours, segmented by four experts. Third, through exploratory experiments grounded in the data, the factors and challenges in the automated recognition of such expressions and behaviour are described; the paper concludes by discussing potential avenues in the context of these findings, also highlighting differences for the two exercise scenarios addressed.", "title": "" }, { "docid": "1f1fd7217ed5bae04f9ac6f8ccc8c23f", "text": "Relating the brain's structural connectivity (SC) to its functional connectivity (FC) is a fundamental goal in neuroscience because it is capable of aiding our understanding of how the relatively fixed SC architecture underlies human cognition and diverse behaviors. With the aid of current noninvasive imaging technologies (e.g., structural MRI, diffusion MRI, and functional MRI) and graph theory methods, researchers have modeled the human brain as a complex network of interacting neuronal elements and characterized the underlying structural and functional connectivity patterns that support diverse cognitive functions. Specifically, research has demonstrated a tight SC-FC coupling, not only in interregional connectivity strength but also in network topologic organizations, such as community, rich-club, and motifs. Moreover, this SC-FC coupling exhibits significant changes in normal development and neuropsychiatric disorders, such as schizophrenia and epilepsy. This review summarizes recent progress regarding the SC-FC relationship of the human brain and emphasizes the important role of large-scale brain networks in the understanding of structural-functional associations. Future research directions related to this topic are also proposed.", "title": "" }, { "docid": "a8695230b065ae2e4c5308dfe4f8c10e", "text": "The paper describes a solution for the Yandex Personalized Web Search Challenge. The goal of the challenge is to rerank the top ten web search query results to bring the most personally relevant results to the top, thereby improving the search quality. The paper focuses on feature engineering for learning to rank in web search, including a novel pair-wise feature and short- and long-term personal navigation features. The paper demonstrates that point-wise logistic regression can achieve state-of-the-art performance in terms of normalized discounted cumulative gain while retaining the capability to scale up.", "title": "" }, { "docid": "59a7ed26693b41d6b07f843d0cf149cb", "text": "Nowadays, business is growing at a very rapid pace and a lot of information is generated. The more information we have, based on internal experiences or from external sources, the better our decisions would be. Business executives are faced with the same dilemmas when they make decisions. They need the best tools available to help them. Decision support systems help managers take better and quicker decisions by using historical and current data. By combining massive amounts of data with sophisticated analytical models and tools, and by making the system easy to use, they provide a much better source of information to use in the decision-making process. Health care is also one of the domains that benefits greatly from the advent and progress of data mining. Data mining in medicine can resolve this problem and can provide promising results.
It plays a vital role in extracting useful knowledge and making scientific decisions for the diagnosis and treatment of disease. Treatment records of millions of patients have been collected, and many tools and algorithms are applied to understand and analyze the data. Heart failure is a common disease that is difficult to diagnose. To aid physicians in diagnosing heart failure, a decision support system has been proposed. Classification-based methods in health care are used to determine, based on certain parameters, whether or not a patient has a given disease. The purpose is to explore the aspects of Clinical Decision Support Systems and to identify the optimal methodology that can be used in Clinical Decision Support Systems to provide the best solutions and diagnoses for medical problems.", "title": "" }, { "docid": "af3cc5fc9cf58048f9805923b45305d6", "text": "Spell checkers are one of the most widely recognized and heavily employed features of word processing applications in existence today. This remains true despite the many problems inherent in the spell checking methods employed by all modern spell checkers. In this paper we present a proof-of-concept spell checking system that is able to intrinsically avoid many of these problems. In particular, it is the actual corrections performed by the typist that provide the basis for error detection. These corrections are used to train a feed-forward neural network so that if the same error is remade, the network can flag the offending word as a possible error. Since these corrections are the observations of a single typist’s behavior, a spell checker employing this system is essentially specific to the typist that made the corrections. A discussion of the benefits and deficits of the system is presented, with the conclusion that the system is most effective as a supplement to current spell checking methods.", "title": "" }, { "docid": "8539b0107b37cb97b11804b0adafeae3", "text": "The exchange of independent information between two nodes in a wireless network can be viewed as two unicast sessions, corresponding to information transfer along one direction and the opposite direction. In this paper we show that such information exchange can be performed efficiently by exploiting network coding and the physical-layer broadcast property offered by the wireless medium, which improves upon conventional solutions that separate the processing of the two unicast sessions. We propose a distributed scheme that obviates the need for synchronization and is robust to random packet loss, delay, and so on. The scheme is simple and incurs minor overhead.", "title": "" }, { "docid": "5caedb986844afcd40b5deb9ca8ba116", "text": "We present here because it will be so easy for you to access the internet service. As in this new era, much technology is sophistically offered by connecting to the internet. No any problems to face, just for this day, you can really keep in mind that the book is the best book for you. We offer the best here to read. After deciding how your feeling will be, you can enjoy to visit the link and get the book.", "title": "" }, { "docid": "4213993be9e2cf6d3470c59db20ea091", "text": "The virtual instrument is at the core of instrument technology nowadays. This article details the implementation process of the virtual oscilloscope. It is designed with the LabVIEW graphical programming language.
The virtual oscilloscope provides waveform display, channel selection, data collection, data reading, writing and storage, spectrum analysis, printing, and waveform parameter measurement. It also has a friendly user interface and can be operated conveniently.", "title": "" }, { "docid": "3be5e04dab978b55064f0621839b4003", "text": "These lecture notes introduce some basic concepts from Shannon’s information theory, such as (conditional) Shannon entropy, mutual information, and Rényi entropy, as well as a number of basic results involving these notions. Subsequently, well-known bounds on perfectly secure encryption, source coding (i.e. data compression), and reliable communication over unreliable channels are discussed. We also cover and prove the elegant privacy amplification theorem. This provides a means to mod out the adversary’s partial information and to distill a highly secret key. It is a key result in theoretical cryptography, and a primary starting point for the very active subarea of unconditional security.", "title": "" }, { "docid": "70bed43cdfd50586e803bf1a9c8b3c0a", "text": "We design a way to model apps as vectors, inspired by the recent deep learning approach to vectorization of words called word2vec. Our method relies on how users use apps. In particular, we visualize the time series of how each user uses mobile apps as a “document”, and apply the recent word2vec modeling on these documents, but the novelty is that the training context is carefully weighted by the time interval between the usage of successive apps. This gives us the app2vec vectorization of apps. We apply this to industrial-scale data from Yahoo! and (a) show examples that app2vec captures semantic relationships between apps, much as word2vec does with words, (b) show, using Yahoo!'s extensive human evaluation system, that 82% of the retrieved top similar apps are semantically relevant, achieving a 37% lift over the bag-of-words approach and a 140% lift over the matrix factorization approach to vectorizing apps, and (c) finally, we use app2vec to predict app-install conversion and improve ad conversion prediction accuracy by almost 5%. This is the first industry-scale design, training and use of app vectorization.", "title": "" }, { "docid": "36a494bbe8d93d664fa2e30761ff79c4", "text": "Text clustering is used in a variety of applications such as content-based recommendation, categorization, summarization, information retrieval and automatic topic extraction. Since most pairs of documents usually share only a small percentage of words, the dataset representation tends to become very sparse, hence the need for a similarity metric capable of partial matching over a set of features. The technique known as Co-Clustering is capable of finding several clusters inside a dataset, with each cluster composed of just a subset of the object and feature sets. In word-document data this can be useful to identify the clusters of documents pertaining to the same topic, even though they share just a small fraction of words. In this paper a scalable co-clustering algorithm is proposed using the Locality-sensitive hashing technique in order to find co-clusters of documents. The proposed algorithm is tested against other co-clustering and traditional algorithms on well-known datasets. The results show that this algorithm is capable of finding clusters more accurately than other approaches while maintaining a linear complexity.", "title": "" } ]
scidocsrr